This April, I had the pleasure of attending the Lone Star PHP conference in Dallas, Texas. It was without a doubt one of the best conferences I have attended: it boasted great speakers, highly informative talks, delicious food and a real sense of community.

Before I get into the many things that I learned at the conference, here is a quick list of some of my personal highlights.

  • Attending a nearby Laravel DFW meetup, featuring Taylor Otwell
  • Getting thrust on stage by PHP internals’ Elizabeth Smith for some PHP Jeopardy
  • Attending a live PHP Roundtable podcast recording hosted by Sammy K. Powers and featuring Anthony Ferrara, Jeremy Mikola, Elizabeth Smith, Chris Tankersley and Magnus Nordlander
  • Getting a hands-on lesson about PGP encryption and keys from Omni Adams and Jeff Carouth
  • Eating lots of Southern Barbeque and Tex Mex
  • Getting to taste the gamut of Omni Adams’ brews
  • Playing the longest ever game of Betrayal at House on the Hill with Chuck Reeves, Jessica Mauerhan, Matthew Turland, Daniel Cousineau and Ben Ramsey
  • Playing numerous lively (and hilarious) games of Fibbage with various after-party attendees.

Now on to the meat of the conference. Here is a cherry-picked sample of the talks that I felt I took the most away from.

HTTP is Dead, Long Live HTTP/2

Ben Ramsey provided an extremely thorough history of HTTP, explaining the logic behind every RFC and addition to the specification. In 1991, HTTP/0.9 was drafted, which only included the GET method. HTTP/1.0 was released with RFC 1945 in 1996, which included most of the major pieces of the protocol, but was missing the Host header. This omission was corrected with RFC 2068 a year later, which introduced HTTP/1.1. Aside from minor changes to the specification (new request methods, response codes, the specification being split into smaller components, etc.), the Internet has been running on the same infrastructure for nearly two decades.

In 2009, Google began experimenting with their own transfer protocol which they named SPDY (pronounced “speedy”), to better address the needs of their many web applications. By 2012, a number of other major websites including Twitter, Facebook and WordPress supported the protocol. In May 2015, the IETF caught up by publishing RFC 7540, which defined HTTP/2. The protocol is a virtual copy of SPDY.

HTTP/2 offers many benefits over its predecessor, including header compression, multiplexed streams (which solves the FIFO problem) and allowing servers to push several assets through the same TCP connection.

Browser support for HTTP/2 is very good, but it is currently only available over TLS. It is fairly trivial to support HTTP/2 with mod_http2, which is available as of Apache 2.4.17. Plank will definitely make use of this new protocol wherever possible.
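As a rough sketch, enabling HTTP/2 on Apache 2.4.17+ amounts to loading mod_http2 and adding a Protocols directive to a TLS virtual host (the hostname and certificate paths below are placeholders):

```apache
# Load the HTTP/2 module (bundled with Apache 2.4.17 and later)
LoadModule http2_module modules/mod_http2.so

<VirtualHost *:443>
    # Placeholder hostname
    ServerName example.com

    # Negotiate HTTP/2 over TLS ("h2"), falling back to HTTP/1.1
    Protocols h2 http/1.1

    SSLEngine on
    # Placeholder certificate paths
    SSLCertificateFile /path/to/cert.pem
    SSLCertificateKeyFile /path/to/key.pem
</VirtualHost>
```

Browsers negotiate h2 via ALPN during the TLS handshake, which is why HTTP/2 in browsers is tied to TLS in practice.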

The Formula to Creating Awesome Docs

Fellow Canadian (and GitHub hottie) Jonathan Reinink opened Saturday morning’s talks with a discussion of a facet of development that is often an afterthought. A self-professed documentation lover, Jonathan explained the merits of good documentation and how to do it properly.

Though many find documentation a chore or a nuisance, it has many merits. First, it serves as branding and marketing. It is effectively the “face” of your API or service. Second, creating documentation is a form of teaching. Helping others learn and understand is always rewarding. Third, documentation is learning. Not only are you informing others, but you are able to formalize your own knowledge. But most of all, documentation is vital to the success of your library, framework, language or service, as no one will use something they don’t understand.

Telling someone to read the code or look at auto-generated API docs is not enough. Proper documentation should be organized to suit multiple needs and explain in both code and plain text how the software is meant to be used. Jonathan proposed organizing documentation according to the following hierarchy of needs:

  1. The Pitch: The front of the documentation site should provide a quick elevator speech that provides the user with everything they need to know about whether or not the software is useful to them. What does it do? What are its major features? Is it stable? What is the license? How will it save time? Is it easy and fun to work with? Any reasons not to use it? How popular and supported is it?
  2. The Example: Next, the documentation should provide a sample of the minimal viable use case. Walk the user through any installation instructions and give them an example of how to use the core feature of the application or service. This could be presented as code samples or screencasts.
  3. The Guides: The meat and potatoes of the documentation are the guides that explain how each individual piece of the software can be used. Every major topic should have its own page, broken down into subsections as needed. Provide a table of contents, which permalinks to each heading and, most of all, provide lots of code samples. Make the whole thing searchable.
  4. The Reference: Once your user has a thorough understanding of your product, you can provide them with the tools to dig deeper themselves. Provide links to change logs and upgrade guides. Provide a link to the API docs. Provide a means for users to contribute.
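One way to map this hierarchy onto an actual documentation site is sketched below; the file names are purely illustrative, not something Jonathan prescribed:

```
docs/
├── index.md            # 1. The Pitch: features, stability, license
├── getting-started.md  # 2. The Example: installation + minimal use case
├── guides/             # 3. The Guides: one searchable page per major topic
│   ├── configuration.md
│   └── advanced-usage.md
└── reference/          # 4. The Reference: changelog, upgrade notes, API docs
    ├── changelog.md
    └── upgrade.md
```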

Over the last few years, a predominant format for laying out documentation has evolved. It involves a header with the project name and version as well as a search input, a sidebar serving as a table of contents and the right column providing the content. This format is used by the League of Extraordinary Packages, Laravel, Vagrant, Stripe, Bourbon, Vue.js, React and many others.

Coping When it All Hits the Fan

One of the most thought-provoking talks of the entire conference was Eryn O’Neil’s discussion on how to deal with inevitable problems in production. Speaking about her personal experiences (with obfuscation for NDA reasons), Eryn recounted how, despite the best laid plans of mice and men, sometimes things just go wrong. Being faced with unexpected complications is something that I have personally had to deal with, so I was interested to see how someone else has handled similar situations, and at much larger scale.

Eryn began by explaining that any problem can be divided into one of two types. The first type is “project problems”, which are the result of mismatched expectations, a lack of communication or (likely) both. These are the types of issues that involve project planning, scoping and/or budget considerations, or outright disputes with clients. The one common denominator of all project problems is that they involve people, and people have complex behaviours, motivations and thought processes, which can be difficult to untangle. Eryn suggested approaching this type of situation in a few steps.

  • Empathy: Try to take a step back in order to understand the other party’s perspective. A useful trick for doing this is the “5 whys” approach of repeatedly asking yourself “why” the situation is as it is, in order to dig beneath the surface.
  • Communication: The most important step is to actually sit down with the other party and discuss the issue.
  • Trust: Build a relationship with the other party. Once both parties trust each other, it is easier to reconcile misunderstandings.
  • Flexibility: Be willing to adapt to the situation.

The second type of problem is the technical variety, which can fail even more spectacularly. When something goes wrong in production, the most important first response is to take a step back, calm down and make a plan. Immediately connecting to the server and cowboy coding while in a panic is likely to cause even more problems down the line. Instead, work out a plan of what needs to be done and how it will be achieved.

A useful tool for dealing with technical problems is to prepare an incident response arsenal before you encounter problems. This arsenal is a template for how to approach particular kinds of problems, detailing procedures such as who is involved, how they can be contacted, how stakeholders will be communicated with, what is the timeline to remedy and how do we approach the code. Another recommendation is to have one person be in charge of communication and another focused on fixing the issues.
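A minimal incident response template along these lines might look something like the following; the roles and timings are invented for illustration, not taken from Eryn’s talk:

```
INCIDENT RESPONSE TEMPLATE (example)
Who is involved:    incident lead (fixes), communicator (updates), on-call engineer
How to reach them:  on-call phone list, team chat channel, escalation path
Stakeholder comms:  communicator posts an update every 30 minutes, even if "no news"
Timeline to remedy: triage within 15 minutes; give an ETA and revise it openly
Approach to code:   hotfix branch only, second pair of eyes before deploying
```

Keeping the communicator and the fixer as separate people, as recommended above, means status updates never interrupt the actual repair work.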

However, the most important preparation you can do is to cultivate a shame-free environment. Make sure it is okay for a member of the team to say “I made a mistake” or “I need help”. Anyone can make a mistake once, but take steps to make sure it doesn’t happen again. Once you have stopped the bleeding, take the time to implement safeguards against a similar situation occurring again, whether that takes the form of code or a checklist of steps to take.