Consistently Snake- and Camel-Casing

I am working on a couple of projects that use Ruby on the back end and JavaScript on the front end. The Ruby convention for variables is snake_case, while JavaScript variables are camelCased. This causes friction when we pass things between the front end and the back end.

An ad-hoc solution might leave the Ruby code handling camel-cased keys when reading in JSON or when sending a response. Alternatively, the JavaScript code ends up with underscores all over the place, which is also undesirable and clutters up our front-end code.

Overall, this discrepancy makes it harder to derive the right variable name each time on both the server and the client. Languages have conventions primarily to make it easier to remember what to call things. However, this breaks down when there are two or more languages in play that have different conventions.

A solution that I implemented that I’m pretty happy with so far is to consistently snake- and camel-case on the server. This can be done with two steps. First, we create a middleware that intercepts requests with a JSON body and converts the keys to snake-case. Then, whenever we send a JSON response, we convert the response to camel-case for the client to consume.

There are a few advantages to doing it this way. We will have consistent snake-casing on the back end and consistent camel-casing on the front end. Our linters will have fewer false positives. In addition, our tests are also generally easier to write because they can use the correct case (except for server-side controller tests, since these require camel-case input.)

For the specific project that I’m working on, I used a pair of gems written by the same author. The plissken gem turns camel-cased hash keys into their snake-case equivalent, and even works recursively for arrays of hashes. The awrence gem does the reverse, going from snake-case to camel-case. So if we were using Sinatra and ActiveSupport, a middleware might look like:

...
use Rack::Parser, parsers: {
  'application/json' => -> (data) do
    JSON.parse(data).to_snake_keys.with_indifferent_access
  end
}
...

This loads the JSON body into the params hash, which we can then access with symbols or strings.
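To make the conversion concrete, here is a hand-rolled sketch of what plissken's to_snake_keys does (a simplified illustration, not the gem's actual implementation; it stringifies keys and ignores edge cases like consecutive capitals):

```ruby
# Recursively convert camelCase hash keys to snake_case,
# descending into nested hashes and arrays of hashes.
def snake_keys(value)
  case value
  when Hash
    value.each_with_object({}) do |(k, v), out|
      snake = k.to_s.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
      out[snake] = snake_keys(v)
    end
  when Array
    value.map { |v| snake_keys(v) }
  else
    value
  end
end

snake_keys("userName" => "ada", "orderItems" => [{ "itemId" => 1 }])
# => { "user_name" => "ada", "order_items" => [{ "item_id" => 1 }] }
```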

Our JSON responses can be automatically camel-cased with the following middleware:

class CamelizeJsonResponseMiddleware < Sinatra::Base
  after do
    pass unless content_type == 'application/json'
    if response.body.length > 0
      body response.body.to_camelback_keys.to_json
    end
  end
end
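For illustration, the reverse conversion that awrence's to_camelback_keys performs can be sketched in the same way (again, a simplified illustration rather than the gem's actual implementation):

```ruby
# Recursively convert snake_case hash keys to camelCase,
# descending into nested hashes and arrays of hashes.
def camelback_keys(value)
  case value
  when Hash
    value.each_with_object({}) do |(k, v), out|
      camel = k.to_s.gsub(/_([a-z\d])/) { Regexp.last_match(1).upcase }
      out[camel] = camelback_keys(v)
    end
  when Array
    value.map { |v| camelback_keys(v) }
  else
    value
  end
end

camelback_keys("user_name" => "ada", "order_items" => [{ "item_id" => 1 }])
# => { "userName" => "ada", "orderItems" => [{ "itemId" => 1 }] }
```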

The nice thing about using middlewares is that we don’t have to remember to convert for each request. Our application is more consistent as a result.

One potential downside is that the gems might not work as expected, or that particularly complicated input or output might violate our expectations. The other is that someone new coming onto the project might not understand the middlewares and could be quite confused until they figured out what was going on. I think documentation and logging would help address most of the issues here.

I’m pretty happy with how this worked out, and think that it makes the code a lot cleaner. Hope this helps you on your projects!

The Cross-Country RV Trip I Didn't Take

At HealthPro, we recently started having some company culture discussions. The general idea is that a company will always have some culture, and that you can influence what it becomes by being aware of it and proactively discussing it. After a general brainstorming session together, we started discussing things in Slack (“Work As If Remote”, right?)

The value that we were discussing was “Try new things / Don’t be afraid to make mistakes”. One of Kyle’s questions that seemed to get a lot of response was “What’s the most spectacular failure you’ve been a part of? What did you learn from it?”

I liked my response, so thought I would share it with everyone now.

My Story

One thing that I did in 2011 that was a pretty big failure in my mind was unsuccessfully trying to go on a cross-country RV trip.

I had some money saved up, and the startup I was working for had folded (which is probably a failure story itself), and I thought it would be good to get away from everything, travel out west, and try to play a lot of Ultimate. I thought that I would need a small RV to make living work (and could maybe code or something in there), so I traveled several hours to Kentucky based on a Craigslist ad to buy a 22-foot 1987 Winnebago Minnie Winnie. My (now) wife drove me there and followed me back in one very long day.

When driving back, we stopped at a gas station, and the RV wouldn’t start! We waited a bit and it started up again, but I was loath to stop it again. When we got home after about ten hours of driving round-trip in the summer, I was pretty exhausted.

I am not mechanically inclined, so considering driving across the country in a 25-year-old RV was not really that exciting. I kind of freaked out and sat in the basement for a day or two. I signed up for a conference in Colorado that I was going to drive to that week, but didn’t end up going and so lost out on the money that I put into it.

The RV sat in the driveway through the winter. After nine months I put it up for sale.

The woman who ended up buying it had the foresight to have a mechanic appraise it, which seems like a smart thing to do. They said it had some roof damage, so I ended up selling it to her for about $2000 less than what I bought it for. The mechanic also said that there was essentially no oil in the thing, so it was lucky that I didn't burn up the engine on the several-hour drive from Kentucky. I was glad to be finally rid of it, but it was a fairly expensive mistake.

Obviously in the grand scheme of things, it was not that big of a deal. Instead of going on the trip I used the rest of the savings to start some independent work, which launched my consulting business. If that is the worst thing that happens to me, then I would consider myself very fortunate.

Lessons Learned

First, get vehicles checked out if you aren’t sure about them.

I realized that there were easier and cheaper ways to make the trip that I wanted to take. I could have taken my pretty well-functioning car and bought a tent and camped out. The gas certainly would have been cheaper (30 MPG vs 10 MPG.) Plus, I would have saved the capital outlay and potentially loan interest of purchasing another vehicle.

Do the simplest thing that could possibly work. Instead of making the big trip first, I could have made smaller trips to figure out whether I liked it, knowing I could come back easily if there were any complications. While having the conference as a deadline pushed me to action, I might have made worse decisions because I tried to overcomplicate things early.

Last, play to your strengths. Buying a super-old RV that I didn’t know much about and that I would need to maintain was not in my wheelhouse.

Work As If Remote

At Haven, one of the unwritten values we had was “work as if remote”. In this post I’ll explain what this means and why it is important.

What does it mean?

“Work as if remote” means we always pretend that there are people working remotely, and behave accordingly. Even if everyone on the team is in the same room, or at the same meeting, we operate like there are people that are across the country. We do this by documenting:

  • the plans that we have
  • the decisions we make
  • the things we do
  • the things we learn
  • meetings or conversations we have

The tools

We implemented this at Haven by using Slack for most communication, and Trello for capturing story-specific details in line with the cards that they were related to.

Also, MeetingHero (now the inferiorly named WorkLife) allowed us to record meetings in a collaborative way, although a shared Google Drive document could achieve the same goal.

Benefits

There are usually people working remotely, even if you don’t think they are. First, there may actually be remote people that you have just forgotten about. :) We had a designer working in San Francisco, and while he didn’t chime in often, writing as much as we could likely gave him more context for designs.

You might think that everyone who cares about a given subject is in the current room, but often there are other people that would benefit from having conversations written down.

Working as if remote allows us to bring new people up to speed more quickly, because we document what we are doing and how we do it. Asking a question in a shared channel enables anyone to answer it without interrupting everyone. Everyone can search back through history for the discussion and resolution of problems.

Writing out what we are doing forces our thinking to be sharper and our decisions more explicit. We get a chance to look back at the decisions we make along the way and introspect when things go well or go poorly. We make it clearer what we are planning on doing and can hold ourselves accountable. Coworkers understand what we do on a daily basis and where they might be able to help.

In today’s software development environment, being able to work remotely some of the time is more and more common. I doubt that I will consider future work that isn’t partially remote, and there are or will likely be more people like me. Most people expect to be able to run errands or take care of their kids or have a more flexible schedule, but our communication patterns need to change if we are to be successful and have this be a possibility. To this end, I would argue that working as if remote is one of the foundations of a healthy culture around taking vacations and traveling. In my opinion, it should not matter if I am across the street or across the country if we are getting work done effectively as a team.

Tradeoffs

Communicating in this way may seem like a lot of extra work. In reality, it doesn't take much more time than having the conversations that we are already having. Also, it can actually save us time: when I try to remember something I did yesterday, having written it down saves time and effort.

Four people getting together in a room for two hours is an expensive thing. We have an obligation to make sure that meeting time is well spent and that we are clear on what comes out of the meeting. By typing up good notes, we give people who weren't in the meeting the benefit of being in the meeting without needing to devote the entire time to being there.

Good practices

Be asynchronous

From code review to meetings to doing standups, there are many activities that can be made asynchronous or location-independent. You may start a process that works synchronously (perhaps a recurring meeting), but once the team understands the parameters, consider how it can be distributed over time and location.

Overcommunicate

This is probably a good principle in general, but write more and about more topics than seems necessary. If you feel like someone might say, “TMI (too much information)!”, then you are probably headed in the right direction.

Record synchronous communication

If you have a useful conversation with someone, post a summary so that others can learn from it. Also, this helps document what you talked about to ensure that you actually heard it right.

Embrace the firehose

Posting everything that happens can be a little overwhelming. Personally, I'd rather have more information than less, so I think this is worth it. But when everything is a priority, nothing is. It is useful to mark things as "FYI" or "important" so that others understand the priority and can effectively filter the firehose of information. You also need to set up your channels and policies so that you stay responsive without getting overwhelmed.

Don’t be afraid to sync up

Even when remote, be willing to have a Skype / Hangout to sync up. Synchronous communication allows you to hash things out much more quickly. Then, of course, write down what you talked about and the main decisions or clarifications made. :)

Conclusion

I wanted to share this because I think it was a really useful philosophy that we had. I will definitely be trying to do things along these lines going forward.

Debugging An Issue With Should.js

Quick post today about something that I was debugging that I thought might help someone out.

We were using should.js for Mocha assertions. In some of the tests, I was suddenly getting warnings like:

WARN Strict version of eql return different result for this comparison
WARN it means that e.g { a: 10 } is equal to { a: "10" }, make sure it is expected
WARN To disable any warnings add should.warn = false
WARN If you think that is not right, raise issue on github https://github.com/shouldjs/should.js/issues

I tried adding should.warn = false, but that did not seem to have an effect. Plus, this code would have been required in each file that had the problem, so it is an unsatisfactory solution.

The solution is to change things like the following:

response.status.should.eql('404');

in the tests to:

response.status.should.equal(404);

Basically, should.js tries to ensure that we are doing what we expect when matching equality. Since the types don't exactly match, it issues a warning so that we take a closer look. This behavior must have changed recently, since the old assertions previously seemed to work without warnings. Happily, the new way of doing this is more correct as a side effect.

Coffeebot

I wanted to be alerted via Slack when the coffee was finished brewing so I could be sure that I would get coffee. Also, I wanted to work on a small hardware project to have some fun. I had a few hardware pieces from RobotsConf 2014, so figured I’d try getting something working that would fix my coffee notification problem.

The end result is a piece of hardware that sends a Slack message when the coffee has started brewing, and sends another one when the coffee is probably done:

What the bot looks like in action

Here is what the hardware ended up looking like in the end:

The hardware

I’m pretty happy with the process and the result, and wanted to share how I thought about it.

How I went about making it

I figured that I would need some of the following hardware capabilities to make this happen:

  • microcontroller for programming the logic
  • ability to hit the internet (possibly with a wireless adapter)
  • built-in breadboard to help wire this up as a prototype

One of the boards I had lying around was a Particle Core (formerly Spark Core), and after understanding what it did, it seemed to fit the bill. It has a built-in breadboard and, even better, an onboard Wi-Fi module. You can set it up to talk with your wireless network through the command line or with a phone app, which is kind of neat. It also has an interesting multicolor LED that signals the current system and networking state.

Early steps

My first step was just to get the thing blinking an LED. I had some experience with Arduino programming, and the Particle Core is programmed in a similar way.

I downloaded some sample code from somewhere, and got a basic LED blink working. The Particle Core has some integrated LEDs, so this was fairly straightforward (just write a certain pin HIGH.)

I didn’t want to have to download an IDE or use a web-based tool to compile the firmware. I read through the Particle docs and figured out how to compile the Particle firmware through a command-line interface by installing the particle-cli NPM package. After some setup hurdles it overall worked pretty well.

Networking

Next, I wanted to hit the network, since that was critical. If I couldn’t do that, I couldn’t send anything to Slack.

I initially had some problems connecting to my local network. The instructions did not seem helpful for resolving the issue, and I even tried the phone app to get things connected. I ended up needing to use a 2.4 GHz channel, since the Particle could not talk to a 5 GHz channel. Once that was squared away, I was ready to try to send an actual request.

I pulled in an open-source HTTP library for Particle Core that seemed like it might work. My goal was to hit example.com and print out a response. After some finagling, I was able to log out the response, which meant the network connection was successful.

It took a little while to hook up the hardware button I had so that when I pressed it, a network request would be initiated. I think this was because the button was small, so there was no documentation or even a readable serial number. I had to search around for a bit until I figured out how the switch worked internally.

A minor setback

At this point, I tried integrating directly with the Slack API. The issue that I ran into was that the HttpClient library doesn't support HTTPS. This is an important detail because Slack's API is only available over HTTPS. The issue that I linked to above mentions that a workaround is to post to an intermediate server that you control.

At first, I didn’t really like this idea, but then I thought that I could more easily make changes and deploy when the payload and security configuration were on a server that I could control rather than the hardware. The hardware could remain basic and the server could contain more of the sending logic.

So I spun up a Heroku instance and pointed the Coffeebot at a simple Node server running on it. I tested by creating a private Slack room that I didn’t invite anyone else to, so that people wouldn’t get annoyed by the testing I had to do to get it working.
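The actual server was a simple Node app and isn't shown here, but the shape of the relay is easy to sketch. The following is a hypothetical Ruby version of its core logic (for consistency with the earlier examples); the SLACK_WEBHOOK_URL environment variable and the message wording are my illustrative assumptions, not the real configuration:

```ruby
require "json"
require "net/http"
require "uri"

# Hypothetical webhook URL; the real one would come from Slack's
# incoming-webhook setup and live in the server's environment.
SLACK_WEBHOOK_URL = ENV.fetch("SLACK_WEBHOOK_URL", "https://hooks.slack.com/services/EXAMPLE")

# Build the Slack message for a brewing event ("started" or "done").
def coffee_message(event)
  text = if event == "started"
           "Drippy: the coffee has started brewing!"
         else
           "Drippy: the coffee is probably ready!"
         end
  { username: "Drippy", text: text }
end

# Forward the message to Slack over HTTPS -- the relay handles TLS
# so the Particle Core only ever has to speak plain HTTP to us.
def post_to_slack(event)
  uri = URI(SLACK_WEBHOOK_URL)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.post(uri.path, JSON.generate(coffee_message(event)),
            "Content-Type" => "application/json")
end
```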

I ended up using some emoji for the Coffeebot's image and called it "Drippy". I think this gave it a bit of a fun feeling. :)

Source code

There are two repositories:

My memory is not that strong, so thankfully I documented things pretty well in the spark-coffee repository. Also, having decently atomic commits even when I knew I was going to throw the code away was useful when tracing the evolution of the project. In my opinion, people underrate commits as project documentation and history.

Schematic

I don’t have a shareable schematic, but if there is an easy way to make something like this, I would consider spending a few minutes on it.

Overall impressions

It was easier than I thought to get everything working. I really like having integrated Wi-Fi support; it was nice to not have to deal with the complexity of integrating with a Wi-Fi USB module.

It would be interesting to be able to send commands to Coffeebot from Slack. We could integrate more sensors like temperature or volume to figure out remotely whether we will need to brew more coffee soon or not.