Debugging An Issue With Should.js

A quick post today about something I was debugging that I thought might help someone out.

We were using should.js for Mocha assertions. In some of the tests, I was suddenly getting warnings like:

WARN Strict version of eql return different result for this comparison
WARN it means that e.g { a: 10 } is equal to { a: "10" }, make sure it is expected
WARN To disable any warnings add should.warn = false
WARN If you think that is not right, raise issue on github https://github.com/shouldjs/should.js/issues

I tried adding should.warn = false, but that did not seem to have any effect. Plus, this code would have been required in each file that had the problem, so it would have been an unsatisfactory solution anyway.
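
To make that attempt concrete, here is a minimal sketch of the per-file workaround: a hypothetical Mocha test file that requires should and flips the flag mentioned in the warning before any assertions run.

    // Hypothetical Mocha test file illustrating the per-file workaround.
    // The flag itself comes straight from the warning text above.
    const should = require('should');

    // Intended to silence the "Strict version of eql" warnings in this file.
    should.warn = false;

    describe('loose eql comparison', function () {
      it('treats { a: 10 } and { a: "10" } as equal under non-strict eql', function () {
        // Per the warning, non-strict eql considers these equal.
        ({ a: 10 }).should.eql({ a: '10' });
      });
    });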

Read on →

Coffeebot

I wanted to be alerted via Slack when the coffee was finished brewing so I could be sure that I would get coffee. Also, I wanted to work on a small hardware project to have some fun. I had a few hardware pieces from RobotsConf 2014, so I figured I’d try getting something working that would fix my coffee notification problem.

The end result is a piece of hardware that sends a Slack message when the coffee has started brewing, and sends another one when the coffee is probably done:

What the bot looks like in action

Here is what the hardware looked like in the end:

The hardware

I’m pretty happy with the process and the result, and wanted to share how I thought about it.

How I went about making it

I figured that I would need some of the following hardware capabilities to make this happen:

  • a microcontroller for programming the logic
  • a way to hit the internet (possibly with a wireless adapter)
  • a built-in breadboard to make wiring up a prototype easier

One of the boards I had lying around was a Particle Core (formerly Spark Core), and once I understood what it could do, it seemed to fit the bill. It has a built-in breadboard and, even better, an onboard Wi-Fi module. You can set it up to talk to your wireless network through the command line or with a phone app, which is kind of neat. It also has an interesting multicolor LED that signals the current system and networking state.
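
To give a sense of the Slack half of the project, here is a minimal sketch of the notification call, written as a small Node script rather than the actual firmware that runs on the Core; the webhook URL, the function name, and the messages are placeholders of mine, not details from the project.

    // Minimal sketch (not the real firmware): post a message to a Slack
    // incoming webhook. The URL is a placeholder; requires Node 18+ for fetch.
    const WEBHOOK_URL = 'https://hooks.slack.com/services/T000/B000/XXXX';

    async function notifySlack(text) {
      // Incoming webhooks accept a JSON body with a "text" field.
      await fetch(WEBHOOK_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text: text }),
      });
    }

    // One call when brewing starts, another when the coffee is probably done.
    notifySlack('Coffee has started brewing!');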

Read on →

Setting Up RuboCop on an Existing Rails Project

I recently set up RuboCop on an existing Rails project. I’ll share how I approached it and what could have gone better. Specifically, I’ll help you fit RuboCop to your project’s needs, and not the other way around.

What is RuboCop?

RuboCop is a Ruby linter. It has a bunch of rules about how Ruby should be formatted. It calls these formatting rules cops, and has many built in. You can define your own cops if you want to get crazy. Most of the built-in cops can be configured, and at a minimum can be disabled.

Why use a Ruby linter?

Having a style guide of some sort saves the software team time. A canonical guide to what the source code should look like keeps the team from wasting time on formatting and lets them get on to more productive things. It also makes the code easier to read because everything is in a consistent format.

But having a style guide is not enough. Developers end up either unwittingly violating it or feeling like they are making passive-aggressive comments in code review. With a linter and continuous integration, the project automatically points out style violations. That stops the wasted time and effort and lets you focus on the things that matter. It takes your documentation of what the software should look like and turns it into an executable specification.

Avoiding poor decisions

The built-in cops are based closely on the Ruby style guide. However, those guidelines probably don’t line up with your project’s existing code. Sometimes the cops are overbearing. Sometimes they don’t make much sense. Sometimes they are just too strict.

The first thing to do, which I did a poor job of this time, is to ask the team you are working with which parts of the guide they disagree with. I spent a little too much time thinking on my own and changing things that eventually needed to be reverted.

Read on →

GitHub Pull Request Workflow Labels

While at Haven, I thought we had a pretty good system for tracking the current status of a given pull request. In this post I’ll document some of the labels that we used, what they meant, and how they helped us collaborate.

Labels For A Leaner Workflow

One of the challenges of working with pull requests is that they can sometimes take a long time to get merged. Some of this can be mitigated by keeping pull requests small, and some by reviewing them as soon as possible. However, a lot of time still passes while waiting for someone to review, respond to comments, or even notice that there is an issue or question that needs to be addressed.

Why do we care about reducing the cycle time of pull requests? A pull request is work that is almost finished. If we can finish it, we reduce the overall work inventory in the system. That helps us get business value more quickly and learn what doesn’t work.

Labels can clarify where a pull request stands: I can see the next step and whether I am responsible for it. If something needs review, I review it. If one of my pull requests can be merged or needs changes, I take care of that.

The Labels

I’ll cover the basic labels that I thought were most helpful, and how we thought of them. We didn’t start with all of these; we just built them up over time and revised as necessary.

“work in progress”

“This isn’t ready for review, but I want to make it public.” It might be just to get code off of my machine in case something bad happens to my laptop, or it might be work that I want help with or have questions about. By pushing code up early, we can communicate with actions rather than words. Instead of saying “I’m working on X”, you can let the commits do the talking.

Read on →

How I Did 5580 Pushups In 23 Weeks

My wife and I lived in San Francisco for a year, from the summer of 2013 to the summer of 2014. One of the very best things about living there was the fantastic Ultimate scene. Some of our friends held a weekly track workout on Mondays that we participated in, and I felt that being in better shape made me a better competitor on the field. Some of them were participating in a fitness challenge, and it sounded like fun.

The next year, starting around October, they announced that they were expanding the fitness challenge to include anyone who wanted to join. To join, you put in $10, and most of the proceeds went to the Bay Area Disc Association. About a hundred people signed up, and I was one of them.

The Challenge

The challenge was fairly easy to grasp: every day and every week there were certain fitness requirements to complete, with one off day per week for the daily requirements. If you missed two daily requirements (not done by midnight) or missed the weekly requirement by midnight on Sunday, you were out. All participants got a weekly email on Monday laying out the week’s requirements and explaining any new exercises. Whether you were in or out was based on the honor code (Spirit of the Game, in Ultimate terms). The exercises were all bodyweight, so no special equipment was really needed.

As they say on the website, one week’s challenge might be:

Below is an example of a hypothetical workout routine for a given week with daily exercise requirements of ten push ups and ten squats and a weekly exercise requirement of one hour of non-Ultimate cardio.

  • Monday: ten push ups, ten squats
  • Tuesday: ten push ups, ten squats
  • Wednesday: ten push ups, ten squats
  • Thursday: ten push ups, ten squats, one hour non-Ultimate cardio
  • Friday: ten push ups, ten squats
  • Saturday: OFF DAY
  • Sunday: ten push ups, ten squats

Ramping Up

The challenge started out very simply and could be done in about two minutes per day. There was no weekly requirement at first; really, the challenge at this point was just remembering to do the exercises. A few people were knocked out because they forgot.

Read on →