Why I Review Code ASAP

When our development team pushes up new code to be reviewed, I like to review it quickly. Why?

Personal reasons

Reviewing code generally has a low cost of delay for me, especially if I have already been interrupted or have not yet started new work. Reviewing code is a good mental shift after I push up my own features.

I like being able to give feedback on code. I think that it improves the quality and consistency of our project.

I also like seeing what everyone is working on. It gives me better context for when I am working on new features or fixing bugs. I get a really good sense of what everyone likes to do and how they like to do it. Sometimes I realize that a teammate is working in a similar (or the same!) area, and I need to communicate more so that I don’t squash what they are working on.

If I’m deep in development focus or otherwise occupied, I might wait to review. But generally, if I see that there is an open pull request, I will take a look and comment on what I see.

GitHub pull requests of feature branches are our current method of pushing code up for review. A typical review takes me maybe a couple of minutes. Some are longer, and some one-line fixes are very quick.

Reviewing just about every pull request is currently feasible. Our development team is four full-time developers and one half-time developer. At some point this may not scale, but I am willing to at least look at the pull requests that roll through at a high level to get a better understanding of what is happening.

Code review is a skill that improves with practice. It is also a really good way to learn from what your teammates are doing. What takes hours to do, a reviewer might be able to learn from in a short time.

Team reasons

It excites me to get feedback soon after I release something, and I can only assume it is the same for others on my team. If I want to boost someone’s morale, one thing that I can do is take a look at the work that they labored over and give it an honest view. Not letting pull requests stagnate gives us momentum.

There is a relatively high cost of delay for the reviewee. Every hour that we delay reviewing code is one more hour that the code might be getting out of sync with the master branch.

The very best time to respond to review comments and make any desired changes would be right after the code is released. Every hour we delay, the person who did the work loses some degree of context, focus, or energy. So the quicker I can review something, the better work we can do as a team.

One important side-effect is that we get the code that is closest to being released totally finished, so we can pull new items forward. In Lean terminology, we lower our WIP. If I can choose reviewing something that is finished over starting something new, the team can deliver value more quickly.

The Gateway Drug to Continuous Integration

Sometimes I come onto a project and there is no continuous integration or tests. I like working with tests and using a continuous integration setup, especially when working with others. I also think that a project’s quality and speed both increase when we get automated feedback from continuous integration.

Typically a good first step for me is to get some basic tools running. Before I even write any tests, I get some basic sanity checks in place. If I am working in a Ruby environment, for example, I may run some static analyzers and fail the build if they report problems. I might also check that every Ruby file we have at least passes ruby -c (which checks for syntax errors). In JavaScript-land, this might take the form of jshint or another linter. In compiled languages, just ensuring that you can compile your code might be enough.

I ensure that the tools work locally first, fixing any issues and relaxing overzealous rules. Then I make a script that runs whatever tools I want to run. This lets the team run it locally and makes it easy to run on a server. Then I get a basic CI environment running that script.
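
As a minimal sketch, such a script might look like the following (the check commands and file paths are hypothetical; substitute whatever tools your project uses):

    #!/usr/bin/env node
    // check.js -- a minimal sketch of a sanity-check script.
    // Runs each check in order; an uncaught failure exits non-zero,
    // which is exactly what a CI server needs to fail the build.
    var execSync = require('child_process').execSync;

    var checks = [
        'jshint app/ test/' // lint the JavaScript (hypothetical paths)
    ];

    checks.forEach(function(command) {
        console.log('Running: ' + command);
        execSync(command, { stdio: 'inherit' }); // throws on non-zero exit
    });

    console.log('All checks passed.');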

Approaching CI in this manner demonstrates value early. These checks ensure a base level of quality, and they retain value as a smoke test even after real tests are added. If your files have syntax errors, they are unlikely to be right, so I can fail the build early, get faster feedback, and use fewer testing resources. If the build fails at this point, there are likely some process problems or inefficiencies that can be sorted out.

Going small early de-risks setting up a CI server and writing tests. Once you have a basic setup, it is fairly easy to add other checks, including automated tests. Also, the cost of moving to a different CI environment is low, so there is little vendor lock-in.

I like running these kinds of checks before committing or pushing code in case I missed something obvious while writing or rebasing. They are typically pretty fast, so the extra time is not a big deal; while they run, I think about what I want my commit message to be or consider whether I missed something. There are also tools that run checks like these continuously or whenever files change.
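
One way to wire this in automatically is a git pre-commit hook that runs the same script. A minimal sketch, assuming a check script like the hypothetical one above at ./check.js:

    #!/usr/bin/env node
    // .git/hooks/pre-commit (must be executable: chmod +x)
    // Runs the sanity checks before every commit; exiting non-zero
    // aborts the commit.
    var execSync = require('child_process').execSync;

    try {
        execSync('node check.js', { stdio: 'inherit' });
    } catch (e) {
        process.exit(1); // checks failed, abort the commit
    }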

The event that prompted this post was realizing that I had various important JSON files that held either system configuration or managed dependencies. By at least ensuring that they were valid JSON, we prevent a whole class of problems from ever happening.
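
That check can be a few lines. Here is a minimal sketch (the file list is hypothetical) that fails the build on the first invalid file:

    // check-json.js -- a minimal sketch of a JSON validity check.
    // JSON.parse throws on invalid JSON, so the script exits non-zero
    // and fails the build on the first bad file.
    var fs = require('fs');

    var files = ['package.json', 'bower.json', 'config/env.json']; // hypothetical list

    files.forEach(function(file) {
        JSON.parse(fs.readFileSync(file, 'utf8')); // throws on a syntax error
        console.log(file + ' is valid JSON');
    });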

What simple issues could a basic CI setup help you prevent on your project?

Express Route Parameter Ordering

I was trying to use two Express param functions in one route, and one of the param functions needed the other param to be loaded. For example, if I have the route file:

    // route file
    app.route('/api/v1/toolboxes/:toolboxId/tools/:toolId')
    ...
    app.param('toolboxId', controller.loadToolbox);
    ...
    app.param('toolId', controller.loadToolAndEnsureFromToolbox);

And an associated controller:

    exports.loadToolbox = function(req, res, next, id) {
        // load toolbox from Mongoose into req.toolbox
        req.toolbox.maker = req.toolbox.maker || 'Stanley';
        next(); // hand off to the next param handler or middleware
    };

    // findOneAsync is the promisified Mongoose findOne, which resolves to a
    // single document or null (find would resolve to an array)
    exports.loadToolAndEnsureFromToolbox = function(req, res, next, id) {
        Tool.findOneAsync({
            _id: id,
        }).then(function(tool) {
            if (!tool) {
                res.status(404).send("Tool not found");
            } else if (tool.maker !== req.toolbox.maker) {    // <<<<< needs req.toolbox
                res.status(403).send("This tool isn't from the right maker!");
            } else {
                req.tool = tool;
                ...
                next();
            }
        });
        ...
    };

I wasn’t positive that the toolbox would always be loaded by the time I wanted to check it in the tool param function. (This is kind of a strange example, but it resembles a problem that I was trying to solve.)

The tricky thing was that the different param calls were coming from different route files, so I wasn’t sure if this would affect the results either.

My mental model was: params get resolved in some order, then middleware runs, then the controller runs. If anything in the chain sends a response, calls next with an error, or never calls next at all, execution stops there. So another way of loading the params deterministically would be to have one middleware that loads both in a specific order. This seemed undesirable since it was extra code and I already had some working param functions.
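
For reference, that fallback might have looked something like this sketch (loadToolboxById and loadToolById are hypothetical helpers that take an id and a callback rather than hooking into app.param):

    // Fallback sketch: one app.all handler, registered before the route's
    // own handlers, that forces the load order explicitly.
    app.all('/api/v1/toolboxes/:toolboxId/tools/:toolId', function(req, res, next) {
        loadToolboxById(req.params.toolboxId, req, function(err) {
            if (err) { return next(err); }
            // runs strictly after the toolbox load
            loadToolById(req.params.toolId, req, next);
        });
    });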

I tried searching online for a solution, but didn’t see anything that clarified what order the params load in.

I asked about the problem in our internal Slack #dev channel.

Craig wrote back:

I wouldn’t be surprised if it was left to right, or if it was the order of the params being registered, if it acts like other parts of express.

The “other parts of Express” here would be things like middleware registered with router.use(), which runs in registration order.

Experiment

I wanted to experiment to figure out whether the params were deterministic, and if so, what the rules were.

Experimentation showed that app.param(...) callbacks are resolved in the order the parameters appear in the route path, left to right as you read. I figured this out by observing the order in which the callbacks ran and then reversing the parameters in the path. It does not appear to matter in what order the params are declared in your route files (I switched these around as well, with no effect).
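
A stripped-down version of the experiment looks something like this (illustrative logging only):

    // Log when each param callback fires to see the resolution order.
    app.param('toolboxId', function(req, res, next, id) {
        console.log('toolboxId resolved');
        next();
    });

    app.param('toolId', function(req, res, next, id) {
        console.log('toolId resolved');
        next();
    });

    // Hitting this route logs toolboxId first, then toolId. Reversing the
    // path segments reverses the log order, regardless of the order of the
    // app.param declarations above.
    app.route('/api/v1/toolboxes/:toolboxId/tools/:toolId').get(function(req, res) {
        res.send('ok');
    });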

This is nice because it:

1. is deterministic
2. is fairly logical
3. saves me from having to add a middleware or an app.all for a route when this should be handled with param

Testing MEAN Apps With Javascript

On March 18th, I presented “Testing MEAN Apps With Javascript” to the Indy.JS meetup group. The room was packed as we covered the basics of why we do developer testing, and how to test apps that are built with Mongo, Angular, Express, and Node. Most of the code was devoted to showing how to test Mongoose models, Express routes and controllers, and Node functions with Mocha and Sinon. Here is the slide deck (written with reveal.js):

Full size slides
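
To give a flavor of the kind of code the talk covered, here is a sketch of testing an Express-style controller action with Mocha and Sinon (the action is defined inline and is purely illustrative, not code from the slides):

    var assert = require('assert');
    var sinon = require('sinon');

    // A tiny controller action, defined inline so the test is self-contained.
    function showTool(req, res) {
        if (!req.tool) {
            return res.status(404).send('Tool not found');
        }
        res.status(200).send(req.tool);
    }

    describe('showTool', function() {
        it('responds with a 404 when no tool was loaded', function() {
            var req = {};
            var res = { status: sinon.stub().returnsThis(), send: sinon.spy() };

            showTool(req, res);

            assert(res.status.calledWith(404));
            assert(res.send.calledWith('Tool not found'));
        });
    });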

Some supporting comments

Some comments from the slides (in my super-secret speaker deck…):

What is testing?

You already do testing, probably on at least a manual basis.

For the purposes of this talk, testing is automated developer testing. These are tests that you write to verify the code that you write and to move more quickly.

I won’t be talking much about QA-style testing, although you could possibly use the tools that are covered here, especially Protractor.

Why Test?

Verify code works as expected

Document expectations and assumptions

Code as documentation of what you expect to receive.

Ensure code continues to work as expected

Protect the functionality that you intend from package updates, teammates, and future you.

Imagine you are working with a team of twenty: how likely is it that everyone knows how to avoid breaking the app?

Increased feedback

Instead of finding out that you broke something a week from now when you deploy to production, find out in one minute locally before anyone else finds out.

Increased confidence in changes

Ever feel scared to push the deploy button or feel like you are probably breaking something? With solid code coverage, you can feel more confident in making even sweeping changes.

Better Design

Potentially controversial.

Writing tests, especially test-first, forces you to think about how to make your code more testable. Typically this means isolating dependencies, making modules do just one thing, focusing on interfaces between modules, etc.

If nothing else, it will mostly put an end to 100-line functions with ten conditionals.

Development speed

Potentially controversial.

Like touch typing, it is slower at first, and then you get increased speed and accuracy.

I personally think this is true, especially when you take maintenance into account.

Help Debugging

If you have code that is doing something funny, you can write a test that exposes the behavior. This ensures that you can reproduce it, and when the test passes, you know you have fixed the issue. It also prevents regressions.

You can also test things that are hard to set up manually, like workers.
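
As a sketch, a regression test for a hypothetical bug might look like this (the module and behavior are made up):

    var assert = require('assert');
    var slugify = require('../lib/slugify'); // hypothetical module under test

    describe('slugify', function() {
        // This test failed before the fix: 'toolbox 2' came back as 'toolbox'.
        // Keeping it around prevents the regression from sneaking back in.
        it('keeps trailing numbers', function() {
            assert.equal(slugify('toolbox 2'), 'toolbox-2');
        });
    });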

Toward Continuous Deployment

If you are to continuously deploy code, having solid tests is critical.

Automate the manual testing process in order to be able to deploy automatically with confidence.

Know when to stop

When all the reasonable tests pass consistently, you are done. Writing tests helps ensure we don’t gold-plate code; it just has to do what it needs to do today.

Why Not Test? (Challenges in Testing)

Must be valuable

It must be something that is valued by your team and done well.

The ROI of tests needs to be high, or it is not worth it.

Think to yourself, is this test valuable enough to run 1000 times? If it breaks, what does that mean?

Learning curve

Part of the cultural issue is that people are resistant to change, especially when there is a learning curve involved.

Touch typing as example

It might take me a few months of going more slowly to be able to produce higher quality code more quickly.

Culture

Many challenges in testing come down to culture. If you are the only person on your team writing tests, there will be issues. If you are the only person not writing tests, there will be issues.

Intermittent Tests

One of the hardest problems to solve.

Hard because of ownership: who takes ownership of a test that used to pass but now fails because another test leaks state?

You need high reliability. Fortunately, this is easier to achieve with JS test frameworks than with other frameworks I have used.
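
A contrived sketch of the kind of state leak that causes intermittent failures (module-level state shared across tests):

    var assert = require('assert');

    var cache = {}; // shared module-level state standing in for the real thing

    describe('test A', function() {
        it('stores a value', function() {
            cache.maker = 'Stanley';
            assert.equal(cache.maker, 'Stanley');
        });
    });

    describe('test B', function() {
        // Passes only when test A ran first; fails when run alone or when the
        // test order changes -- an intermittent failure with no clear owner.
        it('accidentally depends on test A', function() {
            assert.equal(cache.maker, 'Stanley');
        });
    });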

Test fragility

It is possible and even sometimes easy to write tests that fail when the production code is changed in small ways. Writing robust tests is a skill.

We’re not sure what we are building…

I would say to really try to figure out what you are currently trying to build before trying to build it. If it is not clear, keep asking questions.

At some point between now and when you deliver the code, you will have to figure out what you think it should do; once you figure that out, you know what your tests are. You can always change, add, or remove tests.

Spike and stabilize: write working code first and add tests later if the spike proves viable or useful (or throw away the code after you have written it and try again test-first).

Cannot prove a negative

Your code will never be bug-free. But tests may give you a reasonable level of confidence.

Debugging and Fixing Local Karma PhantomJS Issue

We are using Karma for a client AngularJS project. Technically there is only one Karma test (a stub success test), since we haven’t gotten around to writing client-side tests yet. But it is set up and part of our test build chain. We use Mocha for Node tests (controllers and models) and Protractor for end-to-end testing.

So I was running Karma locally, and all of a sudden I started getting some PhantomJS issues. The results looked something like:

    $ grunt test:karma --debug
    Running "env:test" (env) task
    [D] Task source: myproject/node_modules/grunt-env/tasks/env.js

    Running "karma:unit" (karma) task
    [D] Task source: myproject/node_modules/grunt-karma/tasks/grunt-karma.js
    INFO [karma]: Karma v0.12.23 server started at http://localhost:9876/
    INFO [launcher]: Starting browser PhantomJS
    WARN [launcher]: PhantomJS have not captured in 60000 ms, killing.
    INFO [launcher]: Trying to start PhantomJS again (1/2).
    WARN [launcher]: PhantomJS have not captured in 60000 ms, killing.
    INFO [launcher]: Trying to start PhantomJS again (2/2).
    WARN [launcher]: PhantomJS have not captured in 60000 ms, killing.
    ERROR [launcher]: PhantomJS failed 2 times (timeout). Giving up.
    Warning: Task "karma:unit" failed. Use --force to continue.

    Aborted due to warnings.

It would try again after a bit and continually fail. The continuous integration server and other developers’ machines were working just fine. Usually you hear “works on my machine”; well, in this case, it didn’t work on my machine but worked everywhere else.

Strangely enough, I could access http://localhost:9876 through a browser even without the grunt task running (it said “Not Found” instead of the typical Chrome server-not-running message). So I figured maybe something else was listening on that port. I killed all Node, Grunt, Karma, and PhantomJS-like processes, but it was still not working.

So on my Mac:

    $ lsof -n -i4TCP:9876 | grep LISTEN
    Emacs-10. 30723 anthonypanozzo    9u  IPv4 0x866a47118fd21843      0t0  TCP 127.0.0.1:sd (LISTEN)

Emacs?! What?

I have been dipping my toes into Emacs again after realizing the unparalleled power of org-mode. It turns out the culprit was a plugin I recently added to Emacs called org-trello, which listens on the same port. Specifically, org-trello uses elnode, which listens on that port. So I disabled org-trello for now (I am just trialing it anyway) until I have time to figure out how to change the port that it listens on. The bottom of the org-trello migrations page is promising, but my Emacs-fu is not up to snuff.

This took longer than I expected. I think one of the issues is that Karma does not complain if something else is already listening on its port, so I figured it was a problem with PhantomJS instead.

After this change, Karma was back up and working again. I hope this shows how I think about debugging problems like this.