Setting Up RuboCop on an Existing Rails Project

I recently set up RuboCop on an existing Rails project. I’ll share how I approached it and what could have gone better. Specifically, I’ll help you fit RuboCop to your project’s needs, and not the other way around.

What is RuboCop?

RuboCop is a Ruby linter. It has a bunch of rules about how Ruby should be formatted. It calls these rules cops, and has many built in. You can define your own cops if you want to get crazy. Most of the built-in cops are configurable, and at a minimum can be disabled.

Why use a Ruby linter?

Having a style guide of some sort saves the software team time. It’s nice to have a canonical guide on what the source code should look like. Having a guide saves the team from wasting time formatting and lets them get on to more productive things. It also makes reading the code easier because everything is in a consistent format.

But having a style guide is not enough. Developers end up either unwittingly violating the style guide, or feeling like they are making passive-aggressive comments in code review. With a linter and continuous integration, you can ensure that the project automatically points out style violations. It stops wasting time and effort and lets you focus on the things that matter. It takes your documentation of what the software should look like and turns it into an executable specification.

Avoiding poor decisions

The built-in cops are based closely on the Ruby style guide. However, those guidelines probably don’t line up with the current code your project has. Sometimes the cops are overbearing. Sometimes they don’t make much sense. Sometimes they are just too strict.

The first thing to do (which I did a poor job of this time) is to ask the team you are working with which things in the guide they disagree with. I spent a little too much time thinking on my own and changing things that eventually needed to be reverted.

Another poor decision was when I disagreed with the linter, but instead of listening to my experience and judgment, acquiesced to the tool’s demands. I think that linters should serve the project’s goals, not the other way around. If you find yourself rewriting or restyling swaths of code, consider if you could make the cops less picky or disable them entirely. Hopefully this post will help with understanding the tool’s settings well enough to change them.

The first run

I’d recommend installing RuboCop, running it, and seeing what happens. You will likely get a lot of errors. Some basic questions to answer as you start building your RuboCop configuration (.rubocop.yml):

  • does it at least finish the run without crashing? :)
  • do I have the right Ruby files being linted?
  • do I have the right files excluded?
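To answer the file questions, a minimal starting .rubocop.yml might look like the sketch below. The excluded paths are just examples; adjust them for your project.

```yaml
# Example starting point; the excluded paths below are illustrative.
AllCops:
  Exclude:
    - "db/schema.rb"
    - "vendor/**/*"
    - "bin/*"
```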

Sizing up the suggestions

Now that you have a list of suggestions from RuboCop, it’s time to whittle them down. But finding exactly what you need to do for each cop can be tough. What I would recommend at this point is to enable two RuboCop settings, either on the command-line or in your configuration.

Printing cop names

The first setting I recommend prints the full name of each cop when there is a failure. This helps you learn more about the cops and know the right name for disabling or configuring them.

On the command-line:

$ rubocop --display-cop-names

This turns the output from something like:

lib/foo.rb:42:10: C: Prefer single-quoted strings when you don't need string interpolation or special symbols.

to:

lib/foo.rb:42:10: C: Style/StringLiterals: Prefer single-quoted strings when you don't need string interpolation or special symbols.

I generally prefer the long form of flags for scripts since I’ll have to type it once and it’s self-documenting. On the command line I’ll usually use the shorter version of the flag. Another good approach is changing the .rubocop.yml file to have this be the default configuration, so you just need to invoke rubocop and it uses the settings there.
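If you go the configuration route, both of these display settings live under the top-level AllCops key in .rubocop.yml:

```yaml
# Make cop names and style-guide links print by default,
# so plain `rubocop` behaves like the flagged invocations.
AllCops:
  DisplayCopNames: true
  DisplayStyleGuide: true
```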

In the second example above, you can see that the cop’s category is Style and its name is StringLiterals. So if you want to disable this check, you can add the following to your .rubocop.yml:

Style/StringLiterals:
  Enabled: false
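Disabling isn’t the only option; many cops accept configuration. For example, if your codebase prefers double quotes, you could keep the cop but flip its preference, which this particular cop supports:

```yaml
# Keep the cop, but prefer double-quoted strings instead.
Style/StringLiterals:
  EnforcedStyle: double_quotes
```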

Understanding cops

The second setting I recommend is --display-style-guide. This setting is useful for seeing what RuboCop wants and if you want to actually follow that rule or not. It links to an explanation of the rule, typically in the Ruby style guide.

For example, you might get:

lib/foo.rb:39:39: C: Use def with parentheses when there are parameters. (https://github.com/bbatsov/ruby-style-guide#method-parens)

Sometimes the style guide is less than helpful: it just says this pattern is bad, with no explanation of why. So you need to use discretion. But linking to the docs is better than needing to guess what format RuboCop wants your code to be in.

Getting everything working

So you’ve got 47 errors. Where to begin?

You might want to go through and fix any common style violations in bulk, or disable cops that are overly picky. This might cut the errors down to something more manageable.

If your project is big, you might work on a subset of the project. For example, if you have many Ruby files in lib, and many others under app, do each of these subdirectories separately. Then when you lint the whole project, it will be clean.

Overwhelmed with errors and want to work on one change at a time? Try the --fail-fast flag to stop the RuboCop run after the first issue. Fix the issue and run again, and you should get a different error, hopefully later in the lint process. Another approach is to note how many violations the last run reported; the number should drop by one after each fix.

Linting on Rails

If you are on a Rails project, there is a flag for Rails-specific style checks. You can enable this with --rails. I recommend getting all of the Ruby checks working first, and then you can nail down the Rails-specific things. I learned about some interesting deprecations or recommendations, which were useful since I hadn’t worked on a Rails project in a little while.
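Note that in later RuboCop versions the Rails cops were extracted into the separate rubocop-rails gem; instead of a flag, you enable them in .rubocop.yml:

```yaml
# Newer RuboCop versions: the Rails cops come from the rubocop-rails gem.
require: rubocop-rails
```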

Handling complexity

RuboCop has some interesting defaults for code complexity. While we can all agree that less complex code is good, it is labor-intensive to retrofit an existing codebase to follow code complexity guidelines. You are probably going to get errors like:

lib/foo.rb:39:3: C: Method has too many lines. [47/20]
lib/foo.rb:39:3: C: Perceived complexity for `bar` is too high. [8/7]

At first I tried cleaning some of these up, but there are a few issues with this:

  1. I’m new on the project
  2. the project may not have solid test coverage
  3. the code works now, and if I modify it, it might not actually make it much better and might introduce bugs
  4. who is to say what the ideal complexity should be?
  5. it’s just going to take a lot of time that could be better spent at this point in the project lifecycle

However, lint errors cause our continuous integration to fail, so we need to address them somehow. Rather than disable the complexity cops, I think a balanced approach is to agree on a reasonable upper limit for a method’s length in our codebase and then fix any offenders. Since measures like function or module length tend to follow a power law, there should be only a few very complex areas.
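The agreed-upon ceilings can be encoded directly in .rubocop.yml; the numbers below are placeholders for whatever your team settles on:

```yaml
# Raise the ceilings to values the team agreed on, rather than
# disabling the complexity cops outright. The numbers are examples.
Metrics/MethodLength:
  Max: 50
Metrics/PerceivedComplexity:
  Max: 10
```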

For the rest, we set the limits high enough that they don’t fail, and if the module or function then goes over the limit, RuboCop will warn us and we will have increased feedback that our design is unsustainable. Generally code review should filter out egregious examples, but the fact that we added twenty lines to an already 200+ line file is often lost unless we use a tool that is more objective. Basically if the linter fails on complexity when the limits are high, then we know it is a useful failure to report and “stop the line” on.

An interesting approach would be to make the limits high and make a plan to scale down over time. Say our goal is a maximum of 100 lines in a particular file, but right now we have many that are over 200 lines long. We set the limit high at 250 to start, and then every week decrement it by ten until it gets to 100 lines. We could do this with a calendar reminder, a bot that rewrites the configuration file, or by encoding it in the .rubocop.yml file or an environment variable, depending on how RuboCop reads in its configuration.
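As a toy sketch of the ratchet idea (this helper is entirely hypothetical, not part of RuboCop), a script or bot could compute this week’s ceiling and write it into the configuration:

```ruby
# Hypothetical helper: compute this week's method-length ceiling,
# stepping down from a starting limit toward a target each week.
def complexity_limit(weeks_elapsed, start: 250, target: 100, step: 10)
  [start - step * weeks_elapsed, target].max
end

puts complexity_limit(0)   # week one: 250
puts complexity_limit(5)   # 200
puts complexity_limit(20)  # floor reached: 100
```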

(Again, just because your project passes linting doesn’t mean that it is well-written. It is just a tool to try to help code quality and save the team time.)

Don’t let your hard work go to waste

So you’ve gotten down to zero style suggestions. Congratulations!

But just because the cops are passing now doesn’t mean they will stay that way. Unless you add it to your continuous integration, the project will quickly accumulate style violations: partially because there may still be some wrinkles in your RuboCop configuration, and partially because we are humans and sometimes do things in different or suboptimal ways.

When the codebase is under CI, team members get quick feedback when they have angered RuboCop. Putting RuboCop on CI is pretty easy once you have fixed the issues.
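The exact wiring depends on your CI provider, but the shape is usually a one-line addition to the build script. A hypothetical Travis-style configuration:

```yaml
# Hypothetical CI config: fail the build on lint violations too.
language: ruby
script:
  - bundle exec rake    # existing test suite
  - bundle exec rubocop # lint; a nonzero exit fails the build
```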

Some handy aliases

At the beginning I commonly mistyped rubocop, so I made these aliases for ZSH to prevent me from having to retype it:

alias rubycop="rubocop"
alias rubocopy="rubocop"

Found this post helpful?

If you found this post helpful and want more like it, check out my guide to Starting on an Existing Rails Project, where I cover how to quickly come up to speed on a Rails project and make an impact.

GitHub Pull Request Workflow Labels

While at Haven, I thought we had a pretty good system for tracking the current status of a given pull request. In this post I’ll document some of the labels that we used, what they meant, and how they helped us collaborate.

Labels For A Leaner Workflow

One of the challenges of working with pull requests is that they can sometimes take a long time to get merged in. Some of this can be mitigated by keeping pull requests small, some by reviewing pull requests as soon as possible. However, a lot of time passes while waiting for someone to review, respond to comments, or even notice that there is an issue or question that needs to be addressed.

Why do we care about reducing cycle time of pull requests? It is work that is almost finished. If we can finish it, we reduce the overall work inventory in the system. It helps us get business value more quickly and learn what doesn’t work.

Labels can clarify where a pull request stands. I see the clear next step and whether I am responsible. If something needs review, I review it. If one of my pull requests can be merged or needs changes, I do these.

The Labels

I’ll cover the basic labels that I thought were most helpful, and how we thought of them. We didn’t start with all of these; we just built them up over time and revised as necessary.

“work in progress”

“This isn’t ready for review, but I want to make it public.” It might be just to get code off of my machine in case something bad happens to my laptop, or it might be work that I want help or have questions on. By pushing code up early, we can communicate with actions rather than words. Instead of “I’m working on X”, you can let commits do the talking.

“fast track requested”

“This is time-sensitive or important. If between tasks, strongly consider reviewing this first.” This label gets high priority work through the system more quickly.

“dependent on other open PR” and “has dependent PR”

In the course of developing a feature, I might need to make changes A, B, and C that build off of each other. I push up A, start working on B, and finish that as well. When I make the pull request for B, it is hard to see the changes I made, since they are usually mingled with the changes from A.

So these labels indicate that work is stacking up and it would be nice to review faster. They also lead the reviewer to look at them in order. Reviewing the first pull request makes the diff for the second one easier.

“don’t merge”

“This PR has some sort of critical issue or I still need to do something important on it, so please don’t merge this.” It is nice to be able to keep the pull request open but still have this agreement.

“blocked”

“We are waiting on something that we can’t resolve just as a development team.” Maybe we need feedback from marketing. Maybe an external vendor’s API is not documented well enough, so we can’t proceed until they get back to us with a clarification. It’s nice to know when something can’t move forward, because we can move on to other things and check back at a regular interval.

“has conflicts”

“This PR has some merge conflicts.” This fact is not always apparent by looking at the pull request overview page, so it just makes it a little clearer that there is some work the author needs to do to make it ready to merge.

“has test failures”

“The build appears to be broken on this branch.” It might be an intermittent failure, but more often we have looked at the build and found something related to the changes made.

“help wanted”

“I might need some help or assistance with this pull request.” Maybe I am going on vacation and will be gone for a week, so it would be nice for someone else to take up this cause. Maybe I’m not sure how to proceed with a challenge that I am facing.

“low priority”

This might indicate a development environment experiment or something else that is lower than average priority.

“question”

Someone has a question about this PR. :) Useful for indicating that help or feedback is needed to move the ball forward.

“ready for review”

“I think this is ready for another set of eyes.” If you might have pull requests that are in progress or being modified by the author, it’s helpful to know that something is officially ready for review.

“merge at will”

At least one person thinks this code is mergeable; the author can hit the merge button whenever they are ready.

I like leaving merging in the hands of the original author because it’s kind of fun pressing the merge button, and also because there have been at least a few times where I think late at night: “wait, what about <this crazy edge case that I just thought of>” and want to fix it before it is merged.

“recycling changes”

“I’m working on addressing the issues you raised and responding to the questions you asked.” This lets the reviewer know that they have been heard; the author replies to their comments, pushes up again, and marks the PR as “ready for review” when the changes are implemented.

“reviewed”

I have looked at this as much as I will probably look at it for now, and it has a few comments to recycle.

“tests are needed” and “tests would be nice”

The first is a strong opinion, the second means that it might be nice to do, but not required. I find this helpful since it gets people thinking about writing tests.

“small :)” and “micro! :)”

“This thing is small and you will feel good about looking at it!”

It’s nice to indicate that something will be a quick win and easy to review. The mindset of tackling a few hundred line diff is much different than that of approaching a ten line diff. I generally used “micro” to indicate a line or two of changes, and “small” to be around ten lines of changes.

I still like opening pull requests for even small changes because it helps ensure fewer mistakes and keeps everyone involved with what is going on. Which leads us to…

“cowboy merged”

“It was critical to get this code into master to fix an issue or I needed it to do additional work.” As just mentioned, the default is to pull request all changes. This label means this PR was merged directly to master without review, and you can peruse it at your convenience.

Comments are still useful!

Just setting a label and leaving is often not the best team interaction pattern. Making a comment about why you are changing the status or state of a pull request is often helpful. When marking as “merge at will”, giving a thumbs up and/or words of encouragement are still good form. :)

Comments are also easier to see than label changes. Usually GitHub comments will come across on chat applications like Slack or email, but label changes are less likely to do so. So this also helps communicate more effectively.

How I Did 5580 Pushups In 23 Weeks

My wife and I lived in San Francisco for a year, from the summer of 2013 to the summer of 2014. One of the very best things about living there was the fantastic Ultimate scene. Some of our friends had a weekly track workout on Mondays that we participated in, and I felt that it helped me be a better competitor on the field by being in better shape. Some of them were participating in a fitness challenge, and it sounded like fun.

The next year, starting around October, the organizers announced that they were expanding the fitness challenge to include anyone who wanted to join. To enter, you put in $10, and most of the proceeds went to the Bay Area Disc Association. About a hundred people signed up, and I was one of them.

The Challenge

The challenge was fairly easy to grasp: every day and every week there are certain fitness requirements to be done, with one off day per week for the daily requirements. If you miss two daily exercises (not done by midnight) or miss the weekly requirement by midnight on Sunday, you are out. All participants get a weekly email on Monday discussing the week’s requirements and explaining any new exercises. Whether you were in or out was decided on the honor code (Spirit of the Game, in Ultimate terms). The exercises were all body weight, so no special equipment was needed.

As they say on the website, one week’s challenge might be:

Below is an example of a hypothetical workout routine for a given week with daily exercise requirements of ten push ups and ten squats and a weekly exercise requirement of one hour of non-Ultimate cardio.

  • Monday: ten push ups, ten squats
  • Tuesday: ten push ups, ten squats
  • Wednesday: ten push ups, ten squats
  • Thursday: ten push ups, ten squats, one hour non-Ultimate cardio
  • Friday: ten push ups, ten squats
  • Saturday: OFF DAY
  • Sunday: ten push ups, ten squats

Ramping Up

The challenge started out very simple, and could be done in about two minutes per day. There was no weekly requirement to start. Really the challenge at this point was just remembering to do the exercises. A few people were knocked out because they forgot to do the exercises.

I think this approach was very similar to the Tiny Habits that I talked about in my detailed habit forming post. You start a very small success spiral (almost laughably easy) and slowly build up the challenge over time. Most of the battle is starting small enough and being consistent enough. At some point, it becomes easier to do the challenge than to not do it. That is when you know you have a solid habit.

Another positive quality of starting easy was that everyone started somewhere manageable. If everyone had started with the week-five or week-ten challenge, some people would have dropped out just due to the physical demands. By starting slow, you give everyone a chance to get further and build better fitness.

I liked that you could pick one day to take off from the daily exercises. It made the challenge a lot more bearable, since there was often one day a week when it was difficult to find time for the exercises. I learned that one key for challenges is to give yourself a decent grace period. Instead of no time on social media, what about fifteen minutes or an hour a week?

Planning

The keys to doing well in the fitness challenge in my opinion were:

  1. having a positive mental attitude
  2. not getting injured
  3. doing the exercises without fail

At some point, I realized that to succeed I would just have to do the exercises every day. If I skipped a day, I was committing myself to absolutely needing to do the rest of the week to stay in the challenge, which was risky if I got stuck in bad weather, got slightly injured, or was too tired or busy. So one goal was just to do it and not think about it. The more I could automate doing this, and not have to devote willpower to it, the better off I would be.

My wife was doing the challenge as well, but needed to drop out because the pushups were giving her shoulder pain. She ended up needing to do physical therapy and take time off. It was unfortunate, but a good reminder to try to stay ahead of injuries and try to do the exercises with good form and not push myself past the limit. When you need to do something nearly every day, injuries will quickly compound.

Doing the exercises was straightforward when at home, but more challenging when on the road, especially when flying. It was fun exercising on the beach, but airports were pretty hard. I also was known to just start exercising randomly in social situations to make sure I got my exercises in.

At some point, realizing you have to just do it was the key to success. Tired? Doesn’t matter. Sore from yesterday? Doesn’t matter. Don’t want to do it? Doesn’t matter. JFDI and by the time you get rolling, you’re already basically done and feel pretty good.

Setting up an exercise routine

I found that the best way to remember the daily exercises (since they eventually numbered in the dozens) was to make a routine. Each week, I would plan out how I was going to do the exercises. Generally, I would slot in an exercise where it made sense relative to the other exercises.

I found that by keeping a relatively consistent routine of:

  1. arm/jumping exercises
  2. pushup related exercises
  3. core/ground work
  4. legs
  5. stretching

that I could progress through the exercises with greater ease and not accidentally forget an exercise. Forgetting an exercise would be bad because I would then miss a day while still doing most of the work.

It also helped me get in a flow where I knew what the next exercise generally would be. I wouldn’t need to go from a floor exercise to a jumping exercise, and so forth. I generally did the hardest exercises first, and then progressed to easier ones. This helped combat mental and physical fatigue.

Also, there were a number of timed core exercises, so I found it easier to do these in a circuit of thirty seconds per exercise. I might have to do two or four circuits to get them all done, but it was easier than doing two minutes straight of forward planks, two minutes straight of right arm plank, etc.

I enjoyed learning the different exercises and can do most of them without much thinking now. I think there is value in doing the same good exercises to continue building strength. I had a probably-unrelated-to-the-fitness-challenge back injury a few months ago, so I am slowly working back to doing some of the exercises.

Indiana winter

One additional challenge specific to our situation was living in Indiana in the winter. While most participants were based somewhere in California and had good to decent weather, we needed to do things in the cold and snowy winter or otherwise plan for being inside.

Some exercises we could do at our weekly indoor Ultimate match (like stairs). An hour of weekly cardio was a bit harder, so we joined the local YMCA and did swimming. Sometimes if the weather wasn’t too bad, we could do interval running outside at a local track. Overall, it was probably more challenging, but it was pretty good to get outside.

Nearing the end and dropping out

The challenge ended up being very time consuming and physically strenuous. At the end, I was spending an hour a day on the daily exercises, and had a few hours combined of non-Ultimate cardio, yoga, stair workouts, and unfortunately, prancercise. It ended up being probably around ten hours total per week, in addition to any Ultimate training I was doing or games I was playing.

Another negative was the routine of the exercises. I was starting to worry that I was overdeveloping certain muscles and not focusing enough on other muscles. I did swimming for my hour of cardio, but maybe this wasn’t enough to balance things out. Sometimes I would reverse my daily routine to try to get a slightly different load on the muscles and to alleviate boredom. I also resorted to watching documentaries on Netflix to have something in the background to keep myself entertained.

With the club Ultimate season ramping up, I didn’t have enough time or energy to keep things up. It is really hard to travel to a tournament or practice and then do another hour of exercise after you have already cooled down.

I ended up finishing fourth in the competition. It was a relief to be done, but I would continually get nervous that I wasn’t doing the exercises. :) I likened the feeling to wearing a watch for a long time, then not wearing it and missing it. The remaining competitors didn’t finish until over a month after I stopped, though, so I was glad to be done when I was.

The Aftermath

After it was all over, I was pretty physically tired, but in some of the best shape of my life. I feel that I was mentally on top of my game as well, due to the physical shape I was in and the fact that I was usually tired enough to sleep very well.

I loved the challenge and joy of being consistent. This challenge showed me that I could do something basically every day for almost six months.

I logged how much of each exercise I did, so you can see a decent summary in the workout compilation I made. I didn’t list what the exact exercises were, but could put them in a new post or you can probably find them by searching online. I calculated the 5580 number by tallying the number of pushups and burpees (which are basically dynamic pushups with some jumping thrown in there.)

Extensions

I’d like to compile a list of many body weight exercises and make a program to randomly generate workouts that are of the challenge level that I am at. I think this would be a good way of keeping things interesting. Also, the body loves homeostasis, so some way of continually randomly shocking it would be useful.
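As a toy sketch of that generator idea (the exercise list, base reps, and scaling are all made up for illustration):

```ruby
# Toy sketch: randomly assemble a body-weight workout scaled to a level.
# The exercise list and base rep counts are invented for illustration.
BASE_REPS = {
  "push ups" => 10, "squats" => 15, "burpees" => 5,
  "lunges"   => 12, "plank (seconds)" => 30
}.freeze

def generate_workout(level, picks: 3, rng: Random.new)
  BASE_REPS.keys.sample(picks, random: rng).map do |name|
    "#{BASE_REPS[name] * level} #{name}"
  end
end

puts generate_workout(2)
```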

Give me feedback

I ended up doing a couple more challenges inspired by this that I may write up: the social media challenge and the writing challenge. Would you want to read about them?

What did you like about this post, and is there anything I can do to help you start a challenge of your own like this? If I started a local fitness challenge, would you be interested?

If Something Is Hard, Do It More Often

How do you deal with work that is challenging, time-consuming, or risky?

If something is hard, the typical approach to reducing the pain is to do the activity at hand less often. The logic is, this thing should be done as little as possible so it consumes fewer resources or exposes us to less risk. I see this option being used often in software development.

However, I think that a generally more effective – but counter-intuitive – option is to do difficult things more often.

DevOps

One of my favorite practical examples of this is modern development and deployment of software products. In some organizations, deploys happen monthly, quarterly, or annually. The pain of manually testing for regressions and getting all of the pieces aligned and documentation updated just makes it too difficult to do often. We wouldn’t want to put out a product that is less than perfect, so let’s make sure that it works correctly before releasing. And who can get it to build and deploy correctly? We need three engineers to take three hours to deploy the changes and make sure that everything works as expected.

The costs, however, are high. Features are shipped in batches instead of when they are finished. Integration is done at the end, and tested manually. Customers wait months for bug fixes. When there is a major bug in the final release, instead of being constrained to a small set of changes to fix it, the technical team might need to stay up late looking through weeks’ worth of changes.

The problem might be solved another way. Instead of considering how long we can defer the pain, what if we try to take the pain early and as often as possible? What if we deploy all code that has been peer reviewed and accepted at the end of every week? What if we did the same every day? What if we did it after every merged pull request? What would need to change to make deploying multiple times per day possible?

With the weekly deployment example, the team would likely quickly document the deployment procedure to ensure that it could be done reliably each week by anyone on the team. They might invest resources to start automating small parts of the deployment process to reduce some of the need for documentation and to make the process less brittle. Maybe with several hours of effort, the team gets the deployment time down to one engineer taking two hours.

Then, they move on to the next challenge, deploying what is believed to be the best code available every day. I have certainly heard opposition to this strategy. Well, we need time to test out the code. We need to make sure it is ready for prime time. But I would ask: when are you more sure about something being right than when you just worked on it? When could you fix any issues quickly, now, or a week or month from now? If you delay today, how will you be more sure tomorrow?

When the team incrementally writes automated tests for the important software it writes, testing gets faster. Instead of manually testing the software weekly or monthly, we can run all of the tests every time we make a change. Then we immediately know whether we’ve changed something for the worse and can revert the change. The more automatic our tests are, the higher quality our product and the more confidently we can deploy, which gets our changes to customers faster.

Reviewing Code

Taking the pain early on reviewing code is another example. It is possible to batch up pull request review to try to make reviewing them more “efficient”, but that results in pull requests growing ever larger. Instead, by continuously quickly reviewing code, we actually save review time and encourage higher quality work.

Someone at a recent Indy.rb meetup mentioned that pairing can replace code review. I think this illustrates that tightening the feedback loop can actually replace or remove a part of the process that we previously considered time-consuming or difficult.

Giving Feedback

Giving performance feedback and guidance can be a struggle. One approach is to do it less often. Maybe move the monthly one-on-ones to quarterly, or our semi-annual evaluation to yearly.

But giving feedback less often just exacerbates the problem. Now we move from a less formal and more open process to one where people feel guarded. When we couple feedback with discussion of salary changes, this also results in fearful feelings.

Instead, by giving feedback more quickly, we encourage people to bring up problems when they encounter them and work through them together. It results in feeling more connection with each other. In the increasingly virtual world we live in, getting face-to-face time is an important investment.

One example that I can give is from our one-on-one process. We had a great policy of weekly tactical one-on-ones at Haven to talk about what was on our minds and how we can keep improving as individuals and as a company. This periodic check-in was really helpful to stay engaged.

We had an intern who was part-time because he was still in school. Initially I thought, “OK, half the time, so he and his mentor should do these every two weeks, right?” But then I thought of this principle: if giving feedback and making sure people are happy is important to us, why not continue doing it weekly, so we can make sure that everyone is happy and learning as quickly as possible? Twice as much “overhead”, but we get a much more engaging experience, give the intern the most learning, and give the company a good way to clarify direction. It seemed to be a good arrangement, and lined up with the rest of the company’s cadence as well.

Aside: daily or weekly for meetings works much better than every-other-day or bi-weekly, purely for psychological reasons. Further aside: the word “efficient” typically misleads people into suboptimization. It is extremely rare that having a quality conversation with someone about the work they are doing is “overhead” or non-value adding.

One idea we’ve had and would like to explore in the future is continuous feedback mechanisms. How can we automatically ask small questions on an ongoing basis to pull feedback, rather than pushing it in large batches at feedback-meeting time? Could this surface leading indicators of issues, instead of our learning about them after the fact?

Other Categories

Is hiring a challenge? What if you were always in hiring mode? What would you need to change about your processes, perspectives, or prospects to make continuous hiring feasible? Would the result be a competitive advantage for your organization?

Exercise hard? What if you did it every day instead of every week? What about continuous exercise, where your computer randomly shouts “do 10 push-ups!” a few times per week? :)

Hate dancing? What if you took classes and got pretty good at one style of dancing? There is certainly something to be said for doing a lot to get past the suck threshold, after which you actually enjoy it more. Any skill is like this: by doing it more, you get better at it and enjoy it more, which makes you want to do it more, so you continue to get better at it. Taking ten classes in a month is more effective than taking ten classes in a year.

Take a minute right now to think about the things in work and life that you most dread, enjoy least, or find very time-consuming, and consider how doing some of them more often might actually make your life better.

Hedging

“More” or “less” is an oversimplification of the options and the trade-offs (did you spot the dichotomy?). Sometimes it does make sense to do things less frequently. If you are in a heavily regulated environment, or releasing an app in an ecosystem with long review cycles, there may not be a way to do something more often. But the question remains the same: how can we possibly do it more often, and would that reduce the overall pain?

Hat tip to Kyle Shipley for bringing up this concept in past conversations. And apparently a bunch of other people have internalized this concept as well, based on the comments on the Twitter post of this article.

Must-Have Vim JavaScript Setup

In this post I’ll cover the plugins and configuration I have found to be the best for editing and navigating JavaScript projects with Vim.

Background

I have around a year’s worth of experience editing JavaScript with Vim at this point. I tried out various plugins, uninstalled some, tweaked others, and found a decent setup. There are probably some plugins out there that I am missing, so please let me know if I should investigate one. My primary stack has been Mongo, Angular, and Express. I’ll update this post if I find more good Vim JavaScript tooling.

The Best Vim JavaScript Plugins

I use the following plugins on my current project, listed roughly by importance, how much I use them, and how surprisingly useful they were.

vim-javascript

The biggest advantage of this plugin is how much better its default indentation is. I’m sure it has other benefits, but indentation was unbearable before using it and pretty good afterward.

vim-node

Although it has other utilities, I primarily use vim-node for gf on a relative require to go to that file. Following Vim conventions, gF opens that file in a split. This really aids navigation, especially when paired with Ctags (discussed below). It might not sound all that helpful, but when you have:

var UrlFinder = require('../../../server/util/other/urlFinder');

I can just type gF to view the file in a new split window and figure out what functions I might want to call on it. Much faster than other methods of getting there. Without the plugin, Vim would generally choke on the relative file path, since I typically have my current working directory set to the project root.

jsctags setup

I had not used Ctags much before, but have found them very useful on the project that I am working on. Ctags works by creating an index of tags, which are things that you want to be able to jump to. Vim supports Ctags-style tags.

The value is being able to jump to the definition of a function when your cursor is over it: press Ctrl+] and you jump right to where the function is defined. I can easily get back with Ctrl+T (or perhaps Ctrl+O, since that moves back through the jump list).

This gets us closer to many IDEs, and avoids running the function name through grep/ack/ag and/or opening the file manually.
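
For completeness, Vim locates the index through its 'tags' option. The default value usually covers a tags file in the project root already, but it can be made explicit in ~/.vimrc. A minimal sketch, assuming the tags file sits at the root of your working directory:

" Look for a tags file next to the current file first, then in
" the current working directory (the project root in my setup)
set tags=./tags,tags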

The core issue is that the current Exuberant Ctags JavaScript support is not very good. Often it is not able to find a function declaration. There is an old Mozilla project, doctorjs, that does better, but it has one basic yet large installation issue that cannot be solved by forking or patching (to my knowledge). So I made a custom ZSH install function:

function install_jsctags {
  npm install jsctags
  # patch the installed package; see
  # https://github.com/mozilla/doctorjs/issues/52
  # gsed is GNU sed (e.g. from Homebrew); BSD sed's -i flag differs
  gsed -i '51i tags: [],' ./node_modules/jsctags/jsctags/ctags/index.js
}

This is helpful because I sometimes switch Node environments with nvm, or run an npm prune, which removes jsctags since it is not in the package.json file.

Then I can manually run something like:

$ jsctags -o tags server test admin

This takes a few seconds and then spits out Ctags-compatible output of the function names for the server, test, and admin directories into a tags file. Vim automatically reads the tags file; I don’t need to restart Vim, it just works after regenerating the file. So it takes a little setup, but I think the time saved is worth it. It would be nice to automate this so that the tags don’t get stale, but I usually only run it when I jumped to the wrong place or Vim couldn’t find the tag, which happens maybe once a week. So I am happy enough with how it works.

A Vim mapping for this would be something like:

nnoremap <leader>jt :! jsctags -o tags server test admin<CR>
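
To automate regeneration entirely, one option is an autocommand sketch like the following (assuming the same server/test/admin directories). Note it shells out synchronously, so it adds a pause to every :w on larger projects, which is part of why I stick with running it manually:

" Regenerate tags whenever a JavaScript file is written; the
" redraw! clears the screen flash left by the shell command
autocmd BufWritePost *.js silent! !jsctags -o tags server test admin
autocmd BufWritePost *.js redraw!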

html5-syntax.vim, html5.vim

html5-syntax.vim seems to handle HTML5 syntax highlighting, and html5.vim seems to handle HTML5 autocomplete. I installed both, and I am happy with the indentation and syntax highlighting of HTML5 code, which is about all I can ask for.

vim-less

We use Less as a higher-level stylesheet language than CSS. vim-less is useful for getting good indentation when working with these files.

Also, you’ll probably want the following in your ~/.vimrc:

autocmd BufNewFile,BufRead *.less set filetype=less
autocmd FileType less set omnifunc=csscomplete#CompleteCSS

The first line sets the filetype of .less files to less. The second says to use the CSS autocomplete function for omni-completion. So if you are typing:

display:

and momentarily draw a blank on what values are possible, you can type Ctrl+X, Ctrl+O to see the list of standard options:

CSS Autocomplete

javascript-libraries-syntax.vim

This plugin has some syntax highlighting for common JavaScript libraries like Underscore (Lo-Dash), Angular, React, etc. This might be helpful for spotting incorrect function names:

Syntax Highlighting Shows An Error

There is a little configuration necessary; check out the plugin’s page for more details. Basically you’ll need something like this to get the full benefit, either in ~/.vimrc or via some kind of local vimrc setup:

let g:used_javascript_libs = 'underscore,angularjs,jasmine,chai'

Musing: it would be nice if there were a way to automatically audit what manual configuration a plugin needs in order to get the most out of it. I only found out that I hadn’t set this up while writing this post! :)

tern_for_vim

This one was a bit hard to get properly set up (doubly so if you use nvm, since tern_for_vim wants a globally/system-installed node executable…). And when I finally did get it working, it didn’t blow me away. However…

There are at least two useful Vim extensions that it provides:

:TernRename renames the variable under the cursor, but only within its scope, and then shows you a list of the changed references.

:TernRefs shows you references to the variable under the cursor (similar to the above, but without the rename).

So I’d recommend this plugin just for the rudimentary refactoring support it provides. If you’re willing to go the extra mile, it might also be able to provide some IntelliSense-style autocomplete functionality.

Other Useful Non-JavaScript Specific Plugins

syntastic

Real-time syntax checking. Fantastic plugin, well documented and maintained, works well with no configuration. Works even better with some configuration. For JavaScript / HTML projects:

" use jshint
let g:syntastic_javascript_checkers = ['jshint']

" show any linting errors immediately
let g:syntastic_check_on_open = 1

I also found a handy gist to have Syntastic use my project’s .jshintrc.
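
Two more options from Syntastic’s documentation that might be worth enabling (check :help syntastic-global-options in your installed version):

" Always fill the location list with detected errors so that
" :lnext / :lprevious can step through them
let g:syntastic_always_populate_loc_list = 1

" Automatically open the error window when errors are detected,
" and close it when they are resolved
let g:syntastic_auto_loc_list = 1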

There is a bit of munging I do to decrease spurious errors. Specifically, HTML Tidy is the worst when dealing with things it doesn’t understand (HTML5 custom elements / attributes, for example):

" Set up the arrays to ignore for later
if !exists('g:syntastic_html_tidy_ignore_errors')
    let g:syntastic_html_tidy_ignore_errors = []
endif

if !exists('g:syntastic_html_tidy_blocklevel_tags')
    let g:syntastic_html_tidy_blocklevel_tags = []
endif

" Try to use HTML5 Tidy for better checking?
let g:syntastic_html_tidy_exec = '/usr/local/bin/tidy5'
" AP: honestly can't remember if this helps or not
" installed with homebrew locally

" Ignore ionic tags in HTML syntax checking
" See http://stackoverflow.com/questions/30366621
" ignore errors about Ionic tags
let g:syntastic_html_tidy_ignore_errors += [
      \ "<ion-",
      \ "discarding unexpected </ion-"]

" Angular's attributes confuse HTML Tidy
let g:syntastic_html_tidy_ignore_errors += [
      \ " proprietary attribute \"ng-"]

" Angular UI-Router attributes confuse HTML Tidy
let g:syntastic_html_tidy_ignore_errors += [
      \ " proprietary attribute \"ui-sref"]

" Angular in particular often makes 'empty' blocks, so ignore
" this error. We might improve how we do this though.
" See also https://github.com/scrooloose/syntastic/wiki/HTML:---tidy
" specifically g:syntastic_html_tidy_empty_tags
let g:syntastic_html_tidy_ignore_errors += ["trimming empty "]

" Angular ignores
let g:syntastic_html_tidy_blocklevel_tags += [
      \ 'ng-include',
      \ 'ng-form'
      \ ]

Sometimes I get the following message on startup or when loading JavaScript files: “syntastic: error: checker javascript/jshint: can’t parse version string (abnormal termination?)”. This error indicates that Syntastic can’t find the jshint executable. For me it means I need to exit Vim and run nvm use to load the right Node version. When I start Vim again, Syntastic can find jshint.

OK, enough about Syntastic for now.

vim-projectionist

If you’ve ever used Rails.vim, and loved the :A functionality to jump to the alternate file (typically the test file), but want to do that with arbitrary projects, then this is the plugin for you.

This has been extremely useful on my current project. We had a Rails-ish directory structure in an Express server, with unit tests in reasonable places. So something like the following in a projections.json file gives me quick access to the test file of the project file that I’m looking at (or vice versa):

{
    ...
    "server/models/*.js": {
        "type": "model",
        "alternate": "test/mocha/models/{}.spec.js"
    },
    "server/controllers/*.js": {
        "type": "controller",
        "alternate": "test/mocha/controllers/{}.spec.js"
    },
    ...
    "test/mocha/models/*.spec.js": {
        "type": "modeltest",
        "alternate": "server/models/{}.js"
    },
    "test/mocha/controllers/*.spec.js": {
        "type": "controllertest",
        "alternate": "server/controllers/{}.js"
    }
    ...
}

You can also open a model file with :Emodel user, and it does the right thing. All of the navigation commands also let you open the associated file in a new split or tab. Highly recommended.

UltiSnips

If you’re running a straight JavaScript project, there can be a lot of repetitive typing. UltiSnips is the most versatile text-expander plugin for Vim. You can make new snippets for any file type. Here is my JavaScript UltiSnips configuration. A snippet of useful snippets:

snippet use
'use strict';
endsnippet

Type use<tab> and save a few keystrokes.

snippet clv
console.log('$1: ', ${1});
endsnippet

Handy for quick debugging: it logs any expression along with a label naming it, so you don’t need to type it twice.

snippet reqlo
_ = require('lodash')
endsnippet

snippet reqmonbb
BBPromise = require('bluebird'),
mongoose = BBPromise.promisifyAll(require('mongoose'))
endsnippet

Since we use these in 80% of our files, this saves me quite a bit of time.

These are just a few examples; there are other snippets in there that I use constantly. Basically, if you have a pattern in your codebase, encode it in snippets, and then you always do the right thing in a consistent way with minimal typing and thinking.

Wrapping up

Prerequisites

Before you do anything else, get a Vim plugin manager. I use Pathogen. There are many other options.

More info

Here is my full configuration.

If you liked this post, check out my book on Writing With Vim, where I cover everything you need to know about writing prose in Vim.

Thanks for reading, hope this was helpful!