Using a Redlock Mutex to Avoid Duplicate Requests

I somewhat recently ran into an issue where our system was incorrectly creating duplicate records. Here’s a writeup of how we found and fixed it.

Creating duplicate records

After reading through the request logs, I saw that we were receiving intermittent duplicate requests from a third-party vendor (an applicant tracking system) for certain webhook events. We already had a check to see if records existed in the database before creating them, but this check didn’t seem to prevent the problem. Looking more closely, I saw that the duplicate requests were arriving in very short succession (less than 100 milliseconds apart) and were potentially processed by different processes, so the simple check could not reliably catch them.

In effect, we were seeing the following race condition:

t1: receive request 1 (record id: 123)
t2: receive request 2 (record id: 123)
t3: process request 1: does record 123 already exist? no
t4: process request 2: does record 123 already exist? no  <-- race condition
t5: process request 1: create record 123
t6: process request 2: create record 123  <-- duplicate record created
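In code, the vulnerable check-then-create pattern looks roughly like this (an in-memory sketch with illustrative names; our real check was against the database):

```ruby
require 'set'

# In-memory stand-in for the database table (illustrative only).
RECORDS = Set.new

def handle_request(record_id)
  # t3/t4: the existence check
  return :duplicate_skipped if RECORDS.include?(record_id)
  # t5/t6: the create -- another process can slip in between
  # the check above and this line
  RECORDS.add(record_id)
  :created
end
```

Within a single process this works fine, but when two processes run the check at nearly the same time, both see “no record yet” and both go on to create one.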

We could not determine whether this webhook behavior was due to a customer misconfiguration or some bug in the applicant tracking system’s webhook notifications. But it was impacting our customers so we needed to find a way to handle it.

Fixing the problem

I decided that a mutex would be a good way to handle this, since it would let us lock reliably across processes.

I found Redlock, a distributed lock algorithm that uses Redis as a data store for mutex locks. Since we’re already using Redis and Ruby in our system, I decided to use the redlock-rb library.

The basic algorithm would be:

require 'redlock'

# e.g. Redlock::Client.new(['redis://localhost:6379'])
redlock_client = Redlock::Client.new([ENV['REDIS_URL']])

redlock_client.lock('unique_id_from_request', 5_000) do |lock|
  if lock
    # we successfully acquired the lock, so
    # process the request...
  else
    # we couldn't acquire the lock, so
    # assume that this is a duplicate request and drop it
  end
end

When we receive a request, we try to acquire a lock using a unique identifier from the request. If the lock is already held, we have seen the same request recently, so we discard the current one. Otherwise, we process the request, and the lock is released once the block finishes.
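For intuition, each Redis node in Redlock grants a lock with something like SET key token NX PX ttl: set the key only if it is absent, with an expiry. Here is a toy, single-node, in-memory version of that acquire/release logic (a stand-in for illustration, not the real library):

```ruby
# Toy single-node lock, mimicking Redis SET NX PX semantics in memory.
class ToyLock
  def initialize
    @store = {} # key => [owner_token, expires_at]
  end

  # Returns a token if acquired, nil if someone else holds a live lock.
  def acquire(key, ttl_ms)
    now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    owner, expires_at = @store[key]
    return nil if owner && expires_at > now
    token = rand(2**64).to_s
    @store[key] = [token, now + ttl_ms / 1000.0]
    token
  end

  # Only the current owner may release the lock.
  def release(key, token)
    owner, = @store[key]
    @store.delete(key) if owner == token
  end
end
```

The random token matters for release: a process should only delete a lock it still owns, rather than clobbering a lock that expired and was re-acquired by someone else.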

I made this change and deployed it, and it seemed to successfully reduce the number of duplicate requests!

We ended up seeing this issue with other applicant tracking systems, so we implemented this in their webhook handlers as well.

Side quest

I will often look through the issues and pull requests of a new project before adopting it to see how active the project is and whether there are any known issues with it. As I read through the Redlock issues list, I found an issue where the lock would potentially not be acquired if there was a problem with the Redis connection.

Thinking about it, this would be a problem for us because it could lead to requests being dropped if our Redis connection had issues: we would think that another process already held the lock, when in fact the lock attempt had failed for an entirely different reason.

This failure mode seemed rare and recoverable enough that continuing to use the mutex was worth the risk, but I wanted to see if I could fix the issue.

I responded to the issue with a specific case that illustrated the problem and asked if the maintainers would be open to a pull request to fix it. I got some positive feedback, then dove into the code and submitted a pull request.

The pull request took a little while to merge, due to the review process and probably because it changed the behavior of the library: instead of returning false when a connection error occurred, we would now raise the connection exception. It’s possible that someone was relying on the previous behavior, but it seemed more correct to raise an error for an exceptional case than to return the same value as a lock not being acquired. The change was approved, merged, and released in version 1.3.1 of the library. I then updated our code to use this new version (we had been pointing to my fork of the changes in the meantime, both because the fix seemed correct and to test it out more.)

Conclusion

Overall, I thought this was a good approach. I first made sure to understand the underlying cause of the problem, and then I found a solution that would work for us and fixed a small issue that could potentially cause data loss. The maintainers of the library were very accommodating and communicative throughout the process.

Using iTerm Automatic Profile Switching to Make Fewer Mistakes In Production

Today I will tell you some stories of how I made mistakes in our production environment, and how I am trying to help prevent future mistakes using iTerm.

Mistakes were made

At work we are mid-journey to having more automation around our deployments, provisioning, backups, monitoring, and so forth. But at the moment, some things are typically done manually. Within recent memory, I was SSHed into our QA (staging) box and for some reason wanted to rename the database. A few minutes later, someone came down and said “production’s down!”1 (Production is the end-user visible environment, the one thing that we don’t want to be down.) I was thinking, “hmm, we haven’t changed anything recentl… wait, was I actually on the QA box?” Sure enough, what I had renamed was the production database in the production environment! A minute later service was restored, but this was our longest daytime downtime of the quarter (a handful of minutes.)

As part of our postmortem on this issue, we identified that switching my terminal profile whenever I thought I would be in a production-like environment would be useful. For example, if I am going to be SSHing into a QA box, I might create a new profile that has a different background color. This would help disambiguate the two environments.

The other day after hours, I was switching back and forth between QA and production SSH environments to try to debug a problem on the QA side. I again thought that I had SSHed into the QA environment but I didn’t read my SSH command well enough when cycling between those environments (using Ctrl+r in the terminal will give you previous commands2). I turned off the production load balancer. Fortunately it was after hours, so I could easily revert it, but I needed a better solution.

Enough is enough

There are two problems with the profile switching approach: I need to remember to switch profiles when I am SSHing, and I need to be SSHing into the right environment for the given profile. These are error-prone enough that I don’t think the manual profile switching approach is workable long-term. Again, in a perfect world, we would have everything already automated and some way of making all of our changes through well-tested or peer-reviewed means. But there has to be a stopgap solution.

I had read a bit about automatic profile switching in iTerm after the database rename debacle. This iTerm feature provides the ability to know when we have changed servers and change the profile accordingly. At first, it seems to require shell integration, which means that you curl a script to each of your boxes to be able to use it. This seemed both potentially insecure and cumbersome as we add more servers to our environment, so I didn’t want to use it.

Triggers and automatic profile switching

Digging a bit deeper, it seems that you can also use triggers and automatic profile switching to mostly accomplish the same thing. There are two components we can work with to make this happen.

The first is a trigger. Triggers look at your terminal output and run actions when the output matches a given regular expression. There are a variety of interesting actions you can take based on a trigger, but we’ll use them to set the internal iTerm variables for username and hostname. iTerm keeps track of these values, and you can use them to switch your profile automatically when they change.

When the iTerm hostname or username changes you can use automatic profile switching for each profile to say when that profile should be used. If we change to a production host, then we should activate the production iTerm profile. Of course, when we exit out of that, we’d like to return to the default profile.

An example setup

Here’s a high level view of what we want to do. When we recognize something that means we are on:

  • QA box, we switch to the QA profile (dark blue background)
  • production box, we switch to the production profile (dark red background)
  • localhost, we switch the default profile (black background)

I set up the following profiles, with rationale:

Default

Triggers
  1. Set the iTerm username and host for either QA or production when we see it in an SSH prompt. The regex below would match a prompt like username@host-name:directory_name $. For that prompt, this trigger would set the iTerm username to username and the host to host-name (the \1 and \2 in the parameters pull back the first and second match groups of the regex.) Typically you’d have qa-web or prod-web or something like that as your hostname. The QA and production profiles are keyed off these hostnames (see below.)

    • Regular Expression: ^(\w+)@([\w\-]+):.*\$
    • Action: “Report User & Host”
    • Parameters: \1@\2
    • Instant: yes (explanation in its own section below)
  2. Set the iTerm host to a QA-host when we recognize that it is a QA Rails prompt:

    • Regular Expression: ^Loading qa environment \(Rails [\d\.]+\)$
    • Action: “Report User & Host”
    • Parameters: @some-qa-host
    • Instant: not needed
  3. Set the iTerm host to a production host when we recognize that it is a production Rails prompt. Similar to the previous trigger, but substitute production for instances of QA.
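To sanity-check the prompt regular expression before pasting it into iTerm, you can try it against a sample prompt in Ruby (the prompt string here is made up):

```ruby
# The trigger regex from above, tried against a sample SSH prompt.
prompt_regex = /^(\w+)@([\w\-]+):.*\$/
match = prompt_regex.match('deploy@qa-web:~/app $')
puts match[1]  # => "deploy" (reported as \1, the username)
puts match[2]  # => "qa-web" (reported as \2, the hostname)
```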

Automatic Profile Switching

Automatically switch to this profile when the hostname changes to our local host (hydration, in the case of my computer.3)

QA / Production

These are basically identical to each other, except for the automatic profile switching hostname. I copied these from the default profile and then changed the background color and name. The specific colors you use are not important as long as you can clearly differentiate the colors between environments and the production color strikes some sort of fear into you when you see it.

Trigger

When you see my special local prompt character (♦), set the iTerm host to the local machine name (hydration), since we want to switch back to the default profile at that point.

  • Regular Expression:
  • Action: “Report User & Host”
  • Parameters: @hydration
  • Instant: yes (explanation in its own section below)

Note: Having some sort of special local prompt is important to being able to use this approach. My guess is that you have customized your local prompt in some way so that you can either see the hostname in it or have some characters or patterns that are not typically encountered.

Automatic Profile Switching

Automatically switch to this profile when the iTerm hostname changes to the environment that we want. We would use qa-web for the QA profile, or prod-web for the production profile.

Testing

I usually work slowly, getting one environment working first and then getting the switch back to my default environment working after that. You’ll know you have things hooked up correctly when the colors change.

At first I was testing by actually SSHing into the boxes, but this was a bit slower than needed. Since iTerm does this matching based on looking at your terminal output, you can just echo a test string and you should be able to see the profile change (or flash for a little bit if you have switching back to the default profile configured.)

Instant or not?

“Instant” in the trigger definition refers to whether iTerm will wait for a newline before checking the output or not. Generally if something is in an interactive prompt, you probably want instant. If you don’t have instant enabled, then your profile won’t change until the second time the prompt is loaded because a newline won’t be provided until you press return/enter to finish inputting your command. I’d imagine that using instant is slightly slower since it constantly looks at the output, so I’d recommend not using it unless you are in an interactive prompt situation.

Wrapping up

I think that the iTerm documentation is not yet perfect for this feature, so setting this up for my environment took a little time. But now that it’s written up, hopefully you can see how a setup like this works and can customize it for your environment with less effort. It’s not a perfect solution, but it has already been helpful. Also, it’s just cool to see your background color change when you run a command. I’d say the fifteen minute investment is worth the effort to not do something silly in a live server.


  1. See earlier note about having insufficient monitoring. If someone physically tells you your service is down or broken before you know about it, you don’t have enough monitoring in place! 

  2. Searching through previous history is especially awesome with fzf. I highly recommend it. 

  3. It subtly reminds me to drink more water. 

Squashing Intermittent Tests With ntimes

Today I want to share a tool that I have found indispensable for finding and fixing intermittent tests in test suites. It’s a little script I wrote, called ntimes.

Based on the commit logs to my dotfiles repository, until about 2014, to run the same command many times, I would press up in my terminal and press enter. While effective, this approach has the disadvantage of requiring me to be present at the machine and do manual work. I thought: there must be a better way.

So, probably by cribbing from somewhere and adding my own extensions, I made a script that could run an arbitrary command-line command multiple times and report a summary at the end. To use it, I would use something like:

$ ntimes 100 rspec spec/models/user_spec.rb:42

This would run that specific RSpec test or block one hundred times. At the end, the script prints how many times it succeeded and how many times it failed.

I also use Mac’s say command to get some audio feedback during test runs. It is a bit annoying to have it say “succeeded” whenever it successfully ran, but it can be useful to know when there was a failure (“guess I didn’t fix the issue…”). So I have it say a quick “failure” immediately when there is a failure (non-zero exit code of the command) and either “Success!” or “At least some failed…” at the end depending on the overall status. While it couples this script to Mac, you could probably extend it to use a more cross-platform approach.

Since it just operates based on the exit code, this command could be used on other programs to run them multiple times as well. If you don’t care about the successes or failures, you can still use it to run the same command multiple times.
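The core of the script can be sketched in Ruby (the real ntimes is a shell script, and it adds the say feedback described above; the names here are illustrative):

```ruby
# Run a command `count` times, tallying successes and failures
# by exit code, and report a summary at the end.
def ntimes(count, *command)
  successes = 0
  count.times { successes += 1 if system(*command) }
  failures = count - successes
  puts "#{successes} succeeded, #{failures} failed"
  failures.zero?
end
```

Something like ntimes(100, 'rspec', 'spec/models/user_spec.rb:42') would mirror the shell invocation above.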

Combining git bisect with ntimes as the command lets us see when a test likely started being flaky. It helps to have a small scope of what could be causing the test to be intermittent; if the cause is a test setup issue in another directory, then you might have to run your entire test suite. (This would take much longer, so you might have to run it overnight.) ntimes is also quite helpful when taking over an existing code base that might have intermittent tests.

Sometimes if I’m worried about a test that I just wrote, I’ll do a proactive ntimes 100 on it just to be sure that I am not committing a test that will soon fail. I generally try to do this if I have really complicated before/after blocks or if I might be polluting global state.

To install ntimes on your machine, download it, make it executable, and then put it somewhere on your PATH. Please let me know if you found it useful!

Softening Statements With Parentheticals

In our Slack organization and Github pull request review, I have noticed a small pattern of using parentheses to soften or clarify the statements that we make. Sometimes it is used by someone in a position of authority to emphasize that a comment is an idea, not a directive. Other times, it signals that something is not critical to address, but might be worth looking into.

I originally wanted to call this post “The Shipley Parenthetical” since Kyle uses it all of the time. Didn’t know if he thought of it this way / approves though. :)

In this post I’ll give a few examples of how this works and some thoughts on it.

Example 1

Just capturing a thought that would be helpful or is on the commenter’s mind, but wanting to be clear that there is no specific action item around this.

Will avoid further churn, IMO. (Down the road, we may even want a copywriter with clinical knowledge on or near the product team.)

Example 2

Hedging a comment about what might be the best with a bit of YAGNI:

That is probably my preferred order overall, with my additional comment that making it data instead of code might sidestep the matching problem entirely if it is worth it. (Not sure if it is yet.)

I think this is a good way of resolving the need to point out something that could be better while not committing us to doing it.

Example 3

Comment by author of pull request after a comment by reviewer to remove some code:

This interactor is actually part of the original code. Are we ready to start removing it?

(Not planning to restore here for that reason unless there are objections.)

I like this because it specifies a reason for not removing the code and states what the default behavior will be if an argument against it is not raised. It might be even better if it included a timeline for when the option to respond expires.

Example 4

In a review, after commenting on a specific issue, the next comment the reviewer makes is on the same issue in a different place. They said:

(Same thing here, sorry.)

I like this as a way of indicating that the reviewer is being empathetic while still showing that there are a few places where the same problem exists. This approach is better than “WRONG AGAIN” or equivalent statements.

Example 5

Interjecting into a pull request conversation to potentially clarify reviewer’s comment while still admitting imperfect knowledge:

I think @shipstar was saying for the right hand side of the expression, is there a foo field on baz? Or should it be bar? (At least that is how I read his comment.)

Example 6

Indicating that a comment is non-blocking, but that would be nice to have:

(Could also use .reject instead of .select if the intent is filtering. Seems infinitesimally more semantic.)

Any thoughts?

This is kind of a new format of a post for me. Basically I’m harvesting artifacts for free blog posts from Slack and Github. Not sure that I would try it again, but it might be helpful / interesting to someone. (What did you think?)

Night Working Computer Setup

Although I wrote about how to automatically turn the internet off at night, sometimes my schedule shifts later, or I am trying to get some side project work done and want to burn the midnight oil. In this post I’ll cover what I consider the best tools for an evening computer work environment.1 So what can you do besides changing your text editor colors?

Chrome

I use Chrome for my browser since it has many extensions and doesn’t seem to eat up memory at this point. For the nighttime setup I’m using one extension to make the new tab page black, and another to make most other pages dark.2

Dark Reader

To make most pages inverted and dampened, I highly recommend the Dark Reader Chrome extension.3 It is excellent. Sites look as good or better than their brighter counterparts. Github diffs in particular look really good. Here’s a screenshot of it in action:

Github with Dark Reader
Dark Reader Settings

Dark Reader has several hue and brightness settings that you can change (see right image), and you can toggle it globally or for a particular website. Generally I just turn it on globally with alt+shift+d at night and turn it off in the morning with the same shortcut.

It also handles images well. Unlike other plugins, it does not invert the colors, it just mutes them. Not inverting avoids blinding you with what are normally dark images.

I like using Dark Reader and it always makes me nostalgic for a dark theme that I set up on a personal wiki back in the day. I encourage you to install it and try it out on this page. I think it looks pretty cool!

Blank tab plugin

Dark Blank New Tab Page Extension

Generally when you open a new tab, Chrome presents you with your bookmarks or sites you have visited. I prefer to have a more minimal new tab page to avoid distracting me and to load more quickly. Generally I opt for Empty New Tab Page to simplify the new tab page down to just a blank page. However this plugin produces a blinding white background so it is less desirable for evening working. Fortunately, there is a similar and well-named plugin called Empty New Tab Page - Black which solves this problem by making the new tab black. So I generally have both installed and can disable whichever one I am not using at the moment.

Dark console

You can also change the default Chrome DevTools window from the default light background to a dark background. This makes web app debugging at night a bit more palatable.

Chrome DevTools Dark Theme

To change yours:

  • open the DevTools window
  • click on the dots in the upper right corner of the window
  • select “Settings”
  • choose the dark theme from the list of themes

PDF - MuPDF

I was reading Programming Elixir and Programming Phoenix for a side project that I am working on. The key features that I wanted in a night-time Preview (native OS X PDF viewer) replacement were:

  • Vim-like navigation (j/k to move up/down, etc.)
  • inverted viewing mode

I looked for a bit and found MuPDF. It is open source and available on Homebrew (mupdf package.) It gives Vim-like navigation and its inverted mode is quite good. Here is a side-by-side view of normal and inverted mode:

MuPDF normal mode MuPDF inverted mode

Actually running it is a bit tricky since it is normally a Linux program. The invocation that I found useful is:

mupdf-x11 <filename> &

This opens the file in the X11 version of the program and gives you back shell control (& launches it as a daemon). So not entirely easy to get running, but I have found it useful. To toggle night mode, just press i and it quickly inverts the colors.

Monitor setup

There are a few considerations to make with your displays.

Generally turning down the brightness is useful. The built-in laptop display has keyboard shortcuts so is easy to change. Most external monitors are a little trickier, but some have modes that you can configure so you can more easily toggle between day and night modes.

If your external monitor flashes you with a full screen of blue pixels whenever it is unplugged like mine does, then I would advise turning it off before unplugging it. Blue light is the most disruptive to melatonin production, and melatonin aids sleep and is a powerful antioxidant.

f.lux

Since we are on the topic of blue light, I will mention f.lux. This program shifts the color palette of the monitors to a more red tint automatically based on the time of the day. I am guessing that most people reading this have heard of f.lux, so I won’t cover this much further.

Any other tools?

Do you have a night-time setup for your computer? What tools have you found useful? Thanks!


  1. I use a Macbook Pro, but similar strategies would apply to other computers. 

  2. There is still a flash of white when opening a new tab, when conducting the first URL change (generally a search), or when navigating to a tab that was previously loaded before it applies the dark styles. But overall, they are a great improvement. 

  3. Some pages that are actually Chrome-specific windows won’t be inverted. For example, the Chrome settings tab or any Chrome store pages. This made testing dark plugins a bit trickier because at first I tried testing on the dark plugin’s page instead of a normal browser page.