RR for Test Doubles Presentation

Here is a presentation that I gave to the Indy.rb Ruby user group in Indianapolis. It covers the advantages of using RR (Double Ruby) for concise mocking and stubbing and gives some real-life use cases to inspire thinking about testing with test doubles.


form was removed from Rails and is now available as a plugin.

I got this strange error in my Rails application:

DEPRECATION WARNING: form was removed from Rails and is now available as a plugin. Please install it with rails plugin install git://github.com/rails/dynamic_form.git.

What ended up happening was that I forgot to pass a local named ‘form’ to a partial that expected it. Since the local was never set, the bare name form fell through to the old dynamic form helper, which now exists only as a deprecation stub, so instead of a more obvious undefined-local error, Rails warned that the way I was using form was deprecated.
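For reference, here is a minimal sketch of the mistake and the fix (the file, model, and variable names are hypothetical, not from my actual app):

```erb
<%# app/views/users/_fields.html.erb -- a partial that expects a `form` local %>
<%= form.text_field :name %>

<%# Broken: renders the partial without the local, so the bare name
    `form` resolves to the removed dynamic_form helper stub and
    triggers the deprecation warning %>
<%= render "fields" %>

<%# Fixed: pass the form builder in explicitly as the `form` local %>
<%= form_for @user do |f| %>
  <%= render "fields", form: f %>
<% end %>
```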

Signs You Aren't Really Building a Minimum Viable Product

With the popularization of lean startups, minimum viable products (MVPs) have recently entered the business and software lexicon. Who can argue against building only what you actually need?

Many people seem to interpret MVP as the first iteration of their product. Once they build that version, they can add more features, and users of the product will be even happier than before. Businesspeople sometimes talk about needing to build an MVP so they can launch and raise more funding.

If you are building out half of a product as your first stab, you might as well just call it version one or iteration zero or something like that. No sense in polluting the MVP term.

In this article, I will argue that most so-called “MVPs” are not really MVPs because they are not focused on the process of learning and are, as a result, wasteful. There is a lot of value in not trying to build too much, and that low-hanging fruit likely accounts for the proliferation of the term. But much of the value of an MVP lies in testing the risky assumptions every startup has.

Definition of minimum viable product

Well, what is a minimum viable product, anyway?

A Minimum Viable Product has just those features that allow the product to be deployed, and no more. The product is typically deployed to a subset of possible customers, such as early adopters that are thought to be more forgiving, more likely to give feedback, and able to grasp a product vision from an early prototype or marketing information. It is a strategy targeted at avoiding building products that customers do not want, that seeks to maximize the information learned about the customer per dollar spent. “The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” The definition’s use of the words maximum and minimum means it is decidedly not formulaic. It requires judgment to figure out, for any given context, what MVP makes sense.

A MVP is not a minimal product, it is a strategy and process directed toward making and selling a product to customers. It is an iterative process of idea generation, prototyping, presentation, data collection, analysis and learning. One seeks to minimize the total time spent on an iteration. The process is iterated until a desirable product-market fit is obtained, or until the product is deemed to be non-viable.

Wikipedia on MVPs, all emphasis mine

The reason landing pages are so popular as a form of MVP is not just that they are easy to build. They often are very easy to build, but that is not the whole reason. The real reason is that they often give a good bang for the buck (time spent, ROI, etc.) for testing your current assumptions. With a landing page, you can test whether people understand your idea, collect metrics on the best ways to attract users, and see whether anyone at all will sign up.

Yes, at certain points, your MVP might actually be a landing page with a value proposition and a way of learning from it. It might be going to a bus stop and convincing people to get in your car to test a new carpool web app idea. Sometimes it’s a super-limited version of your product, meant to test a set of assumptions. It could be a paper prototype that you show to earlyvangelists to talk about your value proposition. It might be you just pretending to be a magical algorithm that solves your supposed customer needs.

You should start with the riskiest assumptions that you can test and try to make them fail. Here is why you should start at the bottom of the risk validation pyramid.

What do you want to learn?

Here are my concerns when the term MVP is used loosely:

  • there is little emphasis on what assumptions the MVP seeks to [in]validate,
  • there are no clear success or failure criteria, and
  • as a result, an easier way to learn the same thing might be overlooked.

Here’s how Eric Ries frames this anti-pattern:

Most entrepreneurs approach a question like [“how many customers will sign up for a free trial given what we believe is enough information?”] by building the product and then checking to see how customers react to it. I consider this to be exactly backward because it can lead to a lot of waste. First, if it turns out that we’re building something nobody wants, the whole exercise will be an avoidable expense of time and money. If customers won’t sign up for the free trial, they’ll never get to experience the amazing features that await them. Even if they do sign up, there are many other opportunities for waste. For example, how many features do we really need to include to appeal to early adopters? Every extra feature is a form of waste, and if we delay the test for these extra features, it comes with a tremendous potential cost in terms of learning and cycle time. The lesson of the MVP is that any additional work beyond what was required to start learning is waste, no matter how important it might have seemed at the time.

Eric Ries, The Lean Startup pages 96-97

Let’s pretend you have an idea for a software product. You think through all of the different features and what you think people would most like, and select what you consider to be the most valuable, easy to make, and coherent subset of those features to build in a month. Then you build those features. You launch the product, and no one seems to be interested. What do you do?

If you create something and don’t have a good way of learning from what you are doing, your options boil down to:

  1. Retry: Change the product in some way and try again. Maybe it was that non-essential feature that you left out of the last release.
  2. Travel: Pivoting (another often imprecisely used term) is moving in a slightly different direction with one foot grounded in learning. Traveling is heading in some direction with the product or feature without having validated your hypothesis.
  3. Fail: Quit without having learned much. Try another idea.

(I originally thought of this in terms of abort, retry, fail, but since the failure of that error message centered on the confusing nature of its words, I decided to make these a bit clearer instead.)

All of these outcomes are undesirable due to the amount of waste involved (some sum of human energy, money, and time spent without much learning). Again, this probably stems from not testing risky hypotheses at a small scale.

Poorly defined expectations lead to fuzziness at the time you most need clarity. When an experiment is done, you should have a clear sense of “is this the outcome that I wanted to see or not?” If the answer is a clear no, you can think about what you might need to do to get a different outcome. If the answer is yes, or better than you expected, then you can continue with confidence. If you don’t say up-front what customer behavior you expect from a given action, you’re left with lukewarm results that anyone can interpret in any way.

The overhead of learning

MVP, despite the name, is not about creating minimal products. If your goal is simply to scratch a clear itch or build something for a quick flip, you really don’t need the MVP. In fact, MVP is quite annoying, because it imposes extra overhead. We have to manage to learn something from our first product iteration. In a lot of cases, this requires a lot of energy invested in talking to customers or metrics and analytics.

Eric Ries on MVPs

I like this quote because it introduces the idea that thinking about what we want to learn is critical when we build. The build-measure-learn (BML) loop is how things play out in time. However, we should first decide what we want to learn, then how we are going to measure it, and let those dictate what we build. In other words, the BML loop should be thought through in reverse to ensure that the experiment results in learning. The quicker we can get through that cycle, the faster our startup moves. Without learning, we aren’t really going through the cycle; we are cutting the feedback out of the feedback loop.

The key questions

So here are my new questions for MVPs. If someone says they intend to “build an MVP” (the build part itself might be a tell), I am going to ask:

  • What are you trying to learn with this particular MVP?
  • What data are you collecting about your experiment?
  • What determines the success or failure of the experiment?

What are Some Great Posts on Debugging Tough Problems?

Even if you are not running the same technology as someone else, you can gain insight into how they solve hairy problems by reading through their summaries of strange fixes.

Today there was a great post on debugging CSRF problems in Rails. I thought it was interesting, and I had run into something similar, though not nearly as convoluted. It was useful to see the steps the author took to find the root cause of the problem, tracing it back to the change in the Rails code base that had invalidated his assumptions.

What are some great debugging posts that you have read in the past? (Maybe even something on Reddit or Hacker News) Share them in the comments!

Rails Raw SQL Insert -- Time Wrong

If the time is wrong (off by several hours) on a record that you insert directly into the database from Rails, make sure you are converting the timestamp into the right time zone before serializing it. For example, instead of DateTime.now, try DateTime.now.utc if you are using UTC as your default time zone.
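A quick sketch of the difference in plain Ruby (note that DateTime#utc itself is added by ActiveSupport; the stdlib equivalent is new_offset(0)):

```ruby
require "date"

local = DateTime.now         # carries the machine's local UTC offset
utc   = local.new_offset(0)  # same instant expressed in UTC; this is
                             # what ActiveSupport's DateTime#utc returns

# Formatting each for a raw SQL INSERT produces strings that differ by
# the local UTC offset, even though both name the same moment in time.
puts local.strftime("%F %T")
puts utc.strftime("%F %T")
```

If the database column is stored and interpreted as UTC, inserting the first string shifts the recorded time by your local offset.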