Heroku - You don't have permissions to manage processses for this app

I was happily running my cron script manually on Heroku when, all of a sudden, I got

 !   You don't have permissions to manage processses for this app

Note the misspelling of “processes” above. I haven’t gotten this error before or since. I was wondering if anyone else has seen something similar and what the solution might be.

Install pandoc from source on Ubuntu

Here are some basic notes on how I installed pandoc from source:

Installing onto my Ubuntu x86_64 machine

I generally followed the official build-from-source instructions.

sudo apt-get install ghc6 # this bootstraps so we can build ghc 7

get latest ghc

wget http://haskell.org/ghc/dist/7.0.3/ghc-7.0.3-src.tar.bz2
tar -jxvf ghc-7.0.3-src.tar.bz2
cd ghc-7.0.3
./configure
make
sudo make install

get cabal

wget http://lambda.galois.com/hp-tmp/2011.2.0.1/haskell-platform-2011.2.0.1.tar.gz
tar xvfz haskell-platform-2011.2.0.1.tar.gz
cd haskell-platform-2011.2.0.1
./configure
make
sudo make install

get pandoc

Edit ~/.cabal/config to uncomment user-install and set its value to False (so that we get a global installation), then:

sudo cabal install pandoc
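If you prefer to script that config edit, a sed one-liner like the following should work. This is a sketch under one assumption: that the default config ships the setting as a commented-out `-- user-install: True` line (check your file first). The demo operates on a stand-in file rather than the real ~/.cabal/config:

```shell
# Create a stand-in for ~/.cabal/config containing the assumed default line.
printf -- '-- user-install: True\n' > cabal_config_sample
# Uncomment the setting and flip it to False for a global install.
sed -i 's|^-- user-install: True|user-install: False|' cabal_config_sample
cat cabal_config_sample
```

To apply it for real, point sed at ~/.cabal/config instead of the sample file.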

pandoc -v

:)

My Lean Startup Machine Boston Experience Report

Lean Startup Machine weekends are a chance to apply the lean startup methodology by trying to build a startup in 48 hours. One of the key goals is demonstrating a process of learning. Learning takes various forms, and might include testing hypotheses through conversations, collecting qualitative and quantitative indicators of interest through surveys, creating MVPs to test acquisition methods and value propositions, getting signed letters of intent to buy a conceptual product, and maybe getting some cold hard cash in hand. Throughout the weekend, teams iterate and pivot as they learn more about their idea and the market through interacting with their customers. A panel of mentors works with the teams when they get stuck, and judges presentations at the end of the weekend to determine a winner.

When I first heard about the LSM in Boston, I signed up. I asked Wes Winham from Indy to join me, and we set off to Boston. This post talks about what I did and saw, and what I learned in the process.

My experience

I had been reading the various resources of the lean startup community for over a year, and was excited to finally put the principles, and my own skills, to the test. Doing is a different beast from reading. My personal goal was to work with whatever team I joined to demonstrate that we could work through various difficulties and learn more about the lean startup methodology. I thought whatever team I joined had a chance to win, although winning was not my focus.

Our final presentation (PDF) gives a good overview of what we ended up doing over the course of the weekend.

The energy and intelligence of the people at this event were a little intimidating at first. There were fifty extremely smart and capable people present. I was a bit out of my element in a city halfway across the nation, but I just started talking with people and things seemed to go well. I enjoyed the networking before the event began. It was clear that everyone was feeling each other out a bit to figure out who they wanted to work with. A weekend is not a huge time commitment, but everyone wanted to have a good experience.

After a round of pitches, everyone cast votes for the different ideas. The top ten idea presenters were chosen as temporary team representatives, and everyone walked around and tried to form into teams. It was a bit chaotic. :)

There were a few ideas and people that seemed most interesting to me, so I walked around for a minute or two getting a sense of the room. I settled on a pitch that was billed as a last minute appointment filler. This could be something that slotted people in for busy service providers, or something that filled up empty slots for not-so-busy service providers. It might have been something for service providers, or something more for end users. It seemed open-ended and valuable enough that we could explore some different solutions in the space throughout the course of the weekend. I was excited to work with the team of guys who were interested in the problem as well. It ended up that I was the only “developer” on the team. The rest of the guys billed themselves as “marketers”. (LSM also had the distinction of “designer”. These roles were pretty open and just served to try to get a range of skillsets at the conference.)

[In]validating our initial assumptions

We immediately sat down to meet each other, clarify the idea in our heads, and explore the general space around the problem. We came up with some assumptions about the problem space and started devising ways to test them. Our goal for the night was to come up with a survey that we could send to end users (clients of dentists, doctors, spas, etc.) of a last minute appointment filler system to see what their problems were with scheduling appointments. We mapped assumptions to questions in a way that reminded me of software requirements traceability.

I was out the next morning due to a short-term illness and the fact that my production server at work wasn’t feeling that great either. When I felt good enough to fix the server problems, I headed back to the meeting place. The team had learned in the meantime that clients of service providers did not perceive the problem of scheduling appointments with service providers to be a large pain point, and that they were likely unwilling to change doctors or other differentiated service providers for a quicker appointment turnaround time or for a discount (these were some related ideas that we toyed with).

So our assumption at this point was that the service providers would need to drive the adoption of this tool, and we set out to find what problems they had with the appointment process. We accomplished this by calling people we knew in fields that involved appointments, and by cold-calling various dentists and spas because we guessed they might be open on a Saturday.

One of our initial assumptions on the service provider side was that our idea would probably only be useful if doctors had 95% or less utilization, and if they cared about utilization. If both were true, then we had a problem that needed fixing and an angle to sell a solution. What the specific solution was would have been determined later in our process. However, we learned two final things…

It turned out most service providers did not have a problem with people cancelling at the last moment. Most people give at least eight hours of cancellation notice, partly because of late cancellation penalties and partly because of common courtesy. Also, basically everyone we talked to had a manual system in place to put people down on a list and call them in the event that an appointment opened up. So they actually have really high utilization. Further, the two or three unfilled appointments per week don’t really concern the people at the office. We proposed an email or text messaging system to one provider who had a few openings per week, and she replied that they actually had that feature in the schedule management software that they used, but didn’t care enough about the problem to figure out how to use it! This was pretty damning evidence that we were not on the right track.

We decided to take what we learned and pivot to a different market: high-end salons. We reasoned that these places might actually have the appointment woes we had just ruled out for doctors and dentists. However, the story was much the same, and it was compounded by a slightly different problem: most high-end salons employ independent contractors who are responsible for drumming up their own business.

Reeling at this point, we talked with a few people around home base. The team decided to leap. While a pivot would be grounded in the learning that we had already obtained, a leap implies that there is not much salvageable from our initial exploration of the problem space. We had a moment of despair, then started a huge brainstorming session.

At this point, the weekend gets a bit fuzzier. The sheer volume of ideas that we had was very large, and we took action on a few of them to try to see if there was any easy viability. Time definitely played a role at this point. We didn’t want to show up to the final presentation with the worst presentation by showing little learning. We discussed a lost key retrieval service, used Facebook to explore an online university lead generation idea, walked around MIT asking people about their caffeine preferences for a new product based on green tea, and called a couple of unsavory businesses. Even as an admittedly extroverted software person, I definitely needed to get out of my comfort zone to do this. I think coming in with a mindset of exploring people’s problems and genuinely trying to help enabled me to get over any hangups about talking to people about ideas that I had no intention of implementing that weekend. After the leap point, we mostly stuck to the customer discovery portion of the customer development process.

Toward the end of the weekend, I talked with a few people and tried to understand the process that they took and the key things that they learned. It would have been interesting to have a networking session before the final presentation. Our presentation went well overall, with LSM judges tweeting out some key findings we had. Our team won the “Old Yeller” award for taking our initial idea out in the back and shooting it. :)

So that was the narrative; next comes the reflection.

My biggest takeaways

Steve Blank says that no business model survives first contact with the customer. I now more clearly see the reason for this: people in the business have mental models that probably conflict with reality. Assumptions build a structure for viewing the basic details of a business. With a given unvalidated mental model, like a Platonic ideal, any number of valid businesses seem possible. However, once inaccuracies in the model itself are revealed, companies must learn how to adapt to this new knowledge. The key is identifying invalid assumptions as quickly as possible, since that is what sets the pace of learning.

I realized that validating ideas quickly was useful for staying detached. If I stay inside the building and do nothing but think or talk to others, I just get further from reality while becoming more certain of my idea. Boyd described this spiraling confusion in his work on the OODA loop, and the idea clicked for me this weekend. Getting out and trying to quickly invalidate an idea through experiments seems to be a good way to let go of small ideas that I have every day with a clean conscience. “Kill your darlings”, I guess.

Seeking failure felt unnatural. There were several times throughout the weekend that were very high, and several that were very tough. At one point, we realized that our initial idea had very little validity, so we chose to start from basically square one after doing some massive brainstorming. The emotions probably resembled those of a startup on a micro scale. I liked that the teams were pretty laid back though; everyone seemed to have a good sense of humor while learning a whole lot.

We ended up exploring about six ideas to some degree throughout the weekend, with many more than that killed during the idea generation process. With our total waking time of 24 hours, this meant the cycle time of our ideas was about four hours each. The way we accomplished this was by having a lot of parallelism after our initial idea did not seem to work out. This was partly because of the time format of the weekend. When we chose to leap, we only had half the weekend left. It was also partly because we didn’t really have a strong direction to head in.

Everyone agreed though: it was much better to spend less than a day figuring out that our initial idea didn’t make much sense because of lack of demand than to spend months or years developing a solution that nobody actually wanted. This was the kind of success story that I went to Boston to get. We had an idea that seemed great on paper but broke down when exposed to reality.

Toward the end of the weekend when we did a bit of retrospection, I realized that the quick exploration process seems to be a useful way to start a company. You start with a few seed ideas and some smart people who all want to use the process to find a viable idea, and you put the ideas through a customer discovery and customer development pipeline. Ideas that get squashed are discarded and replaced with other ideas. Others are iterated and pivoted until they become more viable. In this way, you quickly learn what doesn’t work, and try to find some pain. It seems that everyone should stick to the same general process. The nice thing about the parallelism is that it usually takes a little while to get actionable feedback, so instead of waiting around, you can explore another idea at the same time.

Useful Tactics

Calling the west coast or Hawaii is a good way to “extend” customer development time when businesses on the east coast are closed. This is something that we wouldn’t have thought of without the time constraints imposed by the weekend format.

Finding the ultimate purchaser of the service has a lot of value. If you are talking with someone that does not know the value of a potential service to the business, then you are likely wasting time. We got value out of talking to receptionists and friends in the industries we targeted, but had we progressed much further we would have needed to talk to the ultimate purchasers of the system. I think Steve Blank talks about this in Four Steps to the Epiphany, but it was one of those things that I forgot about until it was a problem for us.

The next LSM

In retrospect, it might have been valuable to keep some of the related ideas that we generated during our initial round of validation. When the time came to leap, we might instead have chosen another angle of attack on the problem. Some ideas included more general inventory expiration problems: unused machinery rentals, donuts that the donut seller wants to get rid of, a table in a restaurant that they are willing to fill for a 10% discount.

The team right next to us (Wes’s team) seemed to have a really organized process. One of the team members was an agile meeting facilitator, and they used pomodoros and personas and use-case mapping to work through the process. They actually ended up making $60 by the end of the weekend with no actual software product, with an additional $120 that came in by the end of Sunday! But I’ll let him tell that story.

I think there were times when we could have been more organized in our process of writing down assumptions, exactly how to test them, and what we would accept as a validated or invalidated assumption. Having clear assumptions with clear ways to measure them seems important. If you talk to fifty people and get a fantastic response to your problem, it’s a clear green light to continue with the idea. What’s harder is making that decision when the data is fuzzy. At that point, you need to adjust the wording to calibrate the questions you ask. Talking with people directly gives really quick feedback on what people respond to and what they don’t seem to care about, as I found out when wandering MIT’s campus and talking to people.

I felt like the presentations at the end were potentially the most interesting portion of the weekend, but in Boston’s case they were cut short. While a team typically gets six minutes plus three minutes of Q&A, each team only had six minutes of combined presentation and Q&A. I didn’t fully understand why the presentations were cut short when they were the best time for each team to present what they learned and the process they took to get that knowledge. The process was important, and I would imagine that some of the teams came up with some really useful insights they didn’t have time to share. Oh well, I guess that’s how it goes.

On the whole, I was glad that we got a chance to do some idea validation / invalidation over the course of the weekend. I’m quite glad that I went.

If you read through this article, thanks! This is the kind of experience report that I wish everyone gave when they go to interesting conferences. At the least, hopefully it’s a good starting point for the conversations we’ll have in the future.

Fixing Sporadically Failing Specs

When developing with a large suite of unit tests or automated specifications, tests that should pass inevitably fail for some reason. More difficult is the case where a test or spec fails only intermittently. Lately I’ve taken the approach of keeping a file around that records when I run into a problematic spec. When a spec has failed five times, it’s time to refactor it to be less troublesome.

The first thing I do is look at the spec and the code it executes to make sure that nothing is actually broken. Next, I ensure that the spec is actually adding value. If it’s slightly broken and is actually useless, we might as well get rid of it now and not spend more time on it. Generally though, the functionality covered by a sporadically failing test will be correct, and the spec will add some value.

At this point, I run the test enough times to see whether it fails consistently, and where it fails. With RSpec, I set up Spork and configure RSpec to look for an external DRb server. If you want clarification of this process, please leave a comment. I’m just trying to get this out of my head at the moment.
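The RSpec side of that hookup is just a flag. A minimal sketch, assuming RSpec 2-era tooling where Spork runs as a DRb server that you start separately in another terminal:

```shell
# With Spork running (`spork` in another terminal), this flag makes every
# plain `rspec` invocation connect to its DRb server instead of rebooting
# the app environment on each run.
echo '--drb' >> .rspec
cat .rspec
```

After this, `rspec spec/models/troublesome_model_spec.rb` picks up the flag automatically.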

Spork allows me to run the file with the problematic spec much more quickly; I’m talking several orders of magnitude. You can then run the spec via your shell in a loop to collect the results. The snippet below runs the troublesome spec fifty times and appends each run’s output to a file.

$ for i in {1..50}; do rspec spec/models/troublesome_model_spec.rb >> temp.txt; done

Then you can easily inspect the problem by searching for “1 fail” in temp.txt.
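To tally the results without eyeballing the file, you can grep the summary lines instead. A sketch, assuming each run ends with RSpec’s usual “N examples, M failures” summary line; the temp.txt contents below are simulated rather than produced by real spec runs:

```shell
# Simulate the summary lines from 50 runs: 47 passes, 3 single failures.
printf '1 example, 0 failures\n%.0s' {1..47} > temp.txt
printf '1 example, 1 failure\n%.0s' {1..3} >> temp.txt

# Count clean runs vs. runs where the spec failed.
passes=$(grep -c '0 failures' temp.txt)
fails=$(grep -c ', 1 failure$' temp.txt)
echo "passes=$passes fails=$fails"
```

With real output the counts tell you the failure rate directly, which is handy for confirming a fix later.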

The next part is to actually fix the problem in the spec. When you believe that you have fixed it using your normal development workflow (changing the spec and running it to see that it passes), you can invoke the shell loop again. Give the output file a different name, or delete the previous file first, so you don’t get confused by old failures. This process ensures that the problem is indeed taken care of and won’t pop up again.

Obviously if you’re doing continuous deployment, failing once every fifty times might be problematic. :)

Run Local Scripts on Heroku

In order to run an arbitrary script on Heroku, you can use the Heroku console. They state that “you can use heroku console as a stand-in for Rails’s script runner”, but one key component of the script runner is that you can run files. You can run files in your git repository that are deployed to the instance that you want to run them on by using the method advocated by Steve Wilhelm.

What if the file is not checked in or you need to make rapid modifications for prototyping? If you have a lot of code that you want to write and you want to be able to easily modify it, try the solution below. Manually typing in many multi-line statements at the console (and retyping when you make an inevitable mistake) is frustrating.

First, create a file called something like semicolonify.rb, and make it executable:

#!/usr/bin/env ruby
# Collapse a Ruby file onto one line: drop comment-only lines, join the rest with ';'.
puts File.read(ARGV[0]).split("\n").delete_if { |line| line =~ /^\s*#/ }.join(';')

Then you can run semicolonify.rb on your ruby file, and pipe the output to a temporary file:

$ ./semicolonify.rb your_actual_ruby_file.rb > temp.rb
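As a sanity check of what the transformation does, here is a hypothetical demo file run through the same comment-stripping-and-joining logic, expressed in plain shell (grep drops the comment-only lines, paste joins the rest with semicolons):

```shell
# demo.rb is a made-up three-line example with one comment-only line.
printf '# setup\na = 1\nputs a + 1\n' > demo.rb
grep -v '^[[:space:]]*#' demo.rb | paste -sd ';' -
# → a = 1;puts a + 1
```

The resulting single line is what ends up in temp.rb and gets handed to the console.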

After that, you can run something like:

$ heroku console --app APPNAME "$(cat temp.rb)"

This will run the commands you specify through the heroku console, one line at a time. You can add your own functions, etc. Based on my understanding, you need to run semicolonify.rb because the console gets confused when it sees blocks that are not terminated. The only limitation (based on my implementation) is that trailing comments cause things to get mucked up. Also, make sure you don’t have a long-running (> 30 second?) script, or Heroku will cut you off. There are probably additional things that I need to add to the semicolonify.rb script, but this seems to work for now.