How to Sleep Four and a Half Hours a Day

Warning: I have revised my thoughts on this subject after reading Piotr Wozniak’s excellent breakdown of polyphasic sleep and reflecting on my own experiences. While this post was not a strong promotion of polyphasic sleep, I should note that I no longer agree with cutting out sleep. I think quality sleep is vital to creativity, short- and long-term happiness, and, most importantly, one’s health. I am thankful that I had this experience, but would not recommend it to others.

What could you do if you only needed to sleep four hours a day? Assuming you sleep eight hours a day right now, your waking day would be 25% longer!

A few summers ago, I tried sleeping 4.5 hours a day while holding a full-time job. Here’s what I experienced as the result of doing this.

Preparation

Most people get their sleep in one big chunk, called monophasic sleep. Some get a consistent sleep pattern with two intervals. This biphasic sleep might include slightly shorter nighttime sleep and a siesta during the day. Polyphasic sleepers, on the other hand, split their sleep up into smaller chunks and spread them out throughout the day.

I first found out about polyphasic sleep by reading Steve Pavlina’s excellent and detailed series on polyphasic sleep. The idea of getting much more awake time was compelling to me. Steve used the “uberman” pattern of sleep, which was six naps of twenty minutes spread at equal intervals throughout each day. I researched various polyphasic patterns and figured out what I thought would be the best one for my schedule. For over a month I tried to stick to the following sleep schedule:

  • Core sleep: 1a to 4a
  • Nap: 9a
  • Nap: 2p
  • Nap: 9p

The naps were all timed to be exactly twenty-five minutes long, meaning I got around four and a half hours of sleep per day. The core sleep block was chosen to coincide with when I normally would always be sleeping.

Actual Quotes

Here are some actual quotes from the experience, recorded at the time.

How I started:

Woke up yesterday at 6:15 am to go to work, and then did not sleep until 1:00 am ostensibly because I was doing laundry. Slept until 4:00 am, then woke up and started working on some audio stuff with the csound library. Then I napped from 9:00-9:25. Pretty cool overall, this schedule, I don’t really feel tired at this point. There is kind of a nagging feeling every now and then and I wonder how long it is until the next nap, but I think that could be because my brain knows that my body is not used to the schedule yet and does not want to miss a nap or something inadvertently. On the ride to work, I felt somewhat foggy. I will say that when I laid down to nap at 9 that I felt mostly awake and then my brain just about seized when it realized that it was going to sleep. Like I could feel the REM sleep firing up like it sometimes does when I am really tired and lie down to rest and have a dream immediately. I rested with minimal dreaming for about 10 minutes (once laughing at the way my brain was just sputtering) and then dreamt hard….

Overall, I can’t really see any negatives of this routine, as it allows for only 4 hours of sleep per day with quite a bit of flexibility. I did feel a little groggy after being woken up by the alarm both times. The first was probably just because I was legitimately tired, but the second was no worse than normal. That could be because I basically only hard napped for 10 minutes or so, so I didn’t finish the cycle. When I was in the shower I felt like the dream consciousness was still present. At the current time, I kind of feel like I am floating a little, could be due to the somewhat increased amounts of water that I have been drinking. I am only marginally concerned about my 2:00 nap. At this point, I am trying for 1-4, 9, 2, 9 as the core and nap times. Overall, I feel that I have a ton of time!

So basically, I kicked off the process by staying up a long time to get my body ready to take naps. I think this helped me get into a rhythm earlier than I might have otherwise. Strangely, I didn’t write down many other experiences with this sleep method.

Experiences

I happened to do this at the height of summer, and since I wasn’t sleeping much during the night, the days felt extremely long. I quickly shifted to using military time (the 24-hour clock), because it made as much sense to be up at five AM or midnight as at five PM or noon. I found this helpful in keeping the days a little straighter, although they started to run together a bit. Time generally seemed more continuous than before.

There were two social aspects I noticed. One was that the world (rightfully) shuts down at night. There’s not a whole lot of social interaction possible. At the time I was living alone, so it was kind of spooky. Having the nap at 2100 wasn’t quite as difficult as I thought it would be at first, although there were still times that I was glad to leave somewhere or needed to crash.

Exercising at full strength was a bit tough. The body needs rest time to recover from sprint workouts and the like.

Napping at work was pretty straightforward, just needed to allocate some time and find an empty conference room or head out to my car. It seems controversial to write this, but at the time it was critical.

Generally I felt at least somewhat tired during the day, because work demanded a high level of focus. There were some oversleeps (a definite no-no), which made the adjustment period longer than necessary. In a line of work other than software development, I might have been able to get by without consistent focus.

As a result, I’m not sure that I saved much time overall, although I did get quite a bit more uninterrupted time at night. I was able to read quite a bit, and it would have probably been a good time to learn something that didn’t require intense mental focus. Learning to play the guitar might have been a good choice of something to do when I didn’t feel like thinking and wanted to stay awake. Steve also recommended cooking or baking as low thought things. I filled my time up, and it seemed useful at the time. However, in retrospect, I’m not sure how much I got done.

Stopping

As I did more research on altered sleep schedules, I came upon literature suggesting that night workers and other nocturnal people have significantly higher incidences of cancer than “normal” sleepers, because they are exposed to light when their bodies expect darkness. While light exposure might seem innocuous, our bodies produce a hormone called melatonin only in darkness; production shuts down when light, particularly blue wavelengths, reaches the eyes.

As a result, I was pretty paranoid about having much light around during the night, yet it was hard to stay awake without some degree of light. Computer time was limited at night, leaving reading and similar activities.

I also stopped because I worried about permanently messing up my circadian rhythm. Throughout my life I had kept a consistent sleeping schedule and generally gotten enough sleep. On this schedule I felt pretty flaky at times, and it was a fair amount of work to keep up with everything.

In summary, I didn’t want to trade good health and a good sleeping pattern for poor health, unpredictable sleep schedules, and a bit more time that was unusable because of my energy levels.

Afterwards

Since the experiment, I have been able to sleep for twenty-five minutes basically on demand by running through calming thoughts, as I did when I had trouble napping. I naturally wake up right around the twenty-four-minute mark and feel a lot better. I could not do this beforehand; the sleep schedule conditioned me to do it. I didn’t really like naps before the experiment, so I consider this a win.

Every summer since the experiment, when the days are quite long, I get a longing to do the experiment again. I think this is a mixture of nostalgia and some sort of hormonal memory.

Heroku - You don't have permissions to manage processses for this app

I was happily running my cron script manually on Heroku, when all of a sudden I got:

 !   You don't have permissions to manage processses for this app

Note the misspelling of “processes” above. I haven’t gotten this before or since. Was wondering if anyone else got something similar and what the solution might be.

Install pandoc from source on Ubuntu

Here are some basic notes on how I installed pandoc from source:

Installing onto my Ubuntu x86_64 machine

Generally followed the build from source instructions

sudo apt-get install ghc6 # this bootstraps so we can build ghc 7

get latest ghc

wget http://haskell.org/ghc/dist/7.0.3/ghc-7.0.3-src.tar.bz2
tar -jxvf ghc-7.0.3-src.tar.bz2
cd ghc-7.0.3
./configure
make
sudo make install

get cabal

wget http://lambda.galois.com/hp-tmp/2011.2.0.1/haskell-platform-2011.2.0.1.tar.gz
tar xvfz haskell-platform-2011.2.0.1.tar.gz
cd haskell-platform-2011.2.0.1
./configure
make
sudo make install

get pandoc

Edit ~/.cabal/config to uncomment user-install and set its value to False (so that we get a global installation), then:

sudo cabal install pandoc
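For reference, the relevant change inside ~/.cabal/config looks something like this (the exact commented default may vary by cabal version, so treat this as a sketch). The commented-out line

-- user-install: True

gets uncommented and flipped to

user-install: False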

pandoc -v

:)

My Lean Startup Machine Boston Experience Report

Lean Startup Machine weekends are a chance to apply the lean startup methodology by trying to build a startup in 48 hours. One of the key goals is demonstrating a process of learning. Learning takes various forms, and might include testing hypotheses through conversations, collecting qualitative and quantitative indicators of interest through surveys, creating MVPs to test acquisition methods and value propositions, getting signed letters of intent to buy a conceptual product, and maybe getting some cold hard cash in hand. Throughout the weekend, teams iterate and pivot as they learn more about their idea and the market through interacting with their customers. A panel of mentors works with the teams when they get stuck, and judges presentations at the end of the weekend to determine a winner.

When I first heard about the LSM in Boston, I signed up. I asked Wes Winham from Indy to join me, and we set off to Boston. This post talks about what I did and saw, and what I learned in the process.

My experience

I had been reading the various resources of the lean startup community for over a year, and was excited to actually put the principles into practice and test my skills. Doing is a different beast from reading. My personal goal was to work with whatever team I joined to demonstrate that we could work through difficulties and learn more about the lean startup methodology. I thought whatever team I joined had a chance to win, although winning was not my focus.

Our final presentation (PDF) gives a good overview of what we ended up doing over the course of the weekend.

The energy and intelligence of the people at this event were a little intimidating at first. There were fifty extremely smart and capable people present. I was a bit out of my element in a city halfway across the nation, but then just started talking with people and things seemed to go well. I enjoyed the networking before the event began. It was clear that everyone was feeling each other out a bit to figure out who they wanted to work with. A weekend is not a huge time commitment, but everyone wanted to have a good experience.

After a round of pitches, everyone cast votes for the different ideas. The top ten idea presenters were chosen as temporary team representatives, and everyone walked around and tried to form into teams. It was a bit chaotic. :)

There were a few ideas and people that seemed most interesting to me, so I walked around for a minute or two getting a sense of the room. I settled on a pitch that was billed as a last minute appointment filler. This could be something that slotted people in for busy service providers, or something that filled up empty slots for not so busy service-providers. It might have been something for service providers, or something more for end users. It seemed open-ended and valuable enough that we could explore some different solutions in the space throughout the course of the weekend. I was excited to work with the team of guys that were interested in the problem as well. It ended up that I was the only “developer” on the team. The rest of the guys billed themselves as “marketers”. (LSM also had the distinction of “designer”. These roles were pretty open and just served to try to get a range of skillsets at the conference.)

[In]validating our initial assumptions

We immediately sat down to meet each other, clarify the idea in our heads, and explore the general space around the problem. We came up with some assumptions about the problem space and started devising ways to test them. Our goal for the night was to come up with a survey that we could send to end users (clients of dentists, doctors, spas, etc.) of a last minute appointment filler system to see what their problems were with scheduling appointments. We mapped assumptions to questions in a way that reminded me of software requirements traceability.

I was out the next morning due to a short-term illness, and the fact that my production server at work wasn’t feeling that great either. When I felt good enough to fix the server problems, I headed back to the meeting place. The team had learned in the meantime that clients of service providers did not perceive scheduling appointments as a large pain point, and that they were likely unwilling to change doctors or other differentiated service providers for a quicker appointment turnaround time or for a discount (these were some related ideas that we toyed with).

So our assumption at this point was that the service providers would need to drive the adoption of this tool, and we set out to find what problems they had with the appointment process. We accomplished this by calling people we knew in fields that had appointments, and by cold-calling various dentists and spas because we guessed they might be open on a Saturday.

One of our initial assumptions on the service provider side was that our idea would probably only be useful if doctors had 95% or less utilization, and they cared about utilization. If these were true, then we had a problem that needed fixing and an angle to sell a solution. What the specific solution was would have been determined later in our process. However, we learned two final things…

It turned out most service providers did not have a problem with people cancelling at the last moment. Most people give at least eight hours of cancellation notice, partly because of late cancellation penalties and partly out of common courtesy. Also, basically everyone we talked to had a manual system in place: put people down on a list and call them when an appointment opens up. So they actually have really high utilization. Further, the two or three unfilled appointments per week don’t really concern the people at the office. We proposed an email or text messaging system to one provider who had a few openings per week, and she replied that they actually had that feature in their schedule management software, but didn’t care enough about the problem to figure out how to use it! This was pretty damning evidence that we were not on the right track.

We decided to take what we learned and pivot to a different market: high-end salons. We reasoned that these places might have the appointment woes we had ruled out for doctors and dentists. However, it was much the same story, compounded by a slightly different problem: most high-end salons actually employ independent contractors who are responsible for bringing in their own work.

Reeling at this point, we talked with a few people around home base. The team decided to leap. While a pivot would be grounded in the learning that we had already obtained, a leap implies that there is not much salvageable from our initial exploration of the problem space. We had a moment of despair, then started a huge brainstorming session.

At this point, the weekend gets a bit fuzzier. We had a very large volume of ideas, and we acted on a few of them to see if any had easy viability. Time definitely played a role at this point; we didn’t want to give the worst final presentation by having little learning to show. We discussed a lost key retrieval service, used Facebook to explore an online university lead generation idea, walked around MIT asking people about their caffeine preferences for a new product based on green tea, and called a couple of unsavory businesses. For me as an admittedly extroverted software person, I definitely needed to get out of my comfort zone to do this. I think coming in with a mindset of exploring people’s problems and genuinely trying to help enabled me to get over any hangups about talking to people about ideas that I had no intention of implementing that weekend. After the leap point, we mostly stuck to the customer discovery portion of the customer development process.

Toward the end of the weekend, I talked with a few people and tried to understand the process that they took and the key things that they learned. It would have been interesting to have a networking session before the final presentation. Our presentation went well overall, with LSM judges tweeting out some key findings we had. Our team won the “Old Yeller” award for taking our initial idea out in the back and shooting it. :)

So that was the narrative, next comes the reflection.

My Biggest Takeaways

Steve Blank says that no business model survives first contact with the customer. I now more clearly see the reason for this: people in the business have mental models that probably conflict with reality. Assumptions build a structure to view the basic details of a business. With a given unvalidated mental model, like a Platonic ideal, a number of valid businesses seem possible. However, once inaccuracies in the model itself are revealed, companies must learn how to adapt to this new knowledge. The key is figuring out invalid assumptions as quickly as possible to learn as quickly as possible.

I realized that validating ideas quickly was useful for staying detached. If I stay inside the building and do nothing but think or talk to others, I just get further from reality while becoming more certain of my idea. Boyd posited the spiraling confusion in his work on the OODA loop, and this idea clicked for me this weekend. Getting out and trying to quickly invalidate an idea through experiments seems to be a good way to let go of small ideas that I have every day with a clean conscience. “Kill your darlings”, I guess.

Seeking failure gave me unnatural feelings. There were several times throughout the weekend that were very high, and several that were very tough. At one point, we realized that our initial idea had very little validity, and so we chose to start from basically square one after doing some massive brainstorming. The emotions probably resembled a startup on a micro scale. I liked that the teams were pretty laid back though; everyone seemed to have a good sense of humor while learning a whole lot.

We ended up exploring about six ideas to some degree throughout the weekend, with many more than that killed during the idea generation process. With our total waking time of 24 hours, this meant the cycle time of our ideas was about four hours each. The way we accomplished this was by having a lot of parallelism after our initial idea did not seem to work out. This was partly because of the time format of the weekend. When we chose to leap, we only had half the weekend left. It was also partly because we didn’t really have a strong direction to head in.

Everyone agreed, though: it was much better to spend less than a day figuring out that our initial idea didn’t make much sense because of lack of demand than to spend months or years developing a solution that nobody actually wanted. This was the kind of success story that I went to Boston to get. We had an idea that seemed great to us on paper, but broke down when exposed to reality.

Toward the end of the weekend when we did a bit of retrospection, I realized that the quick exploration process seems to be a useful way to start a company. You start with a few seed ideas and some smart people that all want to use the process to find a viable idea, and you put the ideas through a customer discovery and customer development pipeline. Ideas that seem squashed are discarded and replaced with other ideas. Others are iterated and pivoted until they become more viable. In this way, you quickly learn what doesn’t work, and try to find some pain. It seems that everyone should stick to the same general process. The nice thing about the parallelism is that most of the time it takes a little while to get feedback that is actionable. So instead of waiting around, you can explore another idea at the same time.

Useful Tactics

Calling the west coast or Hawaii is a good way to “extend” customer development time when businesses on the east coast are closed. This is something that we wouldn’t have thought of without the time constraints imposed by the weekend format.

Finding the ultimate purchaser of the service has a lot of value. If you are talking with someone that does not know the value of a potential service to the business, then you are likely wasting time. We got value out of talking to receptionists and friends in the industries we targeted, but had we progressed much further we would have needed to talk to the ultimate purchasers of the system. I think Steve Blank talks about this in Four Steps to the Epiphany, but it was one of those things that I forgot about until it was a problem for us.

The next LSM

In retrospect, it might have been valuable to pursue some of the related ideas we generated while validating our initial idea. When we reached the point where we leaped, we might instead have chosen another angle of attack on the problem. Some ideas included more general inventory expiration problems: unused machinery rental, donuts that the donut seller wants to get rid of, a table in a restaurant that they are willing to fill for a 10% discount.

The team right next to us (Wes’s team) seemed to have a really organized process. One of the team members was an agile meeting facilitator, and they used pomodoros and personas and use-case mapping to work through the process. They actually ended up making $60 by the end of the weekend with no actual software product, with an additional $120 that came in by the end of Sunday! But I’ll let him tell that story.

I think there were times when we could have been more organized about writing down our assumptions, exactly how we would test them, and what we would count as a validated or invalidated assumption. Having clear assumptions with clear ways to measure them seems important. If you talk to fifty people and get a fantastic response to your problem, it’s a clear green light to continue with the idea. What’s harder is making that decision when the data is fuzzy. At that point, you need to adjust the wording to calibrate the questions you ask. Talking with people directly gives really quick feedback on what people respond to and what they don’t seem to care about, as I found out when wandering MIT’s campus and talking to people.

I felt like the presentations at the end were potentially the most interesting portion of the weekend, but in Boston’s case they were cut short. While presentations are typically six minutes plus three minutes of Q&A, each team only had six minutes of presentation and Q&A combined. I didn’t fully understand why the presentations were cut short, since this was the best time for each team to present what they learned and the process they took to get that knowledge. The process was important, and I imagine some of the teams came up with really useful insights they didn’t have time to share. Oh well, I guess that’s how it goes.

On the whole, I was glad that we got a chance to do some idea validation / invalidation over the course of the weekend. I’m quite glad that I went.

If you read through this article, thanks! This is the kind of experience report that I wish everyone gave after going to interesting conferences. At the least, hopefully it’s a good starting point for conversations that we have in the future.

Fixing Sporadically Failing Specs

When developing with a large suite of unit tests or automated specifications, inevitably a test that should pass will fail for some reason. More difficult is the case where the test or spec fails only intermittently. Lately I’ve taken the approach of keeping a file around that records each time I run into a problematic spec. When a spec has failed five times, it’s time to refactor it to be less troublesome.

The first thing I do is look at the spec and the code it exercises and make sure that nothing is actually broken. Next, I make sure the spec is actually adding value. If it’s slightly broken and actually useless, we might as well get rid of it now and not spend more time on it. Generally, though, the functionality covered by a sporadically failing test will be correct, and the spec will add some value.

At this point, I run the test enough times to see how consistently it fails, and where. With rspec, I set up spork and configure rspec to look for an external DRb server. If you want clarification of this process, please leave a comment; I’m just trying to get this out of my head at the moment.
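For reference, pointing rspec at the DRb server was a one-line change in my setup. This sketch assumes an RSpec 2-era project where command-line options live in the project’s .rspec file; spork itself runs in a separate terminal (just `spork`, after adding the gem to your Gemfile):

```shell
# Append the --drb flag so plain `rspec` invocations are forwarded
# to the running spork DRb server instead of booting Rails each time
echo '--drb' >> .rspec
cat .rspec
```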

spork allows me to run the file with the problematic spec much more quickly; I’m talking several orders of magnitude. Then you can run the spec in a shell loop to gather the results. The snippet below runs the troublesome spec fifty times and appends each run’s output to a file.

$ for i in {1..50}; do
      rspec spec/models/troublesome_model_spec.rb >> temp.txt
  done

Then you can easily inspect the problem by searching for “1 fail” in temp.txt.
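If you prefer a quick tally over eyeballing the file, you can count passing and failing runs with grep, assuming rspec’s standard “N examples, M failures” summary lines (the temp.txt contents here are fabricated for illustration):

```shell
# Fabricated stand-in for the summary lines a real temp.txt would contain
printf '5 examples, 0 failures\n5 examples, 1 failure\n5 examples, 0 failures\n' > temp.txt

grep -c '0 failures' temp.txt   # runs that passed -> prints 2
grep -c ' 1 failure' temp.txt   # runs with one failure -> prints 1
```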

The next step is to actually fix the problem in the spec. When you believe you have fixed it using your normal development workflow (changing the spec and running it to see that it passes), invoke the shell loop again. Give the output file a different name, or delete the previous file first, so you don’t get confused by earlier failures. This process ensures that the problem is indeed taken care of and won’t pop up again.

Obviously if you’re doing continuous deployment, failing once every fifty times might be problematic. :)