Review: Managing the Design Factory

Title: Managing the Design Factory
Author: Donald Reinertsen
Length: 288 pages
Published: 1997
ISBN-10: 0684839911
ISBN-13: 9780684839912

This book analyzes product development processes from a lean perspective. The author starts by introducing the concept of a “design factory”, which highlights the differences between lean principles applied to manufacturing and lean principles applied to creating something new. The key differences include how information arrives and the repeatable nature of manufacturing versus the non-repeatable nature of design.

One key takeaway from reading this book is that the goal of creating new things is not to reduce the variability of creating them. Accepting some waste is often the most effective way to create something new, because that “waste” generates information, and information has value. If you are doing something with a known problem statement and a known solution, you are essentially just turning the design crank. Seek to increase throughput and improve flow before eliminating waste.

Much like Goldratt’s book, The Goal, Reinertsen uses company profitability as the lens through which to view business decisions. He advocates modeling projects and their intended ROI, preferring simple, useful models over opaque ones. He discusses many subjects from the perspective of optimizing for development expense, unit cost, performance, or speed of development, objectives that are mostly at odds with one another.

He has an interesting explication of queueing theory and information theory, and how these impact product development. One takeaway from the information theory discussion is that the information content of an event increases as its probability decreases. This coincides with my views on generating models (similar to Popper’s views on the subject). Essentially, tests should be written to have the maximum value if they fail. I believe Reinertsen would be a proponent of high-level tests. He also contends that if your tests would cause a competitor’s product to fail, you are likely testing too much.
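To make that concrete, here is a minimal sketch of the Shannon idea behind it (the framing and code are mine, not Reinertsen’s): a test that fails about half the time yields the most information per run, while one that almost always passes yields nearly none.

import math

def test_information_bits(p_fail):
    """Expected information from one run of a test that fails
    with probability p_fail (Shannon's binary entropy)."""
    if p_fail in (0.0, 1.0):
        return 0.0  # always passes or always fails: no news either way
    p_pass = 1.0 - p_fail
    return -(p_fail * math.log2(p_fail) + p_pass * math.log2(p_pass))

print(test_information_bits(0.5))    # 1.0 bit: the most informative test
print(test_information_bits(0.001))  # ~0.01 bits: almost always passes, little news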

Overall, I found this book to be a compelling read with insights clearly stated and a strong overall theme.

Full outline

A Tool For Your Toolbox: Risk Poker

This is an idea that I read about in Managing the Design Factory (detailed outline). Around page 226, Reinertsen says:

Let us start with the first source of technical risk, the high-risk subsystem. Which subsystems have high technical risk? To assess this we must perform our project-level analysis to determine how each program objective (expense, cost, performance, and speed) will impact profits. Then we assess each subsystem to determine how it might impact each of these factors. The easiest way to do this is to use a team meeting in which members estimate the downside risk for each subsystem in terms of magnitude and probability. This can be done by having each member assess risks independently, having a discussion on why different team members have rated risk differently, and then having team members reassess risks. The output of such a meeting is a surprisingly good understanding of project risk. Contrary to the common view that unknown risks are most important, most teams are surprisingly aware of where they are likely to fail in a program.

This reminds me so much of the Agile practice of planning poker that I’m dubbing it “risk poker”. Both practices make sense to me because they crowdsource the problem. I think software teams are more aware of a project’s risks than they typically give themselves credit for, which is what makes this practice so surprisingly valuable: it makes explicit the knowledge teams already have but are often hesitant to act on for one reason or another.
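Here is a minimal sketch of how the arithmetic might work (the subsystems and numbers are invented for illustration, not from the book): each member independently estimates the probability and magnitude of the downside for each subsystem, expected risk is probability times magnitude, and a wide spread between members flags where the discussion round should focus.

from statistics import mean

# One (probability, magnitude) estimate per team member per subsystem;
# magnitude here is hypothetical weeks of schedule slip.
estimates = {
    "power supply": [(0.2, 6), (0.5, 4), (0.3, 8)],
    "firmware":     [(0.6, 2), (0.7, 3), (0.5, 2)],
    "enclosure":    [(0.1, 1), (0.1, 2), (0.2, 1)],
}

for subsystem, votes in estimates.items():
    scores = [p * m for p, m in votes]
    # The mean is the team's expected downside; the spread is the cue
    # for the second round of discussion and re-estimation.
    print(f"{subsystem:>12}: expected risk {mean(scores):.2f}, "
          f"spread {max(scores) - min(scores):.2f}")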

While doing some basic research to see whether this term had already been used, I stumbled across protection poker, which deals more with security risks.

Perhaps I’m reinventing the wheel and risk poker is a well-known concept with a different name. Has anyone employed something like this on a project?

Signal and Meaning

On a long enough time line, the survival rate for everyone drops to zero.

Chuck Palahniuk, Fight Club

In the long run, every signal dies. Paper rots, genes mutate, forests burn, files corrupt. Error-correcting codes help, but they aren’t enough; perfect preservation of effort is not the way of the universe. Human languages evolve, and meaning breaks down with them.

Will my personal journal be lost within my lifetime to some accident arriving on a Poisson distribution? Will my grandchildren care enough to translate my life’s work to the technology of the day before it becomes unreadable? There is a difference between preservation and the ability to understand, as Robert Scoble points out.
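A back-of-the-envelope version of that worry, with an entirely invented rate: if catastrophic losses arrive as a Poisson process, the chance of escaping them for t years is e^(-rate × t).

import math

# Hypothetical rate: one catastrophic data-loss event per 50 years on average.
rate_per_year = 1 / 50
years = 60  # say, a remaining lifetime

p_no_loss = math.exp(-rate_per_year * years)  # Poisson: P(zero events in t years)
print(f"P(at least one loss in {years} years) = {1 - p_no_loss:.2f}")  # ~0.70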

Software is a signal. It stops when the cost of maintaining it exceeds the value derived. Strangely, software’s signal lasts longer than usually envisioned.

Some signals stay stronger longer. I know more about Plato than I do about most of the people on my street. Millions of people have come before me that I will never know anything about, billions living right now that I will never hear about. Does this imply a mediocre life? People are still riveted by JFK. Elvis lives. Surely there is a high signal strength for them. Much replication, remarkable, revelatory about the human condition. But most people will not be remembered outside of their family tree.

Spawning children spreads genetic and life information, with some lossiness.

Linus Torvalds said “Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it.” In the long run, though, all signals are lossy.

Before a signal starts, I imagine it looks like this:

[Figure: a flat line at zero]

From nothing comes something, if only briefly. Most signals look something like this:

[Figure: a brief spike that quickly decays back to zero]

A typical blog has a few posts, topical, uninteresting, no replication. The typical newsletter has a few editions and then fades to nothing. The signal returns to zero quickly.

Some signals, however, look more like this, and they are rather exceptional:

[Figure: a signal that stays strong, or keeps growing, over a long time]

Over the span of the universe, though, all signals that we can comprehend look like this:

[Figure: a nearly flat line, the signal an imperceptible blip]

The strength and length of any signal is too small to stand out from the nothingness and noise. It’s on that line somewhere, but too short to be meaningful.

My thoughts on what to do about this:

Spread signals. Great ideas will replicate faster; great works will be preserved longer.

Start signals. The best measure of a signal’s effectiveness is how long it lasts. Writing, music, software, companies, groups, buildings. Life is inherently a signal: something finite, an exception to entropy. One’s life might be defined by the signals it starts, the external manifestation of internal capacities.

How to Write a Work Journal

My friend Tyson works at a major insurance company. He recently shared with me a technique that he uses there: he keeps a work journal in which he writes down domain-specific things that he learns.

Journal entries might include a technique or rule of thumb he used, a page of a book that was helpful, or whom to contact in another department with questions about a specific area. Sometimes an entry records something that went particularly well, so that he can look back when times are tough and remember a time when he persevered. Other times it records something that didn’t go as well as he hoped. He also journals to clarify his own knowledge, so that when he needs information he has a place to look, and so that he can potentially transfer that knowledge to other people.

I recently experimented with this technique. I suppose that I like writing, so it came somewhat naturally. I keep this journal separate from my other writing because it is useful to have everything in one place: just the knowledge I have gained from working on a project. Writing it down regularly is a real help when writing documentation later, because I have explicitly stated what I recently learned; this keeps me from being blind to it and taking it for granted. It might be obvious to me at the end of a project why the build works the way it does, but along the way I needed to learn a lot of things and make various decisions. This seems to be a good way of documenting those decisions for later understanding and analysis.

Here’s an example of something I wrote (sanitized):

20100203 - 1531

While a PDF is printing to file (which is an excellent way
to save paper), let me recount how I added a new
HelpPDFBuilder to the application.  I modified the
HelpApplication.exe.config file pursuant to the rest of the
file.  I added references in the code to the new class.
Then, I expected everything to work.  However, the changes
in the HelpApplication.exe.config file did not get picked up
by the application.  The problem was two-fold.  First, I had
commented out the post-build steps, which meant that the
config file did not get reexamined.  I uncommented these,
and it still didn't work.  The next problem was that I did
not then copy the newly touched config file to the bin
directory.  This process is clumsy, and it would be nice if
it was improved somehow.

I would say that this document should probably contain things that I would be fine with anyone reading. If I have a beef with a coworker, I should probably talk with them about it or write it somewhere more private. This just ensures that I am writing things down that are actually useful and won’t bite me in the rear some time later.

Documenting things moderately well also pays off because the next person knows more and has fewer questions. The insurance company I mentioned actually has a process for transferring responsibilities on projects. At the end of someone’s career there, they also have a process for sitting the person down and doing a knowledge-transfer exercise in which different people interview them. This seems like a really good way to avoid losing information, and it has the side effect of validating the person’s career and letting them tell stories that will live on for some time to come.

The key here seems to be writing down things that seem obvious in retrospect but were difficult to acquire. It doesn’t need to be in a complicated format (mine’s a text file) or take a long time. Five or ten minutes a day spent recording the most important things you learned would be of immense value when you or someone else comes back to the project in a year’s time.
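To lower the friction even further, a few lines of script can append a timestamped entry for you. This is just a sketch; the filename and usage are my own invention, matching the timestamp style above.

import datetime
import sys

# Append a timestamped entry to a plain-text work journal.
# Usage: python journal.py "What I learned today..."
JOURNAL = "work-journal.txt"

stamp = datetime.datetime.now().strftime("%Y%m%d - %H%M")
entry = " ".join(sys.argv[1:])

with open(JOURNAL, "a", encoding="utf-8") as f:
    f.write(f"{stamp}\n\n{entry}\n\n")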

Does anyone else use something like this? What did I miss?

Guilty Developer Syndrome

I’ve noticed that when developers have worked on a project and someone else then takes it over, they seem to feel guilty about the decisions made on the project. When I ask them why certain decisions were made, they might sheepishly say, “Yeah… I know it’s not the best way to do this, and it’s not the way I would do it now.” Some get defensive, or cite external constraints like schedule pressure. But my belief is that developers should not feel so negative about their older projects.

Experience

I’ll admit that I got a mulligan. It was a Ruby on Rails project for an internal tool. I hadn’t worked with that technology stack much before. I kind of hacked something together based on the requirements, and it worked fine. There were few tests, and the design definitely did not use best practices. But it worked.

Next, I worked on a six-month-long Rails project using mostly TDD. After that, an opportunity arose to clean up the internal tool and add some features.

It felt really good. I felt like I knew the environment so much better, and could see where problems in the code could be easily solved using better Rails or Ruby techniques. It was quite exciting. At times, I was surprised that the code actually worked at all. I think that most developers rarely get this type of opportunity unless they are on a maintenance project, and I do think it was a valuable experience to see the aftermath of my own coding.

Synthesis

But afterwards I came to the realization that developers shouldn’t feel guilty about producing software. There are always going to be new technologies and practices to learn, tradeoffs to be made, and hindsight clearer than the present view. Should I refactor this class now or later? Should I make this easily extensible, or are we truly not going to need it? What should we work on first to reduce the technical risk on this project?

After every book I read on a subject, I learn new techniques and new ways of thinking about problems. But this shouldn’t stop activity in the present. I will never have one hundred percent of the knowledge I need, and the solutions that I come up with will only satisfice the problem.

I’m assuming, of course, that developers are really putting their best effort forth. This doesn’t absolve them from learning from their mistakes or from learning preemptively.

I’m just trying to say that developers shouldn’t feel bad because they didn’t know enough to prevent every problem or solve every dilemma in the best way possible. Seeing past mistakes for what they are indicates growth. Getting it right every time implies either skill stagnation or perfection. Which is more likely?

Have you had an experience where you wish you could go back and change something on a software project? A time when you read your own code and cringed? Where’s the balance between getting things right and getting things done? Consider leaving a comment!