In previous blogs I introduced the concept of antifragility with a focus on how it helps us deal with risks during agile development. I also blogged about how having flexible scope makes us antifragile. In this post, I explain why limiting work in process (WiP) and keeping batch sizes small also makes us antifragile.
Why is antifragile a desirable property? Because in the presence of disorder, things that are antifragile tend to get better rather than get worse. In contrast, fragile things are more likely to be harmed than helped by disorder. Most people believe that robust is the opposite of fragile—it is not. Robust things are not harmed by disorder nor do they benefit from it.
For example, with respect to stock price volatility, stock options are antifragile because we have more to gain (virtually unlimited upside when the stock price increases) than to lose (at most the premium we paid for the options, even if the stock price goes to zero).
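This payoff asymmetry can be sketched in a few lines of Python; the strike price and premium below are made-up numbers for illustration only:

```python
def call_option_profit(stock_price, strike=100.0, premium=5.0):
    """Profit at expiry: upside is unbounded, downside is capped at the premium."""
    return max(stock_price - strike, 0.0) - premium

# Losses never exceed the premium, but gains grow with the stock price:
for price in (0, 50, 100, 150, 300):
    print(f"stock at {price}: profit {call_option_profit(price)}")
```

However far the stock falls, the worst case is losing the premium; the upside keeps growing with volatility, which is exactly the "more to gain than to lose" shape of antifragility.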
Time is in the Disorder Family
Disorder, according to Nassim Taleb (author of the book Antifragile: Things That Gain from Disorder), is a family of related concepts including: uncertainty, variability, imperfect or incomplete knowledge, chance, chaos, volatility, entropy, time, the unknown, randomness, turmoil, stressors, error, dispersion of outcomes, and unknowledge.
Note that time and volatility function similarly: the more time that passes, the more harmful events can occur, and therefore the more disorder. I will illustrate shortly how time is the enemy of having a lot of WiP.
Work in Process (WiP) – Batch Size
Work in process (or WiP) refers to the work that has been started but not yet finished. For example, a detailed requirements document produced very early on during a project, perhaps during an analysis phase, represents WiP. We might also call that requirements document inventory, since it is similar to the parts on the floor of a manufacturing plant that are waiting to be put into finished goods that can be sold to customers. By analogy, until a requirement is fully developed, tested, and deployed it is really no different than a physical part sitting on the floor.
The amount of WiP we have on hand can be thought of in terms of batch size. If we use a phase-based approach like waterfall, the goal is to produce all of the requirements during the analysis phase. This results in a requirements batch size of 100%, since at the end of the analysis phase we have effectively started working on every requirement (i.e., we invested in creating a document that details all of the requirements), but we have finished (i.e., designed, built, tested, and deployed) none, or 0%, of them. So, the requirements in the requirements document represent a large amount of WiP, or inventory.
Having a lot of WiP on hand makes the development process more fragile in many ways. One obvious way is that when we have a lot of inventory on hand and something changes (e.g., customers change their mind) then we may have to update many or all of the items in inventory (i.e., the items in the requirements doc). And, of course, requirements are just one type of WiP that we might have. Other examples would be planning WiP, coding WiP, testing WiP, and so on.
Limiting WiP and Having Small Batch Sizes is Antifragile
A more antifragile approach is to limit WiP by deliberately working with small batch sizes. In his book The Principles of Product Development Flow, Don Reinertsen has done an excellent job of summarizing the benefits of small batch sizes.
| Benefit | Why small batches help |
|---|---|
| Reduced cycle time | Smaller batches yield smaller amounts of work waiting to be processed, which in turn means less time waiting for the work to get done. So, we get things done faster. |
| Reduced flow variability | Think of a restaurant where small parties come and go (they flow nicely through the restaurant). Now imagine a large tour bus (a large batch) unloading and the effect that it has on the flow in the restaurant. |
| Accelerated feedback | Small batches accelerate feedback, making the consequences of a mistake smaller. |
| Reduced risk | Small batches represent less inventory that is subject to change. Smaller batches are also less likely to fail (there is a greater risk that a failure will occur with ten pieces of work than with five). |
| Reduced overhead | There is overhead in managing large batches—for example, maintaining a list of 3,000 work items requires more effort than a list of 30. |
| Increased motivation and urgency | Small batches provide focus and a sense of responsibility. It is much easier to understand the effect of delays and failures when dealing with small batches than with large ones. |
| Reduced cost and schedule growth | When we're wrong on big batches, we are wrong in a big way with respect to cost and schedule. When we do things on a small scale, we won't be wrong by much. |
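The reduced cycle time benefit can be made concrete with Little's Law, a standard queueing result (average cycle time = WiP / throughput). The numbers below are invented for illustration:

```python
def average_cycle_time(wip_items, throughput_per_week):
    """Little's Law: average cycle time = WiP / throughput."""
    return wip_items / throughput_per_week

# Same team throughput (10 items/week), different amounts of WiP:
print(average_cycle_time(100, 10))  # large batch: each item waits ~10 weeks
print(average_cycle_time(10, 10))   # small batch: each item waits ~1 week
```

The team works at the same rate in both cases; simply capping how much work is in process at once shortens how long any individual item sits unfinished.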
Earlier I mentioned that time can be viewed similarly to disorder and volatility. The more time that passes, the more "events" can transpire. Some of these events are harmful, and people often call these risks. Other events are beneficial; people tend to call these opportunities.
As the table above shows, when there is a lot of WiP, harmful events are magnified. Frequently the event has a linear cost. For example, when something changes, reviewing and updating a 3,000-item list is more or less linearly more expensive than reviewing and updating a 300-item list. However, sometimes the cost of an event might be non-linear (e.g., exponential). For example, flow disruptions caused by events affecting large batches could have tentacles that reach out and affect work on other projects, causing a domino-like cascade of consequences. In other words, disrupting the flow on a single project could significantly impact the work on many other projects.
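The difference between linear and cascading costs can be sketched with a toy model; the hours, fanout, and item counts are all hypothetical:

```python
def review_cost(items, hours_per_item=0.5):
    """Linear cost: every in-process item must be reviewed and possibly updated."""
    return items * hours_per_item

def cascade_cost(dominoes, base_hours=8, fanout=2):
    """Non-linear cost: each disrupted project disrupts `fanout` more projects."""
    return base_hours * sum(fanout ** d for d in range(dominoes))

print(review_cost(3000), review_cost(300))  # 1500.0 vs 150.0 hours: 10x list, 10x cost
print(cascade_cost(1), cascade_cost(5))     # 8 vs 248 hours: each domino multiplies the damage
```

In the linear case a ten-times-larger list costs roughly ten times as much to rework; in the cascading case each additional affected project multiplies the damage, which is why flow disruptions from large batches can be so much worse than their size alone suggests.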
Bottom line: large batches are fragile because we have more to lose than to gain in the presence of disorder. But does that imply that small batches (limited WiP) are antifragile? That is, would we actually get better in the presence of disorder with small batches, as opposed to just being less fragile or more robust? Can we actually get real benefits (upside) from limiting WiP?
Yes! Look back at the table above. First, if we reduce cycle time, we deliver sooner and therefore generate revenue or reduce costs sooner. Money today is worth more than money tomorrow, since money has a time value.
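The time value of money point follows from the standard present-value formula; the revenue figure and 10% discount rate below are hypothetical:

```python
def present_value(amount, annual_rate, years):
    """Discount a future cash flow back to its value today."""
    return amount / (1 + annual_rate) ** years

# Hypothetical: $100,000 of revenue at a 10% annual discount rate.
print(round(present_value(100_000, 0.10, years=1)))  # delivered in 1 year
print(round(present_value(100_000, 0.10, years=2)))  # delivered in 2 years
```

Under these assumptions, the same revenue is worth roughly $90,909 today if delivered a year from now but only about $82,645 if delivery slips to two years, so shorter cycle time is a direct financial upside.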
Second, if we can reduce flow variability, outcomes become more predictable. In my experience, senior management values predictability very highly.
Third, small batches allow us to get fast feedback. That feedback can prune a bad path quickly: if we are doing the wrong thing, we find out faster and can pivot to doing the right thing, leading to a better outcome. Fast feedback also helps us identify and exploit emergent opportunities, events that represent opportunities rather than risks, which we can leverage and benefit from if we find them sooner rather than later.
An additional benefit of limiting WiP is the ability to stay focused and switch tasks less frequently. This makes a team better at delivering more value faster.
When developing a product or system, we don't want to make ourselves fragile to the volatility in our environment (e.g., customers changing their minds, competitors taking unpredictable actions, etc.). Instead we would like to be antifragile—meaning we want to actually benefit, whenever possible, from the volatility in our environment. Limiting WiP and working in smaller batch sizes is one important way to make our development efforts more antifragile. One example of limiting WiP has to do with requirements: rather than generate large batches of requirements in the presence of poor information, we should instead apply a small-batch mentality of having just-enough requirements generated just in time. Limiting WiP and working in small batches not only makes our efforts less fragile, it also allows us to benefit from volatility by generating revenue sooner, becoming more predictable, pivoting in response to feedback, and delivering more value more quickly.