Friday, July 15, 2011
It is rare that new software is developed from scratch; typically existing software is expanded and modified, usually on a regular basis. As a result, most software development shops that we run into are focused on the next release, and very often releases are spaced out at regular intervals. We have discovered that a significant differentiator between development organizations is the length of that interval – the length of the software release cycle.
Organizations with release cycles of six months to a year (or more) tend to work like this: Before a release cycle begins, time is spent deciding what will be delivered in the next release. Estimates are made. Managers commit. Promises are made to customers. And then the development team is left to make good on all of those promises. As the code complete date gets closer, emergencies arise and changes have to be made, and yet, those initial promises are difficult to ignore. Pressure increases.
If all goes according to plan, about two-thirds of the way through the release cycle, code will be frozen for system integration testing (SIT) and user acceptance testing (UAT). Then the fun begins, because no one really knows what sort of unintended interactions will be exposed or how serious the consequences of those interactions will be. It goes without saying that there will be defects; the real question is, can all of the critical defects be found and fixed before the promised release date?
Releases are so time-consuming and risky that organizations tend to extend the length of their release cycle so as not to have to deal with this pain too often. Extending the release cycle invariably increases the pain, but at least the pain occurs less frequently. Counteracting the tendency to extend release cycles is the rapid pace of change in business environments that depend on software, because the longer release cycles become a constraint on business flexibility. This clash of cadences results in an intense pressure to cram as many features as possible into each release. As lengthy release cycles progress, pressure mounts to add more features, and yet the development organization is expected to meet the release date at all costs.
Into this intensely difficult environment a new idea often emerges – why not shorten the release cycle, rather than lengthen it? This seems like an excellent way to break the death spiral, but it isn’t as simple as it seems. The problem, as Kent Beck points out in his talk “Software G Forces: the Effects of Acceleration,” is that shorter release cycles demand different processes, different sales strategies, different behavior on the part of customers, and different governance systems. These kinds of changes are notoriously difficult to implement.
Quick and Dirty Value Stream Map
I’m standing in front of a large audience. I ask the question: “Who here has a release cycle longer than three months?” Many hands go up. I ask someone whose hand is up, “How long is your release cycle?” She may answer, “Six months.” “Let me guess how much time you reserve for final integration, testing, hardening, and UAT,” I say. “Maybe two months?” If she had said a year, I would have guessed four months. If she had said 18 months, I would have guessed 6 months. And my guess would be very close, every time. It seems quite acceptable to spend two-thirds of a release cycle building buggy software and the last third of the cycle finding and fixing as many of those bugs as possible.
The next question I ask is: When do you decide what features should go into the release? Invariably when the release cycle is six months or longer, the answer is: “Just before we start the cycle.” Think about a six month release cycle: For the half year prior to the start of the cycle, demand for new or changed features has been accumulating – presumably at a steady pace. So the average time a feature waits before it is even considered for development is three months – half of the six month cycle time. Thus – on the average – a feature spends three months waiting before the cycle begins, plus six months in development and test before it is released to customers; nine months in all.
Finally I ask, “About how many features might you develop during a six month release cycle?” Answers to this vary widely from one domain to another, but let’s say I am told that about 25 features are developed in a six month release, which averages out to about one feature per week.
This leaves us with a quick and dirty vision of the value stream: a feature takes a week to develop and, in the best case, nine months (38 weeks) to make it through the system. So the process efficiency is 1÷38, or about 2.6%. A lot of this low efficiency can be attributed to batching up 25 features in a single release. A lot more can be attributed to the fact that only 4 of the 9 total months are actually spent developing software – the rest of the time is spent waiting for a release cycle to start or waiting for integration testing to finish.
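The back-of-the-envelope arithmetic above is easy to check. Here is a quick sketch using only the illustrative numbers from this example (they are not measurements from any real organization):

```python
# Quick-and-dirty value stream arithmetic for the six-month-release example.
# All figures are the illustrative numbers from the text, not measurements.

touch_time_weeks = 1     # hands-on time per feature (~25 features in ~26 weeks)
lead_time_weeks = 38     # 3 months of waiting + 6 months in the cycle (~9 months)

process_efficiency = touch_time_weeks / lead_time_weeks
print(f"Process efficiency: {process_efficiency:.1%}")  # prints "2.6%"
```

Running the same arithmetic for an annual cycle (6 months average wait plus 12 months in the cycle, roughly 78 weeks) makes the efficiency even worse – which is why lengthening the cycle never fixes the underlying problem.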
Why not Quarterly Releases?
With such dismal process efficiency, let’s revisit the brilliant idea of shortening release cycles. The first problem we encounter is that at a six month cadence, integration testing generally takes about two months; if releases are annual, integration testing probably takes three or four months. This makes quarterly releases quite a challenge.
For starters, the bulk of the integration testing is going to have to be automated. However, most people rapidly discover that their code base is very difficult to test automatically, because it wasn’t designed or written to be tested automatically. If this sounds like your situation, I recommend that you read Gojko Adzic’s book “Specification by Example.” You will learn to think of automated tests as executable specifications that become living documentation. You will not be surprised to discover that automating integration tests is technically challenging, but the detailed case studies of successful teams will give you guidance on both the benefits and the pitfalls of creating a well architected integration test harness.
Once you have the beginnings of an automated integration test harness in place, you may as well start using it frequently, because its real value is to expose problems as soon as possible. But you will find that code needs to be “done” in order to be tested in this harness; otherwise you will get a lot of spurious failures. Thus all teams contributing to the release would do well to work in 2-4 week iterations and bring their code to a state that can be checked by the integration test harness at the end of every iteration. Once you can reasonably begin early, frequent integration testing, you will greatly reduce final integration time, making quarterly releases practical.
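To make the “executable specification” idea concrete, here is a sketch of what such a test can look like. The function and the pricing rule are invented for this illustration – they are not from Adzic’s book – but the shape is the point: the test states the business rule as examples, and those examples double as living documentation.

```python
# Hypothetical executable specification for a discount rule, in the
# Specification-by-Example style: the examples below state the business
# rule, and the production code must make them pass.

def discounted_price(list_price: float, quantity: int) -> float:
    """Orders of 10 or more items get a 5% discount (illustrative rule)."""
    discount = 0.05 if quantity >= 10 else 0.0
    return round(list_price * quantity * (1 - discount), 2)

# Each example doubles as living documentation of the rule.
assert discounted_price(20.00, 5) == 100.00    # below threshold: no discount
assert discounted_price(20.00, 10) == 190.00   # at threshold: 5% off
assert discounted_price(9.99, 12) == 113.89    # rounding is to cents
```

A suite of tests like this, run against the integrated system rather than a single function, is what makes early and frequent integration testing feasible.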
Be careful, however, not to move to quarterly releases without thinking through all of the implications. As Kent Beck noted in his Software G Forces talk, sales and support models at many companies are based on annual maintenance releases. If you move from an annual to a quarterly release, your support model will have to change for two reasons: 1) customers will not want to purchase a new release every quarter, and 2) you will not be able to support every single release over a long period of time. You might consider quarterly private releases with a single annual public release, or you might want to move to a subscription model for software support. In either case, you would be wise not to guarantee long term support for more than one release per year, or support will rapidly become very expensive.
From Quarterly to Monthly Releases
Organizations that have adjusted their processes and business models to deal with a quarterly release cycle begin to see the advantages of shorter release cycles. They see more stability, more predictability, less pressure, and they can be more responsive to their customers. The question then becomes, why not increase the pace and release monthly? They quickly discover that an additional level of process and business change will be necessary to achieve the faster cycle time because four weeks – twenty days – from design to deployment is not a whole lot of time.
At this cadence, as Kent Beck notes, there isn’t time for a lot of information to move back and forth between different departments; you need a development team that includes analysts and testers, developers and build specialists. This cross-functional team synchronizes via short daily meetings and visualization techniques such as cards and charts on the wall – because there simply isn’t time for paper-based communication. The team adopts processes to ensure that the code base always remains defect-free, because there isn’t time to insert defects and then remove them later. Both TDD (Test Driven Development) and SBE (Specification by Example) become essential disciplines.
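For readers unfamiliar with the TDD rhythm mentioned above, here is a minimal sketch of one red-green step; the function and test are invented purely for illustration:

```python
# One step of the TDD rhythm: write a failing test first, then just
# enough production code to make it pass. (Example invented here.)

# 1. Red: the test pins down the next small behavior before any code exists.
def test_slugify_replaces_spaces():
    assert slugify("Monthly Release Notes") == "monthly-release-notes"

# 2. Green: the simplest production code that satisfies the test.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

test_slugify_replaces_spaces()  # passes; next step: refactor, then repeat
```

The discipline matters at a monthly cadence because defects are never allowed to accumulate: each tiny step leaves the code base in a releasable state.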
From a business standpoint, monthly releases tend to work best with software-as-a-service (SaaS). First of all, pushing monthly releases to users for them to install creates huge support headaches and takes far too much time. Secondly, it is easy to instrument a service to see how useful any new feature might be, giving the development team immediate and valuable feedback.
Weekly / Daily Releases
There are many organizations that consider monthly releases a glacial pace, so they adopt weekly or even daily releases. At a weekly or daily cadence, iterations become largely irrelevant, as does estimating and commitment. Instead, a flow approach is used; features flow from design to done without pause, and at the end of the day or week, everything that is ready to be deployed is pushed to production. This rapid deployment is supported by a great deal of automation and requires a great deal of discipline, and it is usually limited to internal or SaaS environments.
There are a lot of companies doing daily releases; for example, one of our customers with a very large web-based business has been doing daily releases for five years. The developers at this company don’t really relate to the concept of iterations. They work on something, push it to systems test, and if it passes it is deployed at the end of the day. Features that are not complete are hidden from view until a keystone is put in place to expose the feature, but code is deployed daily, as it is written. Occasionally a roll-back is necessary, but this is becoming increasingly rare as the test suites improve. Managers at the company cannot imagine working at a slower cadence; they believe that daily deployment increases predictability, stability, and responsiveness – all at the same time.
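The “keystone” technique can be sketched as a simple feature flag – the flag name and the navigation function here are made up for illustration, not taken from the customer described above:

```python
# Sketch of the "keystone" idea: incomplete code ships daily, but the
# entry point that exposes it to users is only enabled once the feature
# is complete. Flag and feature names are invented for illustration.

FLAGS = {"new-checkout": False}  # flipped to True when the feature is done

def render_navigation() -> list[str]:
    items = ["Home", "Catalog", "Cart"]
    if FLAGS["new-checkout"]:      # the keystone: hides unfinished work
        items.append("Express Checkout")
    return items

assert render_navigation() == ["Home", "Catalog", "Cart"]
FLAGS["new-checkout"] = True       # placing the keystone exposes the feature
assert "Express Checkout" in render_navigation()
```

The supporting code for the new feature can be deployed for weeks before the flag is flipped; the final, small change that exposes it is the keystone.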
Jez Humble and David Farley wrote “Continuous Delivery” to share with the software development community techniques they have developed to push code to the production environment as soon as it is developed and tested. But continuous delivery is not just a matter of automation. As noted above, sales models, pricing, organizational structure and the governance system all merit thoughtful consideration.
Every step of your software delivery process should operate at the same cadence. For example, with continuous delivery, portfolio management becomes a thing of the past; instead people make frequent decisions about what should be done next. Continuous design is necessary to keep pace with the downstream development, validation and verification flow. And finally, measurements of the “success” of software development have to be based on delivered value and improved business performance, because there is nothing else left to measure.