Friday, February 20, 2004

Incremental Funding (Book Review)

The customer wanted an on-line currency exchange capability added to their on-line financial service offerings. They figured it would take several months to implement. But the development team suggested a different approach: start by hiring a dozen telephone operators and implement just the software these operators needed to execute currency trades. The company gave it a try, and in six weeks the first iteration was ready. With no more than an 800 number on their web site and a rudimentary interface to the currency market, new business was being transacted and profits were being made.

Over the course of the next several months, the on-line trading capability was implemented around the core module originally used by the telephone operators. Not only did the company see early revenue, but the risk of failure disappeared once trading started. In addition, system requirements were defined by observing real trades.

In the book Software by Numbers: Low-Risk, High-Return Development (Prentice Hall, 2004), Mark Denne and Jane Cleland-Huang make the case for incremental delivery of software. Mark Denne developed the Incremental Funding Methodology (IFM) in the 1990s to help land a large software development contract. In an attempt to distinguish his bid from the pack, he reorganized the deliveries into units of value and adjusted the development sequence so that the customer would realize revenue faster and, in the end, receive a greater return on their investment. The customer discovered that this approach dramatically reduced their need to borrow money and gave them an earlier product release with lowered risk. In what seemed like a hotly competitive bidding process, Mark’s company won the bid by emphasizing “time to value” instead of development efficiency.

Software by Numbers recommends dividing a project into Minimum Marketable Features (MMFs). These are small feature sets which deliver some identifiable value to the customer. Each MMF should have its own return on investment (ROI). By laying out the potential ROIs of various feature sets, an optimal development sequence for the MMFs can be determined, as the sketch below illustrates. Early deployment of key MMFs reduces risk while generating revenue to help fund the remainder of the project.
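
The sequencing exercise can be made concrete with a simple net-present-value comparison. Here is a minimal sketch in Python; the MMF cash flows, build times, and discount rate are hypothetical, not taken from the book:

    # Compare the NPV of two candidate delivery sequences of the same MMFs.
    # Each MMF costs money per period while being built and earns revenue
    # per period once deployed; all figures are hypothetical.
    def sequence_npv(mmfs, periods=12, rate=0.01):
        """mmfs: list of (build_periods, cost_per_period, revenue_per_period),
        developed one after another in the given order."""
        npv, start = 0.0, 0
        for build, cost, revenue in mmfs:
            for t in range(start, start + build):
                npv -= cost / (1 + rate) ** t      # paying to build
            for t in range(start + build, periods):
                npv += revenue / (1 + rate) ** t   # earning once deployed
            start += build
        return npv

    phone_first = [(2, 100, 80), (4, 100, 120)]  # small MMF first, then web
    web_first = [(4, 100, 120), (2, 100, 80)]
    print(f"phone-first NPV: {sequence_npv(phone_first):.0f}")
    print(f"web-first NPV:   {sequence_npv(web_first):.0f}")

Under these assumed numbers, delivering the small telephone-operator MMF first wins, because it starts earning revenue while the larger web MMF is still being built.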

When business people and software developers focus on identifying and valuing marketable features, their conversation is changed. Developers are exposed to ROI and stakeholders are confronted with the realities of software development. The entire team is focused on achieving business ROI early in the development cycle. Management sees continuously measurable progress, and the team benefits from the early and compelling feedback generated by real software being used in production.

With so many benefits, why wouldn’t IFM be the preferred approach for developing software? Denne and Cleland-Huang note that many practitioners view software architecture as a monolithic whole, requiring early definition because of the extensive impact that architectural changes can have on a system. On the other hand, they argue, it is not until the details of an architecture are implemented that one can tell whether the architecture is viable. Thus architecture presents us with a chicken-and-egg problem.

The book recommends that the architectural elements which support each set of MMFs be developed along with their respective MMFs. In other words, architectural development should be sequenced using the same financially driven priorities as feature development. While there may be times when architectural coherency dictates early development of elements not immediately related to the current feature set, in general it is not only possible but also preferable to defer implementation of architectural elements until the features requiring them are being developed.

When we recognize that architecture not only can evolve but, in any deployed system, inevitably will evolve over time, the view that architecture must be fixed early in the development process becomes a liability. That view creates architectures that are not tolerant of the change they must inevitably undergo. Once we accept that software architectures must be designed to be change-tolerant, the barrier to early deployment of high-value features is lowered.

Returning to the currency trading example, we see that early deployment of marketable features is a compelling strategy for increasing return and reducing risk. At the same time the stage is set for a better understanding of the real system requirements and improved collaboration between developers and their customers.

Reference
Denne, Mark, and Jane Cleland-Huang. Software by Numbers: Low-Risk, High-Return Development. Prentice Hall, 2004.


Sunday, February 1, 2004

Toward A New Definition of Maturity

In the mid-90s, a company named Zeos assembled PCs in my home state of Minnesota. I was impressed when Zeos was named a finalist for the Malcolm Baldrige Award, because competing for this quality award is more or less the equivalent of a software company trying to reach CMM Level 5. At the same time that Zeos was focusing on the Malcolm Baldrige Award, a similar company in Austin, Texas, called Dell Computer, was focusing all of its energy on two rather different objectives: 1) keep inventory as low as possible, because it is the biggest risk in the PC assembly business, and 2) deliver computers as fast as possible after customers decide what they want. Dell could not possibly have met these goals unless it had the key Malcolm Baldrige capabilities in place, but that was not its focus. On the other hand, in order to be a finalist, Zeos had to spend huge amounts of executive and management attention on Malcolm Baldrige activities. So which of these two companies would you call the more mature?

A company does not rise to the top of its industry by perfecting its normative procedures. While General Motors was busy refining its four-phase process for product development, Toyota and Honda were developing cars in something approaching half the time, for half the cost. The resulting cars were less expensive to produce and captured a much larger market share. Yet when looked at through Detroit’s eyes, the development approach of the Japanese companies was decidedly immature – why, they even let die makers start cutting dies long before the final drawings were released!

The problem with maturity models is that they foster a mental model of ‘good practice’ that can block out paradigm shifts that are destined to reshape the industry. This has happened in manufacturing. It has happened in product development. Can it be happening in software development today?

Why Do We Do This To Ourselves?
We human beings have a tendency to decompose complex problems into smaller pieces and then focus on the individual pieces, often unwittingly at the expense of the whole. One reason for this is that people can hold only a few chunks of information in short-term memory at once; Miller’s Law puts that number at seven, plus or minus two. Thus we have a strong tendency to take concepts which we cannot get our minds around and decompose them into parts, so we can deal with one part at a time. There is nothing wrong with this very natural tendency to decompose problems, except that we often forget to recompose the parts into a whole and take their interactions into account.

The problem is, decomposition only works if the whole is indeed equal to the sum of its parts, and interactions between the parts can be ignored. In practice, this is rarely the case; in fact, optimization of the parts has a tendency to sub-optimize the whole. For example, a manager might think that the best way to run a testing department is to make sure that every single person is working all of the time. In order to guarantee maximum productivity of each individual, he makes sure there is always a pile of testing waiting to be done. Down the hall, the operations manager knows better. She knows that if she runs the servers at high utilization, the entire system bogs down, just like rush hour traffic. If only the testing manager would realize that his policy of full utilization of testing resources creates the same kind of traffic jam in software development!
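
The operations manager’s intuition comes straight from queueing theory: as utilization approaches 100%, waiting time grows without bound. A minimal sketch in Python, assuming a simple single-server M/M/1 queue (a deliberate simplification of both servers and testing departments), makes the point:

    # In an M/M/1 queue, the expected time a job waits in line, expressed
    # as a multiple of its service time, is utilization / (1 - utilization).
    def wait_multiplier(utilization):
        assert 0.0 <= utilization < 1.0
        return utilization / (1.0 - utilization)

    for u in (0.50, 0.80, 0.90, 0.95, 0.99):
        print(f"{u:.0%} busy -> work waits {wait_multiplier(u):.0f}x its service time")

Going from 50% to 95% utilization does not double the delay; under these assumptions it multiplies it by nearly twenty.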

Decomposition of a problem into pieces is a standard problem-solving heuristic, but it needs to be accompanied by regular aggregation of the parts into a whole. It is here that iterative development shines, because it forces us to develop, test, integrate, and release complete threads of the system early and often. Thus we reset our view of the big picture every iteration. However, iterative development requires that we decompose the problem space along dimensions that are orthogonal to those traditionally recommended. Instead of decomposing the problem into requirements, analysis, design, programming, testing, integration, and deployment, we decompose it into features along the natural fault lines of the domain.

Decomposition vs. Abstraction
There is an alternative to decomposing a problem into components: abstract the problem to a higher level rather than decompose it to a lower level. The result is the same in both cases – you have reduced the number of things you need to think about to 7 +/- 2. But when you move the problem up to a higher level of abstraction, the interactions of the parts are maintained, so abstraction would seem to be the better approach to solving complex problems.

There is a catch: people can’t create abstractions without understanding the domain, because correct abstractions require that the abstractor know what is important, what fits together naturally, and where to find the natural joints or fault lines of the domain. For this reason, experts in a domain tend to abstract to a higher level, while those unfamiliar with the domain tend to decompose the problem – and that decomposition will probably not fall along the domain’s natural fault lines.

It is when we decompose a problem in a manner which does not fit the domain, and then drill down to the details too fast, that important things get missed. We would like to think that a lot of early detail will help us find all the hidden gremlins that might bite us later. This is only true if those gremlins are actually inside the areas we investigate – but the tough problems usually lurk between the cracks in our thinking. Unfortunately we only discover those problems when we integrate the pieces together at the end, and by that time we have invested so much in the details that change is very difficult.

It is safer to work from a higher level of abstraction, but this means we don’t drill down to detail right away. We take a breadth-first approach, keep options open, and gradually fill in the details. With this approach we are more likely to find those gremlins while we can still do something about them, but some people might think that we don’t have a plan, aren’t tracking the project, and can’t manage requirements.

Assessment Rather than Certification
Quite frequently CMM is implemented as a certification process, decomposing ‘maturity’ into separately verifiable capabilities. The danger is that we may lose sight of the forest for the trees. For example, focusing on requirements management can be detrimental to giving users what they really want. Focusing on quality assurance separate from development can destroy the integrating effect of developers and testers working side by side every day. Focusing on planning as a predictive process can keep us from using planning as an organizing process.

A better approach to discovering the underlying competence of an organization is to use an assessment process rather than a certification process. Assessments present challenging situations that cannot be dealt with unless a host of capabilities are in place; if the challenge is properly handled, the presence of these capabilities can be inferred. For example, a pilot’s ability to fly a plane can be assessed by observing how the pilot lands a plane in a stiff cross wind.

Consider your hiring process. When a candidate lists Microsoft or PMI certifications, you take that into account, but you are really looking for a track record of success demonstrating that the certifications were put to good use. One software company I know administers a one-hour logic test to job applicants, a test without a line of code in it. Yet the test does an admirable job of assessing a candidate’s capability to think like a good developer.

Measure UP
If we accept that we get what we measure then, as Rob Austin points out in “Measuring and Managing Performance in Organizations,” we do not get what we don’t measure. If something important just didn’t make it into our measurement system, then we are not going to get it. To counter this problem, we have a tendency to pile one measurement on top of another every time we discover that we forgot to measure something. In CMM, for example, each KPA addresses something that caused the failure of some software project somewhere. As people discovered new ways for software projects to fail, KPAs were added to address the new failure modes. Nevertheless, all of those KPAs still don’t cover everything that could go wrong, although they certainly try.

Once we admit that we just can’t measure everything, we are ready to move to assessment-style measurements rather than certification-style measurements. For example, in project management we decompose the measurement system into cost, schedule, scope, and defects, and we try hard to make these measurements work because we think they are the measurements we should use. But no matter how hard we try, we are often unsuccessful in measuring true business value by totaling up the yardsticks of cost, schedule, scope, and defects.

What if we just measured business value instead of cost, schedule, scope, and defects? When I was developing new products at 3M, we did not pay much attention to cost, schedule, and scope. Instead we developed a P&L, which we used to check the impact of a late introduction date or a lower unit cost. The development team tried to optimize the overall P&L, not any one dimension. It may seem strange that a project team would concern itself with the financial model of the business, but I assure you it is a far better decision-support tool than cost, schedule, and scope.
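
A minimal sketch of such a model, in Python, shows the idea; every number and line item here is hypothetical, not taken from any actual 3M project:

    # Hypothetical lifetime P&L: which hurts more, shipping six months
    # late or shipping on time with a higher unit cost?
    def lifetime_profit(months_late=0, unit_cost=50.0):
        price = 80.0          # selling price per unit (assumed)
        monthly_units = 1000  # steady-state demand (assumed)
        market_window = 48    # months until the product is obsolete (assumed)
        selling_months = max(market_window - months_late, 0)
        return selling_months * monthly_units * (price - unit_cost)

    print(f"on time:          ${lifetime_profit():,.0f}")
    print(f"six months late:  ${lifetime_profit(months_late=6):,.0f}")
    print(f"higher unit cost: ${lifetime_profit(unit_cost=55.0):,.0f}")

With these assumptions, a $5 increase in unit cost destroys more lifetime profit than a six-month slip – exactly the kind of trade-off that tracking cost, schedule, and scope separately cannot reveal.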

The Measure of Maturity
I believe that our industry could use a simpler measurement of software development maturity, one that is not subject to the dangers of decomposition and does not attempt to defy Miller’s Law with a long list of things to think about. I propose that we use a measurement that has been used successfully in countless organizations and has proven to be a true indicator of the presence of the kinds of capabilities measured by CMM. To introduce this measurement, let’s go back to Dell and see what it measured:
  1. The level of inventory throughout the entire system, and
  2. The speed with which the organization can repeatedly and reliably respond to a customer request.
Okay, you say, fine for manufacturing, but software development is different.

True, I counter, but these measurements still work.

The level of inventory in your system is the amount of stuff you have under development. The more unfinished development work you have sitting around, the more you are at risk of it growing obsolete, getting lost, or hiding defects. If you capitalize it, you also bear the risk of having to write it off if it doesn’t work. The less of that kind of stuff you have on hand, the better off you will be.

The speed with which you can respond to a customer is inversely proportional to the amount of unfinished development work clogging up your system. In truth, the two measurements above are one and the same – you can deliver faster if your system is not clogged with unfinished work, as the sketch below shows. But more to the point, you cannot reliably and repeatedly deliver fast if you do not have a mature organization. You need version control, built-in quality, and ways to gather requirements fast and routinely translate them correctly into code. You need everything you measure in CMM, and you need all of those capabilities working together as an integrated whole. In short, the speed with which you can repeatedly and reliably deliver on customers’ requests is a better measure of the maturity of your software development organization.
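
The connection between the two measurements is Little’s Law from queueing theory: average delivery time equals work in process divided by throughput. A minimal sketch in Python, with made-up numbers:

    # Little's Law: cycle_time = WIP / throughput.
    # At a fixed completion rate, delivery time is determined entirely by
    # how much unfinished work is already in the system.
    def months_to_deliver(wip_features, features_per_month):
        return wip_features / features_per_month

    throughput = 10  # features completed per month (assumed)
    for wip in (5, 30, 120):
        print(f"{wip:3d} features in process -> "
              f"{months_to_deliver(wip, throughput):4.1f} months to respond")

Cut the unfinished work in the system and, with no change in the team’s capacity, requests come out the other end sooner.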

