
November 5, 2017

The Cost Center Trap

In the 1960s, IT was largely an in-house back-office function focused on process automation and cost reduction. Today, IT plays a significant strategic and revenue role in most companies, and is deeply integrated with business functions. By 2010, over 50% of firms’ capital spending was going to IT, up from 10-15% in the 1960s.[1] But one thing hasn't changed since the 1960s: IT has always been considered a cost center. You are probably thinking "Why does this matter?" Trust me, cost center accounting can be a big trap.

Back in the mid-1980s, Just-in-Time (JIT) was gaining traction in manufacturing companies. JIT always drove inventories down sharply, giving companies a much faster response time when demand changed. However, accounting systems count inventory as an asset, so any significant reduction in inventory had a negative impact on the balance sheet. Balance sheet metrics made their way into senior management metrics, so successful JIT efforts tended to make senior managers look bad. Often senior management metrics made their way down into the metrics of manufacturing organizations, and when they did, efforts to reduce inventory were half-hearted at best. A generation of accountants had to retire before serious inventory reduction was widely accepted as a good thing.[2]

Returning to the present, being a cost center means that IT performance is judged – from an accounting perspective – solely on cost management. Frequently these accounting metrics make their way into the performance metrics of senior managers, while contributions to business performance tend to be deemphasized or absent. As the metrics of senior managers make their way down through the organization, a culture of cost control develops, with scant attention paid to improving overall business performance. Help in delivering business results is appreciated, of course, but rarely is it rewarded, and rarer still is the cost center that voluntarily accepts responsibility for business results.

Now let’s add an Agile transformation to this cost center culture. Let’s assume that the transformation is supposed to bring benefits such as faster time to market, more relevant products, and better customer experiences. And let’s assume that the cost center metrics do not change, or if they do change, process metrics such as number of agile teams and speed of deployment are added. I’ll wager that very few of those agile teams are likely to focus on improving overall business performance. The incentives send a clear message: business performance is not the responsibility of a cost center.

Being in a cost center can be demoralizing. You aren’t on the A team that brings in revenue, you’re on the B team that consumes resources. No matter how well the business performs, you’ll never get credit. Your budget is unlikely to increase when times are good, but when times are tight, it will be the first to be cut. Should you have a good idea, it had better not cost anything, because you can’t spend money to make money. If you think that a bigger monitor would make you more efficient, good luck making your case. Yet if your colleagues in trading suggest larger monitors will help them generate more revenue, the big screens will show up in a flash.[3]

Let’s face it, unless there are mitigating circumstances, IT departments that started out as cost centers are going to remain cost centers even when the company attempts a digital transformation. What kind of mitigating circumstances might help IT escape the cost center trap?
  1. There is serious competition from startups.
    Startups develop their software in profit centers; they haven’t learned about cost centers yet. And in a competitive battle, a profit center will beat a cost center every time.
  2. IT is recognized as a strategic business driver.
    You would think that a digital transformation would be undertaken only after a company has come to realize the strategic value of digital technology, but this is not the case. IT has been treated as if it were an outside contractor for so long that it is difficult for company leaders to think of IT as a strategic business driver, integral to the company's success going forward.
  3. A serious IT failure has had a huge impact on business results.
    When it becomes clear exactly how dependent a profit center is on a so-called cost center, people in the profit center are often motivated to share their pain with IT. Smart IT departments will use this opportunity to share the gain also.
Many people in the Agile movement preach that teams should have responsibility for the outcomes they produce and the impact of those outcomes. But responsibility starts at the top and is passed down to teams. When IT is managed as a cost center with cost objectives passed down through the hierarchy, it is almost impossible for team members from IT to assume responsibility for the business outcomes of their work. When IT metrics focus on cost control, digital transformations tend to stall.

Every ‘full stack team’ working on a digital problem should have ‘full stack responsibility’ for results, and that responsibility should percolate up to the highest managers of every person on the team.  Business results, not cost, should receive the focused attention of every member of the team, and every incentive that matters should be aimed at reinforcing this focus.

The Capitalization Dilemma

Let’s return to the surprising assertion that in 2010, over 50% of firms’ capital spending was going to IT.[1] One has to wonder what was being capitalized. Yes, there were plenty of big data centers that were no doubt capitalized, since the movement to the cloud was just beginning. But in addition to that, a whole lot of spending on software development was also being capitalized. And herein lie the seeds of another undue influence of accounting policies over IT practices.

Software development projects are normally capitalized until they are “done” – that is they reach "final operating capability" and are turned over to production and maintenance.[1] But when an organization adopts continuous delivery practices, the concept of final operating capability – not to mention maintenance – disappears. This creates a big dilemma because it's no longer clear when, or even if, software development should be capitalized. Moving expenditures from capitalized to expensed not only changes whose budget the money comes from, it can have tax consequences as well. And what happens when all that capitalized software (which, by the way, is an asset) vanishes? Just as in the days when JIT was young, continuous delivery has introduced a paradigm shift that messes up the balance sheet.
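As a back-of-the-envelope illustration of why this shift is so contentious (the dollar figure and five-year useful life below are my own assumptions, not from the article), compare the year-one P&L impact of expensing development spend against capitalizing it with straight-line depreciation:

```python
# Toy comparison: expensing vs. capitalizing development spend.
# Illustrative numbers only; real capitalization rules are far more involved.

def expensed_hit(spend):
    """Entire spend hits the P&L in year one."""
    return spend

def capitalized_hits(spend, useful_life_years):
    """Spend becomes an asset, depreciated straight-line over its useful life."""
    annual = spend / useful_life_years
    return [annual] * useful_life_years

spend = 1_200_000
print(expensed_hit(spend))         # full 1,200,000 expensed in year one
print(capitalized_hits(spend, 5))  # only 240,000 per year hits the P&L
```

The total cost is identical either way; only its timing on the income statement changes, which is exactly why moving from capitalized to expensed ripples through budgets, taxes, and the balance sheet.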

But the balance sheet problem is not the only issue; depreciation of capitalized software can wreak havoc as well. In manufacturing, the depreciation of a piece of process equipment is charged against the unit cost of products made on that equipment. The more products that are made on the equipment, the less cost each product has to bear. So there is strong incentive to keep machines running, flooding the plant with inventory that is not currently needed. In a similar manner, the depreciation of software makes it almost impossible to ignore its sunk cost, which often drives sub-optimal usage, maintenance and replacement decisions.

Capitalization of development creates a hidden bias toward large projects over incremental delivery, making it difficult to look favorably upon agile practices. Hopefully we don't have to wait for another generation of accountants to retire before delivering software rapidly, in small increments, is considered a good thing.

To summarize, the cost center trap and the capitalization dilemma both create a chain reaction:
  1. Accounting drives metrics.
  2. Metrics drive culture.
  3. Culture eats process for lunch.
The best way to avoid this is to break the chain at the top – in step 1. Stop letting accounting drive metrics. Alternatively, if accounting metrics persist at the senior management level, then break the chain at step 2 – do not pass accounting metrics down the reporting chain; do not let them drive culture. When teams focus on improving the performance of the overall business, accounting metrics should move in the right direction on their own; if they don't then clearly something is wrong with the accounting metrics.

Beware of Proxies

This year Jeff Bezos's annual letter to Amazon shareholders[4] listed four essentials that help big companies preserve the vitality of a startup: customer obsession, a skeptical view of proxies, the eager adoption of external trends, and high-velocity decision making. These seem pretty clear, except maybe the second one: a skeptical view of proxies. Just what are proxies? Bezos explains:
“A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp.”
“Another example: market research and customer surveys can become proxies for customers – something that’s especially dangerous when you’re inventing and designing products.”
Here are some common proxies we find in software development:
  1. Accounting metrics are proxies, and not very good ones at that, because they encourage local sub-optimization.
  2. Project metrics – cost, schedule, and scope – are proxies. Worse, these proxies are rarely validated against actual outcomes.
  3. “The Business” is a proxy for customers. Generally speaking, so is the product owner.
Proxies should be resisted, Bezos argues, if you want a vibrant startup culture in your company. But without proxies, how do you manage the dynamic and increasingly important IT organization? You make a habit of measuring what really matters - skip the proxies and focus on outcomes and impact.

In his excellent book, “A Seat at the Table,”[5] Mark Schwartz proposes that IT governance and oversight should begin with strategic business objectives and produce investment themes that accomplish these objectives. IT leaders fund teams to produce desirable outcomes that will have impact on the strategic objectives. Note that these outcomes are not proxies; they are real, measurable progress toward the strategic objective. Regular reviews of teams’ progress -- quantified by these measurable outcomes -- provide leaders with insight, flexibility and an appropriate level of control. At the same time, detailed decisions are made by the people closest to customers after careful investigation, experimentation and learning.

Schwartz concludes: "this approach can focus IT planning, reduce risk, eliminate waste, and provide a supportive environment for teams engaged in creating value."[5] What's not to like?

______________________
Footnotes:

[1] From “What is Digital Intelligence” by Sunil Mithas and F. Warren McFarlan, IEEE Computing Edge, November 2017. Pg.9.

[2] The 1962 book “The Structure of Scientific Revolutions” by Thomas Kuhn discussed how significant paradigm shifts in science do not take hold until a generation of scientists brought up with the old paradigm finally retire.

[3] Thanks to Nick Larsen. Does Your Employer See Software Development as a Cost Center or a Profit Center?

[4] Jeff Bezos - Letter to Shareholders - April 12, 2017

[5] "A Seat at the Table" by Mark Schwartz

August 10, 2004

Team Compensation

The New Venture team had done an incredible job, and they knew it. Increment by increment they had built a new software product, and when the deadline came, everything that had to be operational was working flawlessly. The division vice president thanked everyone who had contributed to the effort at an afternoon celebration, and the team members congratulated each other as they relived some of the more harrowing moments of the last six months.

The next day the team’s Scrum Master was catching up on long-ignored e-mail when Dave, the development manager, called. “Say Sue,” he said, “Great job your team did! I’ve been waiting for the product launch before I bothered you with this, but the appraisal deadline is next week. I need your evaluation of each team member. And if you could, I’d like you to rank the team, you know, who contributed the most down to who contributed the least.”

Sue almost heard the air escaping as her world deflated. “I can’t do that,” she said. “Everyone pitched in 100%. We could not have done it otherwise.” “But Sue,” Dave said, “certainly there must have been an MVP. And a runner-up. And so on.” “No, not really,” Sue replied. “But what I can do is evaluate everyone’s contribution to the effort.”

Sue filled out an appraisal input form for each team member. She rated everyone’s performance, but found that she had to check the ‘far exceeded expectations’ box for each team member. After all, getting out the product on time was a spectacular feat, one that far exceeded everyone’s expectations.

Two days later Sue got a call from Janice in human resources. “Sue,” she said, “Great job your team did! And thanks for filling out all of those appraisal input forms. But really, you can’t give everyone a top rating. Your average rating should be ‘meets expectations’. You can only have one or two people who ‘far exceeded expectations’. Oh and by the way, since you didn’t rank the team members, would you please plan on coming to our ranking meeting next week? We are going to need your input on that. After all, at this company we pay for performance, and we need to evaluate everyone carefully so that our fairness cannot be questioned.”

Sue felt like a flat tire. In the past, when she had a particularly difficult problem she always consulted the team, and she decided to consult them once again. At 10:00 the next morning, the entire team listened as Sue explained her problem. They had always come up with creative solutions to the problems she presented before, and she could only hope they would be able to do it again. She thought she might convince them to elect an MVP or two, to help her put some variation into the evaluations.

Sue suspected that Dave and Janice might not approve of her approach, but she didn’t realize that when the team members heard her dilemma, they would deflate just as quickly as she had. The best they could do was insist that everyone had given 200% effort, they had all helped each other, and they had thought that every single person had done a truly outstanding job. They were not interested in electing an MVP, but they were willing to choose an LVP. It would be the unnamed manager who was asking Sue to decide amongst them.

Now Sue really had a problem. She had no idea how to respond to Dave and Janice, and the New Venture team had turned angry and suspicious. Tomorrow they would have to start working together on the next release. How could something that was supposed to boost performance do such a thorough job of crushing the team’s spirit?

Deming’s View
Sue is not the only one who had trouble with merit pay evaluation and ranking systems. One of the greatest thought leaders of the 20th century, W. Edwards Deming, wrote that unmeasurable damage is created by ranking people, merit systems, and incentive pay. Deming believed that every business is a system and the performance of individuals is largely the result of the way the system operates. In his view, the system causes 80% of the problems in a business, and the system is management’s responsibility. He wrote that using exhortations and incentives to get individuals to solve management problems simply doesn’t work. Deming opposed ranking because it destroys pride in workmanship, and merit raises because they address the symptoms, rather than the causes, of problems.

It’s a bit difficult to take Deming at face value on this; after all, companies have been using merit pay systems for decades, and their use is increasing. Moreover Deming was mainly involved in manufacturing, so possibly his thinking does not apply directly to knowledge work like software development. Still, someone as wise as Deming is not to be ignored, so let’s take a deeper look into employee evaluation and reward systems, and explore what causes them to become dysfunctional.

Dysfunction #1: Competition
As the New Venture team instinctively realized, evaluation systems which rank people for purposes of merit raises pit individual employees against each other and strongly discourage collaboration. Even when the rankings are not made public, the fact that they happen does not remain a secret. Sometimes ranking systems are used as a basis for dismissing the lowest performers, making them even more threatening. When team members are in competition with each other for their livelihood, teamwork quickly evaporates.

Competition between teams, rather than individuals, may seem like a good idea, but it can be equally damaging. Once I worked in a division in which there were two separate teams developing software products that were targeting similar markets. The members of the team which attracted the largest market share were likely to have more secure jobs and enhanced career opportunities. So each team expanded the capability of its product to attract a broader market. The teams ended up competing fiercely with each other for the same customer base as well as for division resources. In the end, both products failed. A single product would have had a much better chance at success.

Dysfunction #2: The Perception of Unfairness
There is no greater de-motivator than a reward system which is perceived to be unfair. It doesn’t matter if the system is fair or not; if there is a perception of unfairness, then those who think that they have been treated unfairly will rapidly lose their motivation.

People perceive unfairness when they miss out on rewards they think they should have shared. What if the vice president had given Sue a big reward? Even if Sue had acknowledged the hard work of the team, they would probably have felt that she was profiting at their expense. You can be sure that Sue would have had a difficult time generating enthusiasm for work on the next release, even if the evaluation issues had not surfaced.

Here’s another scenario: What would have happened if the New Venture team had been asked out to dinner with the VP and each member given a good-sized bonus? The next day the operations people who worked late nights and weekends to help get the product out on time would have found out and felt cheated. The developers who took over maintenance tasks so their colleagues could work full time on the product would have also felt slighted. Other teams might have felt that they could have been equally successful, except that they got assigned to the wrong product.

Dysfunction #3: The Perception of Impossibility
The New Venture Team met their deadline by following the Scrum practice of releasing a high quality product containing only the highest priority functionality. But let’s try a different scenario: Let’s assume that the team was given a non-negotiable list of features that had to be done by a non-negotiable deadline, and let’s further speculate that the team was 100% positive that the deadline was impossible. (Remember this is hypothetical; surely this would never happen in real life.) Finally, let’s pretend that the team was promised a big bonus if they met the deadline.

There are two things that could happen in this scenario. Financial incentives are powerful motivators, so there is a chance that the team might have found a way to do the impossible. However, the more likely case is that the promise of a bonus that was impossible to achieve would make the team cynical, and the team would be even less motivated to meet the deadline than before the incentive was offered. When people find management exhorting them to do what is clearly impossible rather than helping to make the task possible, they are likely to be insulted by the offer of a reward and give up without half trying.

Dysfunction #4: Sub-Optimization
I recently heard of a business owner who offered testers $5 for every defect they could find in a product about to go into beta release. She thought this would encourage the testers to work harder, but the result was quite different. The good working relationship between developers and testers deteriorated as testers lost their incentive to help developers quickly find and fix defects before they propagated into multiple problems. After all, the more problems the testers found, the more money they made.

When we optimize a part of a chain, we invariably sub-optimize overall performance. One of the most obvious examples of sub-optimization is the separation of software development from support and maintenance. If developers are rewarded for meeting a schedule even if they deliver brittle code without automated test suites or an installation process, then support and maintenance of the system will cost far more than was saved during development.

Dysfunction #5: Destroying Intrinsic Motivation
There are two approaches to giving children allowances. Theory A says that children should earn their allowances, so money is exchanged for work. Theory B says that children should contribute to the household without being paid, so allowances are not considered exchange for work. I know one father who was raised with Theory B, but switched to Theory A for his children. He put a price on each job and paid the children weekly for the jobs they had done. This worked for a while, but then the kids discovered that they could choose amongst the jobs and avoid doing the ones they disliked. When the children were old enough to earn a paycheck, they stopped doing household chores altogether, and the father found himself mowing the lawn alongside the teenage children of his neighbors. Were he to do it again, this father says he would not tie allowance to work.

In the same way, once employees get used to receiving financial rewards for meeting goals, they begin to work for the rewards, not the intrinsic motivation that comes from doing a good job and helping their company be successful. Many studies have shown that extrinsic rewards like grades and pay will, over time, destroy the intrinsic reward that comes from the work itself.

One Week Later
Sue was nervous as she entered the room for the ranking meeting. She had talked over her problem with Wayne, her boss, and although he didn’t have any easy solutions, he suggested that she present her problem to the management team. Shortly after the meeting started, Janice asked Sue how she would rank her team members. Sue took a deep breath, got a smile of encouragement from Wayne, and explained how the whole idea of ranking made no sense for a team effort. She explained how she had asked for advice from the team and ended up with an angry and suspicious team.

“You should never have talked to the team about this,” said Janice. “Hold on a minute,” Wayne jumped in. “I thought our goal in this company is to be fair. How can we keep our evaluation policies secret and expect people to consider them fair? It doesn’t matter if we think they are fair, it matters if employees think they are fair. If we think we can keep what we are doing a secret, we’re kidding ourselves. We need to be transparent about how we operate; we can’t make decisions behind closed doors and then try to tell people ‘don’t worry, we’re being fair.’”

Sue was amazed at how fast the nature of the discussion changed after Wayne jumped to her defense. Apparently she wasn’t the only one who thought this ranking business was a bad idea. Everyone agreed that the New Venture team had done an excellent job, and the new product was key to their business. No one had thought that it could be done, and indeed the team as a whole had far exceeded everyone’s expectations. It became apparent that there wasn’t a person in the room who was willing to sort out who had contributed more or less to the effort, so Sue’s top evaluation for every team member was accepted. More importantly, the group was concerned that a de-motivated New Venture team was a serious problem. Eventually the vice president agreed to go to the next meeting of the New Venture team and discuss the company’s evaluation policies. Sue was sure that this would go a long way to revitalize the team spirit.

Now the management team had a problem of its own. They knew that they had to live within a merit pay system, but they suspected they needed to rethink the way it was implemented. Since changes like that don’t happen overnight, they formed a committee to look into various evaluation and pay systems.
The committee started by agreeing that evaluation systems should not be used to surprise employees with unexpected feedback about their performance. Performance feedback loops must be far shorter than an annual, or even a quarterly, evaluation cycle. Appraisals are good times to review and update development plans for an employee, but if this is the only time an employee finds out how they are doing, a lot more needs fixing than the appraisal system.

With this disclaimer in mind, the committee developed some guidelines for dealing with various forms of differential pay systems.

Guideline #1: Make Sure The Promotion System Is Unassailable
In most organizations, significant salary gains come from promotions which move people to a higher salary grade, not merit increases. Where promotions are not available, as is the case for many teachers, merit pay systems have a tendency to become contentious, because merit increases are the only way to make more money. When promotions are available, employees tend to ignore the merit pay system and focus on the promotion system. Of course this system of promotions tends to encourage people to move into management as they run out of promotional opportunities in technical areas. Companies address this problem with ‘dual ladders’ that offer management-level pay scales to technical gurus.

The foundation of any promotion system is a series of job grades, each with a salary range in line with industry standards and regional averages. People must be placed correctly in a grade so that their skills and responsibilities match the job requirements of their level. Initial placements and promotion decisions should be carefully made and reviewed by a management team.

Usually job grades are embedded in titles, and promotions make the new job grade public through a new title. Thus a person’s job grade is generally considered public information. If employees are fairly placed in their job grade, and promoted only when they are clearly performing at the new job grade, then salary differences based on job grade are generally perceived to be fair. Thus a team can have both senior and junior people, generalists and highly skilled specialists, all making different amounts of money. As long as the system of determining job grades and promotions is transparent and perceived to be fair, this kind of differential pay is rarely a problem.

The management team at Sue’s company decided to focus on a promotion process that did not use either a ranking or a quota system. Instead, clear promotion criteria would be established for each level, and when someone had met the criteria, they would be eligible for promotion. A management committee would review each promotion proposal and gain a consensus that the promotion criteria were met. This would be similar to existing committees that reviewed promotions to fill open supervisor or management positions.

Guideline #2: De-emphasize The Merit Pay System
When the primary tool for significant salary increases is promotion, then it’s important to focus as much attention as possible on making sure the promotion system is fair. When it comes to the evaluation system that drives merit pay, it’s best not to try too hard to sort people out. Studies show that when information sharing and coordination are necessary, organizations that reduce pay differences between the highest and the lowest paid employees tend to perform better over time.

Use evaluations mainly to keep everyone at an appropriate level in their salary grade. Evaluations might flag those who are ready for promotion and those who need attention, but that should trigger a separate promotion or corrective action process. About four evaluation grades are sufficient, and a competent supervisor with good evaluation criteria and input from appropriate sources can make fair evaluations that accomplish these purposes.

Even when annual raises are loosely coupled to merit, evaluations will always be a big deal for employees, so attention should be paid to making them fair and balanced. Over the last decade, balanced scorecards have become popular for management evaluations; at least in theory, balanced scorecards ensure that the multiple aspects of a manager’s job all receive attention. A simple version of a balanced scorecard might also be used for merit pay evaluations, to emphasize the fact that people must perform well on many dimensions to be effective. A supervisor might develop a scorecard with each employee that takes into account team results, new competencies, leadership, and so on. It is important that employees perceive that the input to a scorecard is valid and fairly covers the multiple aspects of their job. It is important to keep things simple, because too much complexity will unduly inflate the attention paid to a pay system which works better when it is understated. Finally, scorecards should not be used to feed a ranking system.

Guideline #3: Tie Profit Sharing To Economic Drivers
Nucor Steel decided to get into the steel business in 1968, and thirty years later it was the biggest steel company in the US. When Nucor started up, Bethlehem Steel considered it a mere gnat, but 35 years later Bethlehem Steel was not only bankrupt, but sold off for assets. So Nucor Steel is one very successful company that has done a lot of things right in a tough industry. Quite surprisingly, Nucor has a decades-old tradition of paying for performance. How does the company avoid the dysfunctions of rewards?

Nucor Steel started with the realization that profit per ton of finished steel is its key economic driver, and based its profit sharing plan on the contribution a team makes to improving this number. So for example, a team that successfully develops a new steel-making process or starts up a new plant on schedule will not see an increase in pay until the process or plant has improved the company’s profit per ton of steel. Thus Nucor avoids sub-optimization by tying its differential pay system as close to the economic driver of its business as possible.
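A sketch of how such a plan might be wired to the driver (the formula, the 10% share, and all the numbers are my own illustration, not Nucor's actual plan):

```python
# Illustrative profit sharing tied to an economic driver (profit per ton).
# Bonus pays out only when the team's work improves the driver itself.

def team_bonus(baseline_profit_per_ton, current_profit_per_ton,
               tons_shipped, team_share=0.10):
    """Share a fraction of the improvement in profit, not of activity."""
    improvement = current_profit_per_ton - baseline_profit_per_ton
    if improvement <= 0:
        # New plant started on schedule but no driver gain yet: no bonus.
        return 0.0
    return improvement * tons_shipped * team_share

print(team_bonus(50.0, 50.0, 100_000))  # activity without improvement pays 0.0
print(team_bonus(50.0, 56.0, 100_000))  # a $6/ton gain across 100,000 tons pays out
```

The key property is that finishing a project is not the payout trigger; moving the economic driver is.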

Guideline #4: Reward Based on Span of Influence, Not Span of Control
Conventional wisdom says that people should be evaluated based on results that are under their control. However, this kind of evaluation creates competition rather than collaboration. Nucor makes sure that its profit sharing formula rewards relatively large teams, not just the individuals or small groups who have direct responsibility for an area. Following this principle, if a software program creates a significant profit increase, everyone from those who brought the idea into the company to developers and testers to operations and support people to the end users should share in any reward.

Nucor Steel works hard to create a learning environment, where experts move from one plant to another, machine operators play a significant role in selecting and deploying new technology, and tacit knowledge spreads rapidly throughout the company. Its reward system encourages knowledge sharing by rewarding people for influencing the success of areas they do not control.

How, exactly, can rewards be based on span of influence rather than span of control? I recommend a technique called ‘Measure UP’. No matter how hard you try to evaluate knowledge work or how good a scorecard you create, something will go unmeasured. Over time, the unmeasured area will be de-emphasized and problems will arise. We have a tendency to add more measurements to the scorecard to draw attention to the neglected areas.

However, it is a lot easier to catch everything that falls between the cracks by reducing the number of measurements and raising them to a higher level. For instance, instead of measuring software development with cost and schedule and earned value, try creating a P&L or ROI for the project, and help the team use these tools to drive tradeoff decisions.
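As a concrete sketch of this idea, consider replacing cost/schedule variance with a simple project ROI that the team can use to reason about tradeoffs. All figures here are invented for illustration:

```python
# Hypothetical sketch: judge a project by a simple ROI rather than by
# cost, schedule, and earned-value variances. All figures are invented.
def project_roi(expected_revenue, dev_cost, operating_cost):
    """Return on the total money spent to build and run the system."""
    profit = expected_revenue - dev_cost - operating_cost
    return profit / (dev_cost + operating_cost)

# Tradeoff: does spending an extra $100k on a revenue-raising feature
# improve the project's ROI, or should the team cut the feature?
base = project_roi(expected_revenue=1_500_000,
                   dev_cost=600_000, operating_cost=150_000)
with_feature = project_roi(expected_revenue=1_750_000,
                           dev_cost=700_000, operating_cost=150_000)
print(f"{base:.2f} vs {with_feature:.2f}")  # 1.00 vs 1.06
```

A single higher-level number like this catches tradeoffs that no scorecard of lower-level measurements can enumerate in advance.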

Guideline #5: Find Better Motivators Than Money
While monetary rewards can be a powerful driver of behavior, the motivation they provide is not sustainable. Once people have an adequate income, motivation comes from things such as achievement, growth, control over one’s work, recognition, advancement, and a friendly working environment. No matter how good your evaluation and reward system may be, don’t expect it to do much to drive stellar performance over the long term.

In the book “Hidden Value”, Charles O’Reilly and Jeffrey Pfeffer present several case studies of companies that obtain superb performance from ordinary people. These companies have people-centered values which are aligned with actions at all levels. They invest in people, share information broadly, rely on teams, and emphasize leadership rather than management. Finally, they do not use money as a primary motivator; they emphasize the intrinsic rewards of fun, growth, teamwork, challenge, and accomplishment.

Treat monetary rewards like explosives, because they will have a powerful impact whether you intend it or not. So use them sparingly and with caution. They can get you into trouble much faster than they can solve your problems. Once you go down the path of monetary rewards, you may never be able to go back, even when they cease to be effective, as they inevitably will. Make sure that people are fairly and adequately compensated, and then move on to more effective ways to improve performance.

Six Months Later
The New Venture Team is having another celebration. They had all been surprised when the VP came to their team meeting six months earlier. But they quickly recovered and told her that they each wanted to be the best, they wanted to work with the best, and they did not appreciate the implication that some of them were better than others. When the VP left, the team cheered Sue for sticking up for them, and then got down to work with renewed enthusiasm. Now, two releases later, the customers were showing their appreciation with their pocketbooks.

There haven’t been any dramatic pay increases and only the occasional, well-deserved promotion. But the company has expanded its training budget and New Venture team members have found themselves mentoring other teams. Sue is rather proud of them all as she fills out the newly revised appraisal input forms that have more team-friendly evaluation criteria. This time Sue is confident that her judgment will not be questioned.

Published in Better Software Magazine July, 2004 

Screen Beans Art, © A Bit Better Corporation

January 6, 2003

Measure Up

Getting measurements right can be devilishly difficult, but getting them wrong can be downright dangerous. If you look underneath most self-defeating behavior in organizations, you will often find a well-intentioned measurement gone wrong. Consider the rather innocent-sounding measurement of productivity, and its close cousin, utilization. One of the biggest impediments to adopting Just-in-Time manufacturing was the time-honored practice of trying to extract maximum productivity out of every machine. The inevitable result was that mounds of inventory collected to feed machines and additional piles of inventory stacked up at the output side of the machines. The long queues of material slowed everything down, as queues always do. Quality problems often took days to surface, and customer orders often took weeks to fill. Eventually manufacturing people learned that running machines for maximum productivity was a sub-optimizing practice, but it was a difficult lesson.

As software development organizations search for productivity in today’s tight economy, we see the same lesson being learned again. Consider the testing department that is expected to run at 100% utilization. Mounds of code tend to accumulate at the input side of the testing department, and piles of completed tests stack up at the output side. Many defects lurk in the mountain of code, and more are being created by developers who do not have immediate feedback on their work. When a testing department is expected to run at full utilization, the likely result is an increased defect level, resulting in even more work for the testing department.
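Queueing theory makes the cost of full utilization concrete. The sketch below uses the standard single-server (M/M/1) approximation, in which the average queue length grows as rho²/(1 − rho) for utilization rho; treating a testing department as a single server is of course a simplification:

```python
# Sketch of why 100% utilization backfires: under the M/M/1 queueing
# model, the average number of items waiting is rho^2 / (1 - rho),
# which explodes as utilization rho approaches 1.
def avg_queue_length(utilization):
    rho = utilization
    return rho * rho / (1.0 - rho)

for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {u:.0%}: ~{avg_queue_length(u):.1f} items queued")
```

At 50% utilization almost nothing waits; at 99% the queue is roughly two hundred times longer, and every item in it is delayed feedback.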

Nucor Steel grew from a startup in 1968 into a $4 billion giant, attributing much of its success to an incentive pay system based on productivity. Productivity? How did Nucor keep its productivity measurement robust and honest throughout all of that growth? How did it avoid the sub-optimization so common to productivity measurements?

The secret is that Nucor measures productivity at a team level, not at an individual level. For example, a plant manager is not rewarded on the productivity of his or her plant, but on the productivity of all plants. The genius of Nucor’s productivity measurement is that it avoids sub-optimization by measuring results at one level higher than one would expect, thus encouraging knowledge sharing and system-wide optimization.
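The "one level higher" idea can be shown in a trivial sketch. The plant names and output numbers below are invented; the point is only that every manager is scored on the same company-wide number:

```python
# Hypothetical sketch of measuring one level up: each plant manager's
# reward basis is the productivity of ALL plants, not their own plant.
plant_output = {"plant_a": 120.0, "plant_b": 95.0, "plant_c": 110.0}  # invented tons/hour

def manager_reward_basis(plants):
    # Every manager is scored on the same company-wide average, so
    # helping a struggling plant improves your own reward too.
    return sum(plants.values()) / len(plants)

# A manager of the best plant still gains by lifting the weakest one.
print(round(manager_reward_basis(plant_output), 2))
```

Because no manager can improve their score by beating a sibling plant, the measurement pulls knowledge across plant boundaries instead of walling it in.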

How can this be fair? How can plant managers be rewarded based on productivity of plants over which they have no control? The problem is, if we measure people solely on results over which they have full control, they have little incentive to collaborate beyond their own sphere of influence to optimize the overall business. While local measurements may seem fair to individuals, they are hardly fair to the organization as a whole.

Measure-UP, the practice of measuring results at the team rather than the individual level, keeps measurements honest and robust. The simple act of raising a measurement one level up from the level over which an individual has control changes its dynamic from a personal performance measurement to a system effectiveness indicator.

In the book “Measuring and Managing Performance in Organizations” (Dorset House, 1996), Robert Austin discusses the dangers of performance measurements. The beauty of performance measurements is that “You get what you measure.” The problem with performance measurements is that “You get only what you measure, nothing else.” You tend to lose the things that you can’t measure: insight, collaboration, creativity, dedication to customer satisfaction.

Austin recommends aggregating individual performance measurements into higher-level informational measures that hide individual results in favor of group results. As radical as this may sound, it is not unfamiliar. W. Edwards Deming, the noted quality expert, insisted that most quality defects are not caused by individuals, but by management systems that make error-free performance all but impossible. Attributing defects to individuals does little to address the systemic causes of defects, and placing blame on individuals when the problem is systemic perpetuates the problem.

Software defect measurements are frequently attributed to individual developers, but the development environment often conspires against individual developers and makes it impossible to write defect-free code. Instead of charting errors by developer, a systematic effort to provide developers with immediate testing feedback, along with a root cause analysis of remaining defects, is much more effective at reducing the overall software defect rate.

By aggregating defect counts into an informational measurement, and hiding individual performance measurements, it becomes easier to address the root causes of defects. If an entire development team, testers and developers alike, feel responsible for the defect count, then testers will tend to become involved earlier and provide more timely and useful feedback to developers. Defects caused by code integration will become everyone’s problem, not just the unlucky person who wrote the last bit of code.
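One way to picture such an informational measurement: bucket the defect log by root cause and discard the per-developer tallies entirely. The log entries below are invented:

```python
# Sketch: aggregate defect counts into a team-level, informational
# measurement bucketed by root cause; individual names never appear.
from collections import Counter

# Invented defect log entries of the form (developer, root_cause):
defects = [
    ("dev1", "unclear requirement"),
    ("dev2", "integration break"),
    ("dev1", "integration break"),
    ("dev3", "missing test feedback"),
    ("dev2", "integration break"),
]

def team_defect_report(log):
    # Count by systemic cause only, dropping the developer field.
    return Counter(cause for _dev, cause in log)

print(team_defect_report(defects).most_common(1))
# [('integration break', 3)] -- points at the system, not at a person
```

The report says "fix the integration process," which the whole team can act on; a per-developer chart would only say "blame dev2."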

It flies in the face of conventional wisdom to suggest that the most effective way to avoid the pitfalls of measurements is to use measurements that are outside the personal control of the individual being measured. But conventional wisdom is misleading. Instead of making sure that people are measured within their span of control, it is more effective to measure people one level above their span of control. This is the best way to encourage teamwork, collaboration, and global, rather than local, optimization.


Lessons from Planned Economies

Just as a market economy which relies on the collective actions of intelligent agents gives superior performance in a complex and changing economic environment, so too an agile project leadership approach which leverages the collective intelligence of a development team will give superior performance in a complex and changing business environment. However, conventional project management training focuses on using a plan as the program for action; it does not teach project leaders how to create a software development environment that fosters self-organization and learning. Since very few courses with such a focus are available today, this paper proposes a curriculum for agile software development leaders.

Planned Economies
In the middle of the 20th century, dozens of countries and millions of people believed that central planning was the best way to run their economies. Even today there are many people who can’t quite understand why market economies invariably outperform planned economies; it would seem that at least some of the planned economies should have flourished. After all, there are advantages to centralizing economic decisions: virtually full employment is possible; income can be distributed more equally; central coordination should be more efficient; directing resources into investment should spur growth. So why did planned economies fail?

There are two fundamental problems with planned economies: First, in a complex and changing economic system, it is impossible to plan for everything, so a lot of things fall between the cracks. For instance, planned economies usually suffer a shortage of spare parts, because no one plans for machines to break down. Secondary effects such as environmental impact are often ignored. Furthermore, planners do not have control of the purchase of goods, so they have to guess what consumers really want. Inaccurate forecasts are amplified by a long planning cycle, causing chronic shortages and surpluses.

The second problem with planned economies is diminished incentives for individuals. When compensation does not depend on contribution, there is little to gain from working hard. When incentives are tied to meeting targets, risk-averse managers focus on routine production to meet goals. The stronger the tie between rewards and targets, the more disincentive there is for being creative or catching the things that fall between the cracks.

If we look at conventional software project management, we see practices similar to those used in planned economies, and we see similar results. Among projects over $3 million, less than 10% meet the conventional definition of success: on time, on budget, on scope. For projects over $6 million the number drops below 1% [1]. Interestingly, the underlying causes of failure of planned economies are the same things that cause failure in software projects, and further, the remedy is similar in both cases.

The difference between a planned and a market economy is rooted in two different management philosophies: management-as-planning/adhering and management-as-organizing/learning. Management-as-planning/adhering focuses on creating a plan that becomes a blueprint for action, then managing implementation by measuring adherence to the plan. Management-as-organizing/learning focuses on organizing work so that intelligent agents know what to do by looking at the work itself, and improve upon their implementation through a learning process.

The Planning/Adhering Model Of Project Management
Conventional wisdom holds that managing software projects is equivalent to meeting pre-planned cost, schedule and scope targets. The unquestioned dominance of cost, schedule and scope – often to the exclusion of less tangible factors such as usability or realization of purpose – draws heavily on the contract administration roots of project management. Therefore project management training and certification programs tend to focus on the management-as-planning/adherence philosophy. This philosophy has become entrenched because it seems to address two fears: a fear of scope-creep, and a fear that the cost of changes escalates significantly as development progresses.

However, management-as-planning/adherence leads to the same problems with software projects that planned economies suffered: in a complex and changing environment, it is virtually impossible for the plan to cover everything, and measuring adherence to the plan diminishes incentives for creativity and catching the things that fall between the cracks.

In the classic article ‘Managing by Whose Objectives?,’[4] Harry Levinson suggests that the biggest problem with management-by-objectives is that important intangibles which are not measurable fail to get addressed, because they are not in the action plan of any managers. Often these are secondary ‘hygiene’ factors, similar to environmental considerations in a planned economy.

In the book ‘Measuring and Managing Performance in Organizations’[5], Robert Austin makes the same point: over time, people will optimize what is measured and rewarded. Anything which is not part of the measurement plan will fade from importance. Austin points out that managers are often uncomfortable with the idea of not being able to measure everything, so they compensate through one of three techniques:
  • Standardization. By creating standards for each step in a development process, it is hoped that all steps in the project can be planned and measured, and nothing will be missed.
  • Specification. Specification involves constructing a detailed model of the product and/or process and planning every step in detail.
  • Subdivision, functional decomposition. The Work Breakdown Structure (WBS) is the classic example of attempts to decompose a project into steps so that all steps can be planned. 
Conventional project management practices have emphasized all of these techniques to help a project manager be certain that everything is covered in the project plan. However, just as in a planned economy, these techniques are insufficient to catch everything in all but the simplest of projects. In fact, drilling down to detail early in the project has the opposite effect – it tends to create blind spots, not resolve them. By taking a depth-first rather than a breadth-first approach to planning, mistakes and omissions become more likely,[6] and these tend to be more costly because of early investment in details. Thus an early drill-down approach tends to amplify, not reduce, the cost of change.

A management-as-planning/adherence approach also tends to amplify, not reduce, scope-creep. In many software development projects, a majority of the features are seldom or never used.[1] Part of the reason for this is that asking clients at the beginning of a project what features they want, and then preventing them from changing their minds later, creates strong incentives to increase the number of features requested, just in case they are needed. While limiting scope usually provides the best opportunity for reducing software development costs, fixing scope early and controlling it rigidly tends to expand, not reduce scope.

Just as in planned economies, management-as-planning/adhering tends to have unintended consequences that produce precisely the undesirable results that the plans were supposed to prevent. The problem lies not in the planning, which is very useful, but in using the plan as a roadmap for action and measuring performance against the plan.

The Organizing/Learning Model Of Project Management
Market economies deal with the problems of planned economies by depending upon collaborating intelligent agents to make decisions within an economic framework. In market economies, it is the job of the government to organize the economic framework with such things as anti-trust laws and social safety nets. Economic activity is conducted by intelligent agents who learn from experience what is needed and how to fill the needs.

Of course, the economies of individual countries dwarf most software projects, so we might look further to find examples of management-as-organizing/learning. We will explore two domains: manufacturing and product development.

Throughout most of the 20th century, mass production in the US focused on getting things done through central planning and control, reflecting the strong influence of Frederick Taylor’s Scientific Management. The climax came when computer systems made it possible to plan the exact movement of materials and work throughout a plant. Material Requirements Planning (MRP) systems were widely expected to increase manufacturing efficiency in the 1980’s, but in fact, most MRP systems were a failure at detailed production planning. They failed for the same reasons that planned economies failed: the systems could not readily adapt to slight variations in demand or productivity. Thus they created unworkable schedules, which had to be ignored, causing the systems to become ever more unrealistic.

As the centralized MRP planning systems were failing, Just-in-Time systems appeared as a counterpoint to Scientific Management. Just-in-Time forsakes central planning in favor of collaborating teams (intelligent agents). The environment is organized so that the work itself and the neighboring teams, rather than a central plan, signal what needs to be done. When problems occur, the root cause is sought out and eliminated, creating an environment in which intelligent agents continually improve the overall system. In almost all manufacturing environments, implementing Just-in-Time trumps any attempt to plan detailed production activities with an MRP system. These systems succeed for the same reason a market economy succeeds: intelligent agents are better at filling in the gaps and adapting to variation than a centrally planned system.
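The pull-signal mechanism can be sketched as a toy program. The jobs and the WIP limit are invented; the point is that an empty downstream slot, not a central schedule, is what authorizes the next piece of work:

```python
# Toy pull-signal sketch: a downstream station takes work only when its
# work-in-progress is below a limit, so the work itself (the empty slot)
# signals what to do next -- no central schedule required.
from collections import deque

WIP_LIMIT = 3

def pull(upstream, downstream):
    """Pull one item downstream if there is a free slot; return success."""
    if len(downstream) < WIP_LIMIT and upstream:
        downstream.append(upstream.popleft())
        return True
    return False  # no free slot (or nothing ready): the signal is "stop"

queue = deque(["job1", "job2", "job3", "job4", "job5"])
station = []
while pull(queue, station):
    pass
print(station)  # ['job1', 'job2', 'job3'] -- pulling stops at the WIP limit
```

If demand shifts, the upstream queue simply holds different jobs; nothing in the schedule has to be replanned.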

An argument can be made that manufacturing analogies are not appropriate for software development, because manufacturing is repetitive, while projects deal with unique situations. Because of this uniqueness, the argument goes, management-as-planning/adhering is the only way to maintain control of a design and development environment. A look at product development practices shows that the opposite is true: creating a detailed plan and measuring adherence to that plan is actually a rather ineffective approach to a complex product development project.

In the late 1980’s Detroit was shocked to discover that a typical Japanese automotive company could develop a new car in two-thirds the time and for half the cost of a typical US automaker.[7] The difference was that product development in Japan used a concurrent development process, which allows for learning cycles during the design process as well as on-going communication and negotiation among intelligent agents as design proceeds.

Just as market economies routinely outperform planned economies, concurrent development routinely outperforms sequential development. Replacing sequential (plan-up-front) engineering with concurrent (plan-as-you-go) engineering has been credited with reducing product development time by 30-70%, engineering changes by 65-90%, and time to market by 20-90%, while improving quality by 200-600%, and productivity by 20-110%.[8]

Based on experience from other domains, management-as-organizing/learning would appear to have a better chance of resulting in successful software projects than the prevailing management-as-planning/adhering approach. An examination of the Agile Manifesto shows that agile software development approaches favor the management-as-organizing/learning philosophy. (See Table 1.) Therefore, we can expect that in the long run, agile software development might significantly outperform traditional software development practices. In fact, evidence is mounting that agile approaches can be very effective. [9]

Table 1. Mapping Values from the Agile Manifesto to Management Philosophies

A Curriculum For Agile Software Project Leadership
Existing training for project management appears to be largely focused on the management-as-planning/adhering philosophy. Courses seem to be aimed at obtaining certification in an approach to project management developed for other domains, such as facilities construction or military procurement. Even courses aimed specifically at software development tend to focus on work breakdown and a front-end-loaded approach to managing scope. As we have seen, this is not a good match for concurrent development or agile software development.

It would seem that a curriculum should be available for leaders of agile software development projects, given the dismal track record of current approaches and the potential of agile software development. Project managers who know how to develop work breakdown structures and measure earned value often wonder what to do with an agile software development project. Senior managers often wonder how to improve the skills of project leaders. Courses on management-as-organizing/learning are needed to fill this void, but there seem to be few project management courses with this focus. To help make agile project leadership training more widely available, this article outlines a possible curriculum for such courses.

Change the Name: Project Leadership
Since this is new territory, we may as well start with a new name, and move away from the administrative context of the word management. All projects, agile or otherwise, benefit from good leaders; that is, people who set direction, align people, and create a motivating environment. By using the term leadership we distinguish this course from one which focuses on the management tasks of planning, budgeting, staffing, tracking, and controlling.

Setting Direction
Planning is a good thing; the ultimate success of any project depends upon the people who implement it understanding what constitutes success. Planning becomes brittle when it decomposes the problem too fast and moves too quickly to solutions. The best approach to early planning is to move up one notch and take a broader view, rather than decompose the problem and commit to detail too early.[6] A project leader starts by understanding the purpose of the project and keeping that purpose in front of the development team at all times.

Organizing Through Iterations
The idea of management-as-organizing/learning is to structure the work so that developers can figure out what to do from the work itself. The fundamental tool for doing this is short iterations which develop working software delivering business value. Project leaders need to know how to organize iteration planning meetings and how to structure the development environment and workday so that people know what to do when they come in to work every day, without being told.

This part of the curriculum must cover such concepts as story cards, backlog lists, daily meetings, pair programming, and information radiators. It should also stress the importance of organizing worker-to-worker collaboration between those who understand what the system must do to provide value and those who understand what the code does.

Concurrent Development
Strategies for concurrent development are an important tool for project leaders, especially in organizations which are used to sequential development. General strategies include:
  • sharing partially complete design information
  • communicating design constraints instead of proposed solutions
  • maintaining multiple options
  • avoiding extra features
  • developing a sense of how to absorb changes
  • developing a sense of what is critically important in the domain
  • developing a sense of when decisions must be made
  • delaying decisions until they must be made
  • developing a quick response capability
System Integrity
Project leaders must assure that the software developed under their guidance has integrity. This starts with assuring that the basic tools are in place for good software development: version control, a build process, automated testing, naming conventions, software standards, etc. Leaders must assure that the development team is guided by the true voice of the customer, so that the resulting system delivers value both initially and over time. They must assure that technical leadership establishes the basis of a sound architecture and an effective interface. They must make sure that comprehensive tests are used to provide immediate feedback, as well a framework so that refactoring can safely take place.

Leading Teams
People with experience in traditional project management who are about to lead agile software development projects might need some coaching in how to encourage a team to make its own commitments, estimate its own work, and self-organize around iteration goals. Project leaders attend the daily meetings, listen to and solve team problems, serve as an intermediary between management and the team, secure necessary resources and technical expertise, resolve conflicts, and keep everyone working together effectively; but they do not tell developers how to do their jobs. Project leaders coordinate the end-of-iteration demonstration and the beginning-of-iteration planning, making sure that work is properly prioritized and all stakeholder interests are served.

Measurements
Feature lists or release plans, along with associated effort estimates, are often developed early in an agile project. The difference from traditional project management occurs when actual measurements begin to vary from these plans; in agile development, it is assumed that the plan is in error and needs to be revised. By measuring the actual velocity or burndown rate, a far better picture of project health can be obtained than by measuring variance from a guesstimate. Creating more accurate estimates becomes easier as developers gain experience in a domain and customers see working software. Leaders should learn how to combine reports of actual progress with increasingly accurate estimates into a tool for negotiating the scope of a project; this can be far more effective at limiting scope than the traditional method of fixing scope and controlling it with change approval mechanisms.
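The arithmetic behind a velocity-based forecast is simple enough to sketch. The point counts below are invented:

```python
# Sketch: measure actual velocity and revise the forecast, rather than
# reporting variance from the original guesstimate. Numbers are invented.
import math

def velocity(completed_points):
    """Average points actually finished per iteration."""
    return sum(completed_points) / len(completed_points)

def iterations_remaining(backlog_points, completed_points):
    """Revised forecast based on demonstrated velocity."""
    return math.ceil(backlog_points / velocity(completed_points))

done = [18, 22, 20]                     # per-iteration actuals
print(velocity(done))                   # 20.0
print(iterations_remaining(130, done))  # 7 -- revise the plan, don't assign blame
```

A forecast like this also arms the leader for scope negotiation: if seven more iterations is too many, the backlog, not the plan, is what gets cut.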

A useful technique for project leaders is to aggregate all measurements one level higher than normal.[5] This encourages collaboration and knowledge sharing because issues are called to the attention of a larger group of people. It also helps to avoid local optimization and the dangers of individual performance measurements. This technique is useful, for instance, for defect measurements.

Large Projects
Various techniques for synchronizing agile development across multiple teams are important for project managers to understand. Some of the techniques that might be covered are: divisible architectures, daily build and smoke test, spanning applications, loosely coupled teams that develop interfaces before modules.

Conclusion
We have often heard that the sequential, or waterfall, approach to software development projects is widely known to be ineffective, but it seems to be quite difficult to do things differently. We have also heard that many, many features in most systems, possibly even a majority, are not needed or used; yet limiting scope to what is necessary seems to be an intractable problem. One solution to these problems can be found in concurrent engineering, which is widely used in product development as an alternative to sequential development. Concurrent engineering works more or less like a market economy: both depend on collaborating intelligent agents, both allow the end result to emerge through communication, and both provide incentives for individuals to be creative and do everything necessary to achieve success.
_______________
References:
[1] Johnson, Jim, Chairman of The Standish Group, Keynote “ROI, It’s Your Job,” Third International Conference on Extreme Programming, Alghero, Italy, May, 26-29, 2002.

[2] Johnston, R B, and Brennan, M, “Planning or Organizing: the Implications of Theories of Activity for Management of Operations, Omega: The International Journal of Management Science, Vol 24 no. 4 pp. 367-384, Elsevier Science, 1996.

[3] Koskela, Lauri, “On New Footnotes to Shingo”, 9th International Group for Lean Construction Conference, Singapore, August 6-8, 2001.

[4] Levinson, Harry, “Management by Whose Objectives?” Harvard Business Review, Vol 81, no 1, January, 2003, Reprint of 1970 article.

[5] Austin, Robert D., Measuring and Managing Performance in Organizations, Dorset House, 1996.

[6] Thimbleby, Harold, “Delaying Commitment”, IEEE Software, vol 5, no 3, May, 1988

[7] Clark, Kim B, Fujimoto, Takahiro, Product Development Performance; Strategy, Organization, and Management in the World Auto Industry, Harvard Business School Press, Boston, 1991.

[8] Thomas Group Inc., National Institute of Standards & Technology Institute for Defense Analyses, from Business Week, April 30, 1990, pp 111

[9] Weber Morales, Alexandra, “Extreme Quality”, Software Development, Volume 11, No 2, February 2003.

[10] Kotter, John P, “What Leaders Really Do,” Harvard Business Review, Vol 79, no 11, December, 2001, Reprint of 1990 article.


April 16, 2002

Righteous Contracts

Right·eous, a. Doing that which is right; yielding to all their due; just; equitable.
[Webster’s Revised Unabridged Dictionary, 1913]

Righteous contracts. A good name for contracts whose purpose is to assure that the parties act in a just and equitable manner and yield to the other their due. Righteous contracts are those governing investments in specialized assets – assets which are very important to a business, but have no value anywhere else. For example, software developed specifically for a single company is a specialized asset, since it is useful only to the company for which it was developed. Agreements to develop specialized assets create a bilateral monopoly; that is, once the parties start working together, they have little option but to continue working together. This bilateral monopoly provides an ideal environment for opportunistic behavior on the part of both supplier and customer.

Thus the purpose of righteous contracts is to prevent opportunistic behavior, to keep one party from taking advantage of another when market forces are not in a position to do so. In a free market where there are several competent competitors, market forces control opportunism. This works for standard or commodity components, but not for specialized assets.

The traditional way to develop specialized assets has been to keep the work inside a vertical organization, where opportunism is controlled by administration. Inside a company, local optimization would presumably be prevented by someone positioned to arbitrate between departments for the overall good of the enterprise. Vertical integration allows a company to deal with uncertainty and change in a rapid and adaptive manner.

Outsourcing
Recently, however, outsourcing has become common in many companies, for very good reasons. An outside company may have lower labor costs or more specialized experience in an area that is not one of the firm's core competencies. The cost of producing a service or asset can be considerably lower in an outside company. Of course, there are transaction costs associated with outsourcing, and the total cost (production costs plus transaction costs) must be lower, or vertical integration would make more sense.

Transaction costs associated with outsourcing include the cost of selecting potential suppliers, negotiating and renegotiating agreements, monitoring and enforcing the agreement, and billing and tracking payments. Transaction costs also include inventory and transportation beyond that needed for vertical integration. In addition, there are risks associated with outsourcing, which may result in additional costs. One such cost is that of diminished communication. For example, development of any kind usually requires intense communication between various technical specialties and target users. If distance or intellectual property issues reduce that communication, it will cost more to develop the asset and the results may suffer as well. In addition, moving a specialized skill outside the company may incur lost opportunity costs.

There are two types of contracts used for developing specialized assets: contracts executed before the supplier does the development work, and contracts executed after the work is done. A contract executed before work is done is known as a before-the-fact (or ex ante) contract. There are two types of before-the-fact contracts – fixed-price contracts and flexible (time-and-materials) contracts. Once these contracts are executed, they set up a bilateral monopoly, fraught with opportunities for exploitation on one side or the other. Therefore, the purpose of these contracts is to set up control systems to prevent exploitation.

A contract executed after work is done is called an after-the-fact (or ex post) contract. Suppose a supplier develops a system that it thinks a customer will find valuable and then tries to sell the system. In this case, control comes after the fact; the supplier makes its own decisions, and its reward is based on the results. Of course this is a risky proposition, so the supplier has to hedge its bets. One way to do this is to sell the system to multiple customers, basically making it into a commodity product. But this doesn't help a company that wants suppliers to develop proprietary components for it. In order to entice suppliers to develop specialized assets prior to a contract, a company usually sets up a sole-source or favored-source program. If a company treats its favored suppliers well, the suppliers develop confidence that their investments will be rewarded and continue to make investments.

On the surface, after-the-fact contracts may seem implausible for software development, but in fact, they are the best solution for contracting a development project. Moreover, the best kind of development processes to use inside a company are those that mimic after-the-fact contracts. How can this be? The explanation starts by understanding why before-the-fact contracts provide poor governance for development projects.

Fixed-Price Contracts
Let’s examine the most commonly used before-the-fact contract, the fixed price contract. A key motivator for fixed price contracts is the desire of a customer to transfer risk to the supplier. This may work for simple, well-defined problems, but it is inappropriate for wicked problems.[1] If the project is complex or uncertain, a fixed price contract transfers a very high risk to the supplier. If the supplier is not equipped to deal with this risk, it will come back to haunt the customer.

Risk should be borne by the party best able to manage it. If a problem is technically complex, then the supplier is most likely to be in a position to manage it. If a problem is uncertain or changing, then the customer is in the best position to manage it. Transferring the risk for such problems to the supplier is not only unfair, it is also unwise. There is no such thing as a win-lose contract. If a supplier is trapped on the wrong side of a win-lose contract, the bilateral monopoly which has been formed will trap the customer as well. Both sides lose in the end.

Fixed price contracts do not usually lower cost, because there is always at least some risk in estimating the cost. If the supplier is competent, it will include this risk in the bid. If the supplier does not understand the complexity of the problem, it is likely to underbid. The process of selecting a supplier for a fixed-price contract has a tendency to favor the most optimistic (or the most desperate) supplier. Consequently, the supplier least likely to understand the project’s complexity is most likely to be selected. Thus fixed price contracts tend to select the supplier most likely to get in trouble.

Therefore it is quite common for the customer to find a supplier unable to deliver on a fixed price contract. Because the customer no longer has the option to choose another supplier, it must often come to the rescue of the supplier. Alternatively, the supplier might be able to cover its loss, but most likely it will attempt to make the loss up through change orders which add more revenue to the contract. This leads the customer to aggressively resist any change to the contract. Faced with no other way to recoup the loss, a supplier will be motivated to find ways to deliver less than the customer really wants, either by lowering the quality or reducing the features.

The customer using fixed price contracts to transfer responsibility and risk will often find both back on their plates in the end, and if so, they will be worse off because of it.

Flexible Contracts
“Customers should prefer flexible-price contracts to fixed-price contracts where it is cheaper for the customer to deal with uncertainty than it is for the contractor to do so or where the customer is more concerned with the ability of the contractor to provide a product that works than with price,” writes Fred Thompson in the Handbook of Public Administration, Second Edition (Rabin, Hildreth, and Miller, editors, New York: Marcel Dekker, Inc., 1998).

The flexible-price contract is designed to deal with uncertainty and complexity, but it does not do away with risk; it simply shifts it from the supplier to the customer. For example, after the DOD (U.S. Department of Defense) experienced some very high profile bailouts on fixed price contracts, it began to use more flexible-price contracts in situations where the government was better able to manage the risk. Of course, with the risk transferred to the customer, the supplier has little incentive to contain costs in a flexible-price contract, a point that did not escape contract negotiators at the DOD. In order to protect the public interest, the DOD perfected the controls it imposed on suppliers.

Controlling suppliers of flexible-price contracts evolved into a discipline called project management. The waterfall lifecycle grew out of military contracts, and an early focus of PMI (Project Management Institute) was DOD contracts. Companies with DOD contracts not only hire administrators to oversee compliance with contract requirements, they also add accountants to sort out allowable and unallowable costs. Flexible-price contracts invariably have high transaction costs, due to the high cost of control.

Controls Do Not Add Value
High transaction costs would be reasonable if they added value, but in fact, transaction costs are by definition non-value-adding costs. Fred Thompson (Ibid.) notes, “Controls contribute nothing of positive value; their singular purpose lies in helping us to avoid waste. To the extent that they do what they are supposed to do, they can generate substantial savings. But it must be recognized that controls are themselves very costly.”

One way to avoid the high cost of control in flexible-price contracts is not to use them. It may be better to do development internally, where it is easier to deal with uncertainty and respond to change. The question is, on what basis should an outsourcing decision be made? Thompson (Ibid.) counsels, “The choice of institutional design should depend upon minimizing the sum of production costs and transactions costs.” He also notes, “Vertical integration occurs because it permits transaction or control costs to be minimized.”
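
Thompson's rule reduces to simple arithmetic. The sketch below illustrates it with purely hypothetical cost figures (mine, not Thompson's): outsourcing wins on production cost but loses once heavy contract-control overhead is added in.

```python
# Thompson's rule: choose the institutional design that minimizes
# production costs plus transaction costs. All figures are hypothetical,
# in thousands of dollars, purely for illustration.

def total_cost(production, transaction):
    """Total cost of one institutional design for a development effort."""
    return production + transaction

# In-house: higher production cost, but low control (transaction) cost.
in_house = total_cost(production=900, transaction=100)

# Outsourced: cheaper production, but contract negotiation, monitoring,
# and enforcement add substantial non-value-adding transaction cost.
outsourced = total_cost(production=600, transaction=450)

choice = "in-house" if in_house <= outsourced else "outsource"
print(in_house, outsourced, choice)  # 1000 1050 in-house
```

The point of the sketch is that comparing production costs alone (600 vs. 900) points the wrong way; only the sum decides.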

An interesting problem with this equation is that vertical integration does not always work to minimize control costs. In fact, many organizations find themselves using DOD-like project management controls internally. It seems incongruous that control mechanisms which add cost but not value, and which were invented to prevent opportunistic behavior, would come to dominate development in the very place where they should not be needed. If the reason to develop internally is to provide flexibility in the face of uncertainty, then costly, change-resistant control practices are inappropriate. Traditional project control practices (that freeze requirements, require approval for changes, and track tasks instead of features) have a tendency to create waste, not value, when used inside a company.

After-the-fact Contracts
Let’s assume for the sake of argument that the choice has been made to outsource a complex, specialized development effort. The next question is, how can transaction costs be reduced? In the manufacturing industry, this is done with after-the-fact contracts.

Despite the obvious risks, it is not uncommon for suppliers to develop specialized components for a manufacturer prior to obtaining a contract. For example, 3M Optical Systems Division used to develop optically precise lenses for specialized automotive taillights. The reward was a one-year contract for a specific model. Unfortunately, after the first year, the automotive company would invariably find a cheaper way to make a similar lens, and Optical Systems would lose the business before it had recovered its investment. The division eventually decided that after-the-fact contracts with Detroit automakers were not profitable and left the business.

There are ways to make after-the-fact contracts work better. Toyota awards contracts for an entire run of a model, and uses target costing to manage costs. Thus a supplier knows that if it wins the business, it can recover its investment, while the customer is confident that the supplier will work to reduce costs in line with its economic requirements. In addition, the supplier understands that it will receive favored consideration for similar components in the future.

After-the-fact contracts require two elements to work: shared risk and trust. Toyota shares the risk with a component supplier by guaranteeing the business over the life of a model. Both parties agree to work together to try to meet a target cost profile over the life of the agreement. Note that meeting future target costs is neither guaranteed nor the sole responsibility of the supplier. In the best relationships, technical personnel from each company work freely together without worrying about proprietary information, both to meet target costs and to develop new components not yet subject to a contract.

If both parties are pleased with the results of the first contract, they develop trust and a good working relationship, and are more likely to continue to do business together. The supplier is inclined to risk more in developing new components when it has developed confidence that the investment will pay off. This kind of relationship can achieve all of the benefits of outsourcing and vertical integration combined.

But Software is Different…
You might be saying to yourself, this is fine if there is something to be manufactured and sold many times over, like a taillight, but in software we develop a system only once, it is complex and expensive, it is subject to many changes, and if it is not properly designed and executed, huge waste might result. Where is the parallel to THIS in manufacturing?

Consider the large and expensive metal dies which stamp out vehicle body panels. The cost of developing dies can account for half of a new model’s capital investment. Consequently, a great deal of time is spent in all automotive companies working to minimize the cost of these dies. The approach in Japan is distinctly different from that in the U.S., and dramatically more effective. The best Japanese companies develop stamping dies for half the cost and in half the time as their counterparts in the West. The resulting Japanese dies will be able to stamp out a body panel in 70% of the time needed by U.S. stamping operations.

From the classic book Product Development Performance by Clark and Fujimoto, Harvard Business School Press, 1991:
Japanese firms use an “early design, early cut” approach, while U.S. practice is essentially “wait to design, wait to cut.”

Because it entails making resource commitments while the body design is still subject to frequent changes, the Japanese early design, early cut approach entails significant risks of waste and duplication of resources…. Many engineering changes occur after final release of blueprints. At peak, hundreds of changes are ordered per month.

Behind the wait to design, wait to cut approach in U.S. projects is a desire to avoid expensive die rework and scrappage, which we would expect to be an inevitable consequence of the bold overlapping that characterizes the Japanese projects. However, our study revealed a quite different reality. U.S. firms, despite their conservative approach to overlapping, were spending more on engineering changes than Japanese firms. U.S. car makers reported spending as much as 30-50 percent of original die cost on rework due to engineering changes, compared to a 10-20 percent margin allowed for engineering changes in Japanese projects.

The Japanese cost advantage comes not from lower wages or lower material prices, but from fundamental differences in the attitudes of designers and tool and die makers toward changes and the way changes are implemented…. In Japan, when a die is expected to exceed its cost target, die engineers and tool makers work to find ways to compensate in other areas…. Die shops in high-performing companies develop know-how techniques for absorbing engineering changes at minimum cost…. In the United States, by contrast, engineering changes have been viewed as profit opportunities by tool makers….

Suppose a body engineer decides to change the design of a panel to strengthen body-shell rigidity. The high performers tend to move quickly. The body designer immediately instructs the die shop to stop cutting the die on the milling machine. Without paperwork or formal approval, the body designer goes directly to the die shop, discusses modifications with the die engineers, checks production feasibility, and makes the agreed-upon changes on the spot. Unless the changes are major, decisions are made at the working level. Traditionally, the die shop simply resumes working on the same die. Paperwork is completed after the change has been made and submitted to supervisors for approval. The cost incurred by the change is also negotiated after the fact. The attitude is “change now, negotiate later.”

In companies in which die development takes a long time and changes are expensive, the engineering change process is quite different. Consider the context in which changes occur. In extreme versions of the traditional U.S. system, tool and die makers are selected in a competitive bidding process that treats ‘outside’ tool shops as providers of a commodity service. The relationship with the die maker is managed by the purchasing department, with communication taking place through intermediaries and drawings. The individuals who design the dies and body panels never interact directly with the people who make the dies.

You would think that tool and die makers in Japan must be a department inside the automotive company. How else could it be possible for a designer to walk into a tool and die shop, stop the milling, make changes, and start up the milling again, leaving approvals and cost negotiations for later? But this is not the case. Tool and die makers are supplier companies in Japan, just as they are in the U.S. The difference lies in the attitudes of the two countries toward supplier contracts.

For Toyota in particular, a supplier is a partner. The basis of this partnership is a target cost for each area of the car. This translates into target costs for all development activities, including dies. Of course, U.S. companies have target costs for each component also, but they tend to impose the cost on the supplier without regard to feasibility. This has a tendency to create a win-lose relationship, leaving the supplier no option but to recoup costs through the change process.

In contrast, Toyota does not impose on suppliers cost targets that it does not know how to meet, and engineers from both companies work together to meet target costs. If something goes wrong and the targets cannot be met, Toyota shares the problem in an equitable manner. In this win-win environment, arm's-length exchange of information through written documentation and an extensive change approval process is unnecessary.

The Toyota Production System is founded on the premise that superior results come from eliminating anything which does not add value. Since control systems do not add value, they must be minimized, just like inventory and set-up times. Therefore supplier partnerships based on shared risk and trust are the preferred relationship. The hallmarks of these partnerships are worker-level responsibility for meeting business goals, intense communication at the technical level, a stop-the-line and fix-it-immediately attitude, and an emphasis on speed. Even for large, one-of-a-kind development projects which require highly specialized design, this approach produces dramatically superior results.

Can this work for Software Development?
Developing specialized dies is not that much different from developing specialized software. The key is to establish a partnership which allows true development to take place. Development is done using a repeated cycle of design-build-test, allowing the solution to emerge. The question is, how can a contract be written to support the emergent nature of development?

Neither fixed-price nor flexible-price contracts support the nature of software development. Development always involves tradeoffs, and an organization which facilitates the best tradeoff decisions will produce the best result. Before-the-fact contracts do not support the give-and-take between developers and customers necessary to make the best tradeoffs. A developer should be free to deal with problems as they arise, but with before-the-fact contracts, this activity has to be paid for by one company or the other. Since every hour must be accounted for, the give-and-take necessary for trade-off decisions is discouraged.

What is needed is a contract approach which allows developers and customers to work closely together to deliver business value for a target cost. Examples of how to do this in a vertical organization abound. There are many successful examples of using Scrum for product development. Microsoft's approach to product development is documented by Michael Cusumano in Microsoft Secrets (Simon and Schuster, 1998). The general approach is to set a clear business goal, fix resources, prioritize features, deliver working software in short cycles, and stop working on features when time runs out. This approach has a track record of delivering systems, even large ones, in a predictable timeframe for a predictable cost.
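
The fix-resources, prioritize-features, stop-when-time-runs-out approach can be sketched as a simple planning loop. This is an illustrative sketch only; the feature names and effort estimates are hypothetical, and a real team would replan at each iteration boundary rather than once up front.

```python
# Timebox planning sketch: resources (iterations x capacity) are fixed,
# features are taken in priority order, and whatever does not fit is
# deferred. Feature names and point estimates below are made up.

def plan(features, iterations, capacity_per_iteration):
    """features: list of (name, estimate) tuples, highest priority first."""
    budget = iterations * capacity_per_iteration
    delivered, deferred = [], []
    for name, estimate in features:
        if estimate <= budget:
            delivered.append(name)
            budget -= estimate
        else:
            deferred.append(name)
    return delivered, deferred

features = [("login", 5), ("search", 8), ("reports", 13), ("export", 3)]
delivered, deferred = plan(features, iterations=2, capacity_per_iteration=10)
print(delivered)  # ['login', 'search', 'export']
print(deferred)   # ['reports']
```

Note that the lower-priority "export" still ships because it fits in the remaining capacity, while the oversized "reports" is deferred: the detailed feature set flexes so that cost and schedule can stay fixed.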

The question is, how can a contract be written to support the same approach? The answer is to move to after-the-fact contracts in which a supplier is paid for the value of the work they do. It works like this: A customer has a clearly defined business value and a target cost in mind for achieving that value. This target cost includes payments to a supplier for their contributions. The customer comes to an agreement with a supplying partner that the business value and the target cost are achievable, including the target cost for the supplier’s participation. Work proceeds without contractual guarantees that the value will be delivered or the target cost will be achieved, but both partners are committed to meet these goals.

Workers at each company use adaptive processes[2] to develop the system as a single team. They communicate intensely at the developer-customer level to make the necessary tradeoffs to achieve the value within the target cost. As working software is delivered, both supplier and customer work together using velocity charts to monitor development progress. If adjustments to the business value or the target cost structure are required, these become apparent early, when they can be addressed by limiting the feature list or extending the schedule. If this changes the business value or target cost, the parties negotiate an equitable way to share the burden or benefit.
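
Monitoring progress with velocity can be sketched in a few lines: forecast the iterations remaining from the average observed velocity and compare against the plan, so that an overrun surfaces while there is still room to trim features or extend the schedule. The numbers here are hypothetical.

```python
import math

def forecast_iterations(remaining_points, velocities):
    """Estimate iterations left from the average observed velocity."""
    avg_velocity = sum(velocities) / len(velocities)
    return math.ceil(remaining_points / avg_velocity)

observed = [18, 22, 20]        # points completed in each iteration so far
remaining = 90                 # points of prioritized features still to build
planned_iterations_left = 4

needed = forecast_iterations(remaining, observed)
print(needed)  # 5
if needed > planned_iterations_left:
    # The shortfall is visible now, early, when the partners can still
    # negotiate: limit the feature list or extend the schedule.
    print("renegotiate: trim features or extend schedule")
```

Because working software is delivered every iteration, the velocity data is real rather than estimated, which is what makes the early warning trustworthy.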

Conclusion
Trust-based partnerships are the first requirement to make after-the-fact contracts work. Partnerships are necessary to facilitate worker-level responsibility for meeting business goals, intense communication between developers and users to make optimal tradeoffs, daily builds and automated testing to facilitate a fix-it-immediately attitude, and a focus on early delivery of working software to create the feedback system critical to good development.

Companies that develop contracts allowing these values to flourish can expect to produce the same dramatically superior results in software development that these values produce in product development.

Lessons for Outsourcers
If your company outsources software development, consider the following:

1. Fixed Price Contracts
Fixed price contracts are risky. There is both a technical risk that the job can't be done for the allotted cost, and the very real risk that the selection process favors less knowledgeable suppliers. If you do manage to get a competent supplier, you can be sure the supplier will add a good margin to the cost to cover its risk. Remember that risk should be borne by the party most able to manage it, so if the project is complex and changes are likely, you should assume the risk. If the project is a wicked project, you should not even consider a fixed price contract.

If you are considering a fixed price contract, you are probably interested in transferring all responsibility to your supplier. But remember, if this results in a win-lose situation, you will not win. You are going to be committed to the supplier before the cracks begin to show, and if things go wrong, you will suffer as much, if not more, than your supplier. You may have to bail them out. They will no doubt be looking to make up for losses through change orders, so you will have to control these aggressively. That means if your project is prone to uncertainty or change, you really don't want a fixed price contract.

And finally, it is much more difficult to get what you really need under a fixed price contract. If you chose the low bidder, you probably did not get the supplier most familiar with your domain. If the bid was too low, your supplier will want to cut corners. This may mean less testing, fewer features, a clumsy user interface, or out-of-date technology. You are going to need to carefully limit user feedback to control changes and keep the price in line, which will make it more difficult to get what your users really want.

2. Traditional Control Processes
Traditional project management processes tend to emphasize scope management using requirements traceability and an authorization-based change control system. Typically cost control is provided with some variation of an earned value measurement. The first thing to realize is that all of these techniques are expensive and do not add any value to the resulting software. These practices are meant to control opportunism, and if you are concerned that your supplier might take advantage of you, they might make sense. (But try partnerships first.)

You most likely do not want to be using these practices inside your own company; they get in the way of good software development. It’s pretty well known that an iterative approach to software development, with regular user communication and feedback, is far better than the waterfall approach. However, those pesky project management practices tend to favor waterfalls. It’s a good bet that your project will be subject to change (users change their preferences, technology changes, someone forgot a critical requirement), so you want to be using adaptive processes.

3. Trust-based Partnerships
For starters, all internal development should be based on trust-based partnerships – after all, that’s why you are doing inside development in the first place! If you can’t trust someone in your own company, who can you trust?

The fastest, cheapest way to develop software with a supplier is to let their technical people make decisions based on close interaction with and regular guidance from your users. You get the best results and the happiest users this way too. This kind of relationship requires risk sharing and excellent on-going communications. In exchange for this investment, trust-based partnerships adapt well to change and uncertainty and are most likely to yield faster, better, cheaper results.

Lessons for Contractors
If your company supplies software development, consider the following:

1. Fixed Price Contracts
You owe it to your customers to educate them on the pitfalls of fixed price contracts. Make sure they understand that this will make it more difficult for you to deliver the best business value.

2. Traditional Control Processes
Don’t accept traditional control mechanisms; there are better ways. Instead, use prioritized feature sets, rapid iterations and velocity charts to monitor projects.

Never allow the customer to fix cost, schedule and features simultaneously. Preferably, you want to agree to meet cost, schedule and overall business value targets, and provide a variable feature set. If the detailed feature set is not negotiable, then at least one of the other two must be flexible.
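
The rule above is a degrees-of-freedom check, and it is simple enough to state as code. This is an illustrative sketch; the constraint names are hypothetical labels, not a real contract schema.

```python
# Sketch of the "never fix all three" rule: of cost, schedule, and the
# detailed feature set, at least one must remain a free variable.

def over_constrained(fixed):
    """fixed: the set of constraints the customer wants locked down."""
    return {"cost", "schedule", "features"} <= set(fixed)

print(over_constrained({"cost", "schedule"}))              # False: features flex
print(over_constrained({"cost", "schedule", "features"}))  # True: walk away or renegotiate
```

Preferably the detailed feature set is the free variable, with cost, schedule, and overall business value held to targets.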

Find out what is REALLY important to your customer in terms of business value and deliver that.

3. Trust-based Partnerships
Your top priority when negotiating the relationship is to assure that your development team will have constant user involvement and feedback. You can negotiate what this means and who represents the user, but if you don’t have access to users or a user proxy, you will have a difficult time delivering business value. And delivering business value must be your main objective.
____________________
Footnotes:

[1] A Wicked Problem is one in which each attempt at creating a solution changes the understanding of the problem. See “Wicked Projects” by Mary Poppendieck, Software Development Magazine, May, 2002, posted on this site under the title “Wicked Problems.”

[2] For a discussion of adaptive processes, see “Wicked Projects” by Mary Poppendieck, Software Development Magazine, May, 2002, posted on this site under the title “Wicked Problems.”

Screen Beans Art, © A Bit Better Corporation