From Chapter 2: The Mythical Man-Month
Excerpt
...our techniques of estimating are poorly developed. (page 14)
[Image adapted from "Creation" by Phillip Medhurst]
Underestimation is not a new problem. In Chapter 8, "Calling the Shot," Brooks suggests a remedy: an engineering approach to estimation using a parametric cost model. A lot of progress has been made since then. We now have sophisticated analytic cost-estimation tools like COCOMO and SEER-SEM, both commonly used to gauge proposed NASA and DoD budgets. Both use parametric models based on software size and a set of modifying parameters, and both consult substantial datasets derived from 50 years of carefully scrubbed project data. For example, COCOMO lets the user dial in parameters for the estimated lines of source code (often called SLOC), required reliability, anticipated complexity, performance constraints, and team experience. You feed in the parameter values and the tool spits out a cost.
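To make the "feed in parameters, get out a cost" shape concrete, here is a minimal sketch of the Basic COCOMO form of such a parametric model (effort = a × KSLOC^b, scaled by an effort adjustment factor). This is the textbook formula, not the calibrated tool; the 32-KSLOC example inputs are invented for illustration:

```python
# Basic COCOMO (Boehm, 1981): effort in person-months = a * KSLOC^b * EAF.
# (a, b) coefficients depend on the project mode.
COCOMO_MODES = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def basic_cocomo_effort(ksloc, mode="organic", eaf=1.0):
    """Estimate effort in person-months.

    eaf is the effort adjustment factor: the product of cost-driver
    multipliers (required reliability, complexity, team experience, ...).
    eaf = 1.0 means a nominal project.
    """
    a, b = COCOMO_MODES[mode]
    return a * ksloc ** b * eaf

# Hypothetical 32-KSLOC embedded project with pessimistic cost drivers:
effort = basic_cocomo_effort(32, mode="embedded", eaf=1.3)
print(f"{effort:.1f} person-months")
```

The real tools wrap this core in dozens of calibrated cost drivers and historical data, but the underlying mechanics are no more mysterious than this.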
I've used COCOMO and SEER-SEM -- at least, I've worked with an expert who knew how to set the parameters properly. The results were credible; they matched my rule-of-thumb estimates. And since the models used historical project data, there was a plausible basis-of-estimate.
Sadly, the use of COCOMO was mere makework. In practice, the submitted estimates were based on institutional conventions that relied on in-house cost models. In my view, these in-house models codify the underestimation that characterized earlier projects.
Why would an organization adopt an approach that leads to chronic underestimation? There are more reasons than you can shake a stick at, but they are all the rational outcome of the cultural, financial, and engineering forces native to a large government-funded bureaucracy. Here's an illustration of how those forces might work:
- A proposal manager will strive to contain cost. Government cost guidelines or cost caps must be met or the proposal is a goner.
- The proposed system will be assumed to have the same requirements as the last system. This will be perceived as cheaper and less risky.
- The separate engineering disciplines will compete for a funding wedge in a zero-sum game. A favorite will emerge that adds science value.
- A high degree of software inheritance will be assumed. Since nothing new is needed, the estimate becomes the cost of the last project.
- Since the inherited software already exists, the proposed software costs will be expected to be cheaper than the last project cost.
- The estimation process will repeat in a cost-cutting spiral until the engineering teams agree to a budget that is uncomfortable. The staff must be fed.
- Cost models will be used, but only to demonstrate that a reduced cost is credible. (Skillful use of the parameters, particularly the reuse parameter, is valuable.)
- Experience shows that approved government contracts come with a wink and a nod. The government sponsor is more likely to add budget late in the project when program success is on the line.
- The project manager keeps reserves as long as possible in the likely event of last-minute cost surprises.
- The project customers make changes that drive requirement changes. Some changes are important; some are frivolous. All must be addressed.
- Salary and overhead costs will have increased since proposal submission. The budget remains fixed, so the work must be done by a smaller workforce.
- Government cuts lead to project cuts. Then each discipline will fight to push the cuts elsewhere in the project. The management must account for the hard fact that without hardware there is no system. The choices are limited. Software and operations budgets are the best candidates.
- The software for the new project will not be the same as the last project. The platforms will be different. The new requirements will impact the design. Additional resources are needed.
- The schedule remains fixed to meet the launch date. The engineering teams do their best to make up the differences with long hours.
- As the crisis becomes apparent, reserves become available.
- Management invokes the obvious remedy: add staff.
I have little faith that better tools will lead to better estimates. The tools merely serve the institution. However, available budgets would be allocated more effectively if project management operated as if the mythical man-month were not a myth.
Easier said than done.
1. In some sense, cost and schedule are two sides of the same coin. In practice they are not: each estimate is constructed against different constraints.
2. The actual hours worked are not reported, and the actual man-months are not known.
Another observation: software behaves as a gas, so it will expand to consume its container. In most cases its container is cost and schedule. Rob Austin suggests the way to compress the gas is to deploy two development teams in competition with the same requirements; it's both faster and cheaper in the long run. (I know what you're thinking. I didn't go there with 'better' for different reasons).
The cost and schedule models, however scientific, are unfortunately misused, as you say. They have become stilted yet accepted. They offer faux rigor for the lazy to keep the BOE hounds away. The most accurate project estimates I've done were merely PERT beta distributions for each task -- gut-feel expressions by the development team of best-case, average-case, and worst-case durations. That may have yielded some self-fulfilling control because the estimators performed the work, but that's okay if the project is approved under these terms.
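For anyone unfamiliar with the arithmetic behind that three-point approach, the classic PERT beta-distribution estimate is just a weighted mean and spread per task. A minimal sketch (the example durations are made up):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT three-point (beta-distribution) task estimate.

    mean  = (O + 4M + P) / 6   -- weighted toward the most likely case
    stdev = (P - O) / 6        -- rough spread of the distribution
    """
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return mean, stdev

# A task the team guesses at 4 days best case, 6 typical, 14 worst case:
mean, stdev = pert_estimate(4, 6, 14)
print(f"expected {mean:.1f} days, stdev {stdev:.2f}")
```

Summing the per-task means (and, for independent tasks, adding the variances) gives a project-level estimate with an honest uncertainty band, which is more than many parametric tools deliver in practice.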
> two development teams
This used to be a common practice at JPL. It was the natural by-product of competing management interests with independent budgets. It's no longer common. The budgets are more constrained; the management more monolithic.
When it was a common practice, there was strong competition by the teams for management attention. My recollection was that the winning team was associated with the ascendant management cadre and not necessarily with technical merit. I don't mean to imply that the winner wasn't better technically, but that wasn't always the case.
Any ideas about how the winner of this competition might be selected primarily for technical merit?