From Chapter 2 of The Mythical Man-Month (page 16):

> When a task cannot be partitioned because of sequential constraints, the application of more effort has no effect on the schedule. The bearing of a child takes nine months, no matter how many women are assigned. Many software tasks have this characteristic because of the sequential nature of debugging.
Brooks's text is based on his experience building an operating system for the then-new IBM System/360. All new hardware; all new software. There's a strong analogy to the job of developing hardware and software for a new spacecraft. Debugging can be nasty. Is it a bug in the software? In the COTS operating system? In the latest-greatest version of the VHDL or Verilog loaded into that fancy FPGA? Some weird combination that no one ever thought about and that only shows up once in a blue moon?
Seems like there's a nasty, intractable bug in every project. The only hope of getting to the bottom of the problem is to stop the presses and start the analysis on a non-moving target. Hence, the credible claim: debugging is sequential.
The 360 team had its work cut out. They had to produce a new OS for the newly minted processors and instruction sets, a far tougher problem than the one faced by the current generation of spacecraft developers, who use established instruction sets and COTS operating systems.1 What's more, the 360 team was writing tests on keypunch cards and running them in batch mode.2
To say the least, contemporary debugging practices are vastly improved. We have interactive debugging, sophisticated make utilities, powerful configuration management tools, test harness tools, and shelves full of run-time verification tools. This affords some parallelization of debugging, but when final integration rolls around, the work is serial.
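To make "some parallelization" concrete, here is a minimal sketch of the harness idea in Python. The suite names and the pytest runner are hypothetical stand-ins, not any particular project's tooling; the point is the shape: independent unit suites fan out across workers, while the final integration run stays a single serial step.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Hypothetical, mutually independent unit-test suites; order doesn't matter,
# so they can run in parallel.
UNIT_SUITES = ["test_cmd_dictionary.py", "test_telemetry_packing.py",
               "test_fault_monitor.py"]

def run_suite(suite):
    """Run one suite in a subprocess; return (suite, passed)."""
    result = subprocess.run([sys.executable, "-m", "pytest", suite],
                            capture_output=True, text=True)
    return suite, result.returncode == 0

# Fan the unit suites out across workers; this part parallelizes.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_suite, UNIT_SUITES))

failed = [suite for suite, passed in results if not passed]
if failed:
    sys.exit(f"unit suites failed: {failed}")

# Final integration: one target, one configuration, one serial step.
sys.exit(subprocess.run([sys.executable, "-m", "pytest",
                         "test_integration.py"]).returncode)
```

Everything above the last line parallelizes; the last line does not. That's Brooks's point in miniature.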
And it's not just debugging. There are antecedent steps in the sequence that simply happen; they cannot be skipped, even if they are not in the published schedule. Here are a few examples:
- Antecedent: No planned requirements development or approval phase. Subsequent: Programmers will invent the requirements they need for code development.
- Antecedent: No planned architecture or design phase. Subsequent: The architecture will emerge as each programmer independently designs the functional pieces.
- Antecedent: No established development process for maintaining separate code branches. Subsequent: Programmers will merge their code branches onto the trunk, leading to entanglements, broken builds, and integration parties.3
- Antecedent: No strictly enforced code freeze. Subsequent: Programmers change code that is under test, compromising the test process. (A simple pre-merge gate, sketched after this list, can backstop these last two antecedents.)
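The last two antecedents suggest a mechanical backstop. Below is a minimal sketch of a pre-merge gate, assuming a make-based build and a marker file that declares a freeze; both conventions are my assumptions for illustration, not anything from Brooks or a particular project. The rule it enforces: nothing lands on the trunk during a freeze, and nothing lands at any time unless the merged result builds cleanly and passes the regression suite.

```python
import os
import subprocess
import sys

# Assumed convention: the presence of this marker file declares a code freeze.
FREEZE_FLAG = "FREEZE"

def step(*cmd):
    """Run one gate step; abort the merge if it fails."""
    if subprocess.run(list(cmd)).returncode != 0:
        sys.exit(f"gate failed at: {' '.join(cmd)}")

# 1. Enforce the freeze: code under test must not change.
if os.path.exists(FREEZE_FLAG):
    sys.exit("code freeze in effect; nothing merges to the trunk")

# 2. Rebuild from scratch so a broken build can never land on the trunk.
step("make", "clean")
step("make", "all")

# 3. Run the regression suite against the merged result.
step("make", "test")

print("gate passed; merge may proceed")
```

A gate like this doesn't remove the serial bottleneck; it just keeps the trunk in a state where the serial work of final integration and test can proceed.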
The best I&T leads will strive to ensure that the code under test is not changed. A talented colleague once put it this way: "I don't want to see any of the programmer's chocolatey little finger prints on the code we're testing."4
As a practical matter, that's a lofty goal; the necessities of the moment almost always carry the day. In my experience, releasing untested or under-tested code is commonplace. I have no direct experience developing code where human lives might be at risk (e.g., airplanes, cars, or nuclear plants), but testing was routinely shortchanged in NASA's under-funded software development culture. Even our most high-handed reviews conveniently skipped lightly over these profoundly inconvenient technical details. Our rigorous institutional process had, for the most part, merely formalized a system of looking in the obvious places and recording those findings on paper. A colleague once compared the review process to looking for a lost key under a street light, because that's where you can see. I would only add that the results would also be captured in PowerPoint and displayed to 30 colleagues.
Today's conventions for managing the serial character of software development are sufficient for today's systems. We get by. The same goes for our approach to testing. But current practice is wholly inadequate for building the large, complex, software-intensive systems imagined in movies and books.
Interestingly, I don't believe these kinds of topics are the focus of serious research. Progress will be slow.
1. The current generation of deep-space missions uses the RAD750, a radiation-hardened version of the PowerPC 750, the processor introduced in 1997 that powered the multi-colored iMacs. The most commonly used COTS OS is VxWorks.
2. IBM System/360 Operating System: Programmer's Guide to Debugging
3. An entanglement occurs when source code files are mutually dependent and inconsistent. Consistency can only be restored by repairing all files at the same time. A broken build occurs when the source code will not compile and link. An integration party occurs when the whole team must stop development and work on restoring the build.
4. "...chocolatey little finger prints" is a phrase borrowed from Stephen Harrington, a witty and respected colleague from my Cx days.
To handle this issue we develop a lot of automated unit and integration regression tests, followed by hardware regression tests. Lots of bots running every time any code hits a repo.
Automation helps a lot; I'd be the last to dismiss its value. But automation comes with a few bugaboos. Here's what I've seen...
* Automation is never complete.
* The tests are expensive to maintain through multiple iterations. Changes to code drive changes to tests. It doesn't take long to end up with a huge backlog of orphaned or broken tests.
* Automation is great for unit tests. System-level testing tends to be a lot more complex: for example, GUI tests, or tests that cross code from different development organizations.
Of course, I'm thinking about big systems (500K to 20M SLOC) with tons of legacy. I'm sure I have a bias.