Lessons in Boom & Bust, Re-engineering, and Brooks' Law Revisited
John H. Saunders, Ph.D.


What follows are three short stories about the dynamics of looking forward, that is, strategic management, and how it can be aided by system dynamics technology.


Boom and Bust

In each year from 1976 to 1982, Atari, the industry leader in computer video games, virtually doubled its revenue. During that six-year period, revenues streaked from $35 million to nearly $2 billion. But then in the period 1982-84 the company's fortunes reversed. In a single year its operating income fell from a healthy positive $300 million to nearly $600 million in the red! Atari was not alone in its comet-like appearance and demise during that period. Companies with instant name recognition in information technology during the early eighties, such as Osborne and Sinclair, and products such as VisiCalc and WordStar, are now unknown to the current generation of computer users. Much can be learned about the rise and fall of these entities by studying a phenomenon known as "Boom and Bust." Interesting, but why is this important?

In the software industry, technologies leapfrog each other in cycles as short as a few months. Understanding the factors that contribute to a product's success and eventual demise is critical to a software company's survival. It is therefore also critical to the customers who adopt the vendor's wares. Understanding these factors provides competitive advantage to both the seller and the buyer. Failing to recognize them can mean product or even corporate failure for a vendor, and significant headaches for those who adopt a technology only to see it later fail.

Microsoft recognized a real threat to its future when Apple introduced the Macintosh in 1984. To counter the Mac, Microsoft released Windows 1.0 in 1985, followed by Windows 2.0. At face value the move looked foolish: the processing power Windows required simply could not be delivered by the mainstream chips of the period, Intel's 8088 and 80286. But while Windows 1.0 and 2.0 were a drag on Microsoft's bottom line, Bill Gates and his crew understood Gordon Moore's Law of Microprocessors, which held that the transistor density of chips, and with it their memory and processing power, would double roughly every eighteen to twenty-four months. Microsoft understood that the growing capability of each new generation of Intel processors would soon catch up with the processing and memory demands of the Windows product. And so in 1990, when Windows 3.0 met the Intel 80386, a new boom cycle began. But Microsoft did not stop there. They understood the dynamics of the system portrayed in the figure below.

Figure 1. An example of a Boom & Bust System Dynamics Diagram.

The boom cycle builds for a period during which word of mouth and other factors increase demand. And while the cycle can be sustained for a time through enhanced marketing, eventually the discard rate outpaces the adoption rate, and the product is overtaken by competition and dies. So long before Windows 3.x ceased its ascent, Microsoft began a new development cycle, one that would exploit the memory and processing power of the yet unborn 80586, the chip that arrived as the Pentium. That development began under the code name Chicago, and was finally released as Windows 95.
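To make the structure in Figure 1 concrete, the fragment below is a minimal stock-and-flow sketch of a boom-and-bust product cycle, written here in Python rather than a system dynamics package. The stock of active users is fed by an adoption flow (marketing plus word of mouth) and drained by a discard flow; the product booms while adoption outruns discards and busts as the pool of potential adopters empties. All parameter values are invented for illustration.

# A minimal boom-and-bust stock-and-flow sketch (illustrative parameters only).
# Stocks: potential adopters and active users.
# Flows:  adoption (marketing plus word of mouth) and discards (wear-out / switching).

DT = 0.25                  # simulation time step, months
MONTHS = 60                # simulated horizon

potential = 1_000_000.0    # people who might still adopt the product
users = 100.0              # current active users (the installed base)

contact_rate = 3e-7        # strength of word of mouth (assumed)
marketing = 500.0          # steady adoptions per month from advertising (assumed)
avg_product_life = 18.0    # months before a user discards or switches (assumed)

history = []
for _ in range(int(MONTHS / DT)):
    adoption = marketing + contact_rate * users * potential  # word of mouth needs both stocks
    adoption = min(adoption, potential / DT)                  # cannot adopt more people than remain
    discard = users / avg_product_life                        # first-order discard flow

    potential -= adoption * DT
    users += (adoption - discard) * DT
    history.append(users)

peak_month = history.index(max(history)) * DT
print(f"Installed base peaks near month {peak_month:.0f}, then declines as discards outpace adoption")

Run with these assumed numbers, the installed base climbs for roughly two years, peaks, and then decays once the pool of new adopters is exhausted, which is the essential shape of the boom-and-bust curves discussed above.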

While many executives are aware that boom-bust cycles occur, understanding how they occur is difficult because of the many interacting components involved. And while many of those critical components are known, or are embedded in the intelligence of the organization, the challenge for a senior manager is to understand how they all fit together. That challenge is particularly daunting when viewed from the standpoint of cognitive psychology.

Cognitive psychologists have shown that there are limits to human mental processing power. Humans are also susceptible to bias in judgment, especially when surrounded by others who think alike and have similar experiences. System dynamics modeling provides a comprehensive method for examining a problem holistically and for setting priorities among system elements. It is the only decision technology that allows modelers to bring quantitative and qualitative, concrete and abstract factors together and examine their interaction over time.

Re-engineering - Merit or Mirage

Despite the widespread popularity and appeal of re-engineering efforts, industry experts have estimated that as many as 70% of these initiatives fail to provide any measurable benefit to an organization [1]. The system dynamics group at MIT, under the direction of Dr. John Sterman, sought answers to this dilemma. They first selected organizations where re-engineering efforts had both succeeded and failed, and then conducted in-depth studies in those organizations to uncover the dynamics behind re-engineering success and failure.

Sterman and his group started with Analog Devices (AD), a maker of specialty semiconductors. In the early 1990s AD was an industry leader in its niche market, and for a decade it had enjoyed a 25% annual growth rate. Despite this success, Ray Stata, the company's founder and CEO, believed its efficiency could still be improved, so in 1994 he began a Total Quality Management (TQM) effort. Some of the results follow.

 
 
                          Before TQM           After TQM
On-time Delivery          70%                  96%
Outgoing Defects          500 parts/million    53 parts/million
Average Yield             26%                  51%
Cycle Time                15 weeks             8 weeks
Stock Price               $18.75               $6.25
Earnings per Share        $0.43                -$0.28
Return on Investment      7%                   -4%

It is not difficult to see that AD's production operations improved significantly. So why didn't the bottom line improve accordingly? As an MIT graduate, Mr. Stata sought out the counsel of the system dynamics group at MIT. The group performed an extensive analysis of the company, including interviews across its depth and breadth. From this effort they created a moderately sized system dynamics simulation of the firm. The simulation had about 800 nodes and provided a dynamic picture of the interplay among operations, marketing, personnel, and other segments of the company. It also included dynamics external to the company, such as market share and market drivers.

From this the MIT group was able to formulate a rationale for why everything improved except the bottom line. While many dynamics were at work, one of the most evident to the group was the "home spun" nature of the company: AD took care of its own. Despite the new efficiencies generated by the TQM effort, all personnel, space, and administrative capability were retained, even though much of this capacity was no longer needed. Ultimately AD bit the bullet and instituted some reductions. Fortunately, growth in the market, as well as changes in its product lines, also helped fill out the unused capacity.
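The MIT model of AD ran to roughly 800 nodes; the toy Python sketch below captures only the single mechanism described above. Process improvement keeps cutting the labor content of each unit, but the freed capacity (people, space, overhead) is retained while demand grows slowly and an assumed competitive price erosion stands in for the market pressures of the period. Every figure is invented for illustration; none is an Analog Devices number.

# A toy sketch of the "improvement paradox" described above, not the MIT model.
workforce = 1_000             # headcount, held constant ("AD took care of its own")
cost_per_person = 60_000      # fully loaded annual cost, dollars (assumed)
fixed_overhead = 20_000_000   # plant, space, administration, dollars per year (assumed)
hours_available = workforce * 1_800   # productive hours per year

units_demanded = 400_000      # annual demand (assumed), growing 5% per year
price_per_unit = 250.0        # dollars (assumed), eroding 7% per year
hours_per_unit = 4.0          # direct labor content before the TQM effort

for year in range(1, 6):
    hours_per_unit *= 0.85    # process improvement cuts labor content ~15% per year
    units_demanded *= 1.05
    price_per_unit *= 0.93

    utilization = units_demanded * hours_per_unit / hours_available
    revenue = units_demanded * price_per_unit
    cost = workforce * cost_per_person + fixed_overhead      # capacity cost does not shrink
    profit = revenue - cost

    print(f"year {year}: labor hrs/unit {hours_per_unit:4.2f}  "
          f"utilization {utilization:6.1%}  profit ${profit / 1e6:6.1f}M")

Under these assumptions, labor productivity more than doubles while utilization falls toward 50% and profit shrinks year after year: the operational measures improve even as the bottom line worsens, which is roughly the pattern AD saw until capacity was trimmed and demand caught up.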

Brooks' Law Revisited

In 1975 Fred P. Brooks, a senior manager at IBM, published a book called The Mythical Man-Month. The book became an icon in the information technology community. One principle Brooks espoused was that adding manpower to a late software project makes its completion even later. That principle became known as "Brooks' Law." Anyone who has managed a late software project will understand it: when new people are added, the team finds itself preoccupied with bringing them up to speed rather than making progress on the project itself.

Tarek Abdel-Hamid, a Ph.D. student in system dynamics at MIT, sought to verify Brooks' Law. He arranged to study a large software project at NASA and built a fairly large system dynamics simulation of the software development process. It included all the sectors that normally affect software project management, including personnel, design, coding, and testing, along with the cost and schedule associated with these activities. His work demonstrated that adding personnel to a late project always made it more costly, but did not always cause it to be completed later.
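A small simulation in the spirit of, though far simpler than, Abdel-Hamid's model can show why both halves of that finding are plausible. In the Python sketch below, new hires produce nothing during an assumed ramp-up period, tie up veterans in mentoring, and add communication overhead that grows with team size; all parameters are assumptions chosen only to show the shape of the trade-off.

# An illustrative sketch of Brooks' Law dynamics, not Abdel-Hamid's actual model.
def simulate(new_hires, remaining_work=200.0):   # remaining work in person-months
    veterans = 10
    productivity = 1.0        # person-months of output per seasoned developer per month
    training_months = 4       # ramp-up period for each new hire (assumed)
    mentoring_cost = 0.5      # fraction of a veteran tied up per trainee (assumed)
    salary = 15_000           # dollars per person per month (assumed)

    month, cost = 0, 0.0
    while remaining_work > 0:
        month += 1
        team = veterans + new_hires
        cost += team * salary

        trainees = new_hires if month <= training_months else 0
        ramped = new_hires - trainees                        # hires past their ramp-up
        effective = veterans - mentoring_cost * trainees     # veterans lose time to mentoring
        effective += ramped * 0.8                            # a settled hire ~ 80% of a veteran
        overhead = max(1.0 - 0.03 * (team - veterans), 0.5)  # communication penalty for growth
        remaining_work -= max(effective, 0.0) * productivity * overhead
    return month, cost

for hires in (0, 5, 15):
    months, dollars = simulate(hires)
    print(f"{hires:2d} added developers -> done in {months:2d} months at ${dollars:,.0f}")

With these assumed numbers the baseline team finishes in 20 months; adding five people finishes a month sooner, adding fifteen finishes a month later, and either addition costs substantially more, echoing the finding that late staffing always raises cost but does not always delay delivery.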

Since that time his model has been extended to include all elements of software project management and has been coded into a Management "Flight Simulator." It provides a simulation for testing different hypotheses about the amount of manpower and other resources necessary to complete computer programming projects.

[1] Sterman, John. "The Improvement Paradox," a film produced by Pegasus Corporation, 1994.

 This article was originally published in Info Tech Talk, Vol 3 No 1, Winter 1998.
 

(c) 1997 John H. Saunders. Permission granted for use in academic environments