This chapter qualitatively discusses functions and how a system is analyzed through observation, over time, of the variables defined by the functions. In chapter 4, the quantitative tools needed to actually model and simulate a system will be provided.
3.1 Functions and Integrals of Functions
First, in system dynamics simulations, every variable used must be defined explicitly rather than implicitly. This means that a dependent variable must be isolated on the left hand side (LHS) of an equation, and all other variables used to define it placed on the right hand side (RHS). When two or more variables are related so that the value of the one dependent variable is uniquely determined when the values of the others are known, then the first variable is said to be a function of the others. For example, the order rate (OR) to be used by a proprietor for the upcoming week may be a function of the current desired inventory (DI) and actual inventory (AI):
OR(T,T+DT) = [DI(T)-AI(T)]/4 (3.1.1)
where OR(T,T+DT) is the order rate during the period T to T+DT and DI(T) is the desired inventory at time T.
The divisor 4 is a parameter which implies that the proprietor, to avoid overreacting to short term changes in sales, only orders one-fourth of the difference between desired inventory and actual inventory during this time period.
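Equation 3.1.1 can be sketched in code. This is a minimal illustration, not part of the text; the function name and the sample inventory values are our own, and the adjustment fraction of one-fourth is the text's parameter.

```python
# Hypothetical sketch of equation 3.1.1: the order rate for the coming
# period is a fraction (here one-fourth) of the gap between desired
# inventory and actual inventory.
def order_rate(desired_inventory, actual_inventory, adjustment_fraction=0.25):
    """Order rate during (T, T+DT), per equation 3.1.1."""
    return adjustment_fraction * (desired_inventory - actual_inventory)

# With a desired inventory of 400 units and 320 on hand (assumed values),
# the proprietor orders one-fourth of the 80-unit gap:
print(order_rate(400, 320))  # 20.0
```

Note that when actual inventory exceeds desired inventory the expression goes negative, which a fuller model would have to handle explicitly.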
Relationships between system components are specified through the use of functions, many of which are simple analytic expressions such as 3.1.1.
It is not the present intent to discuss functions in detail. The reader will likely have sufficient quantitative background to be familiar with most analytic functions. Many functions will be used in the exercises and examples which follow, and coursework in algebra and calculus can provide the needed technical definitions of different useful functional forms. But a broad discussion of functions and equations is appropriate, because the essence of dynamic system modeling is embedded in two types of equations.
The first type is indeed a simple functional relationship as in 3.1.1, where order rate is a function of inventory level. In system dynamics models, all rates and information auxiliaries are defined using such functions.
The second equation type specifies the summation or accumulation process. It could be called an "integral equation" because it integrates one or more variables, but in system dynamics it is instead called a "level" equation.
The form of the level equation used in system dynamics modeling is always the same. It simply equates a level at time T to what it was the last time it was calculated (at T-DT), plus what flowed in and less what flowed out since that time. Again assume DT=1. Then the value for a level called LEV is calculated as follows:
LEV(T) = LEV(T-1) + INFLOW(T-1,T) - OUTFLOW(T-1,T) (3.1.2)
where INFLOW(T-1,T) means "the inflow between T-1 and T" and similarly for the OUTFLOW, and LEV(T) means "the level at time T".
A level equation is related to the difference equation used in engineering system theory. A difference equation equivalent to expression 3.1.2 would define the rate of change of the level, instead of defining the level as the accumulation of its rate of change.
DLEV(T)/DT = INFLOW(T-DT,T) - OUTFLOW(T-DT,T). (3.1.3)
Recall that the "D" or "delta" means "change in", with DT meaning a discrete time step. Equation 3.1.3 is easily converted to a level equation like 3.1.2. As
DLEV(T) = LEV(T) - LEV(T-DT), then
LEV(T) - LEV(T-DT) = DT * [INFLOW(T-DT,T) - OUTFLOW(T-DT,T)].
But with DT=1, this becomes 3.1.2.
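The level equation 3.1.2 can be iterated directly. The sketch below is illustrative only: the inflow and outflow series and the initial value of 100 are assumed, not taken from the text.

```python
# Minimal sketch of level equation 3.1.2 with DT = 1: the level at time T
# equals its previous value plus what flowed in, less what flowed out,
# over the interval. The flow series here are illustrative assumptions.
inflows  = [10, 10, 12, 12, 8]
outflows = [ 5,  7,  7,  9, 9]

lev = 100  # LEV(0), an assumed initial value
for inflow, outflow in zip(inflows, outflows):
    lev = lev + inflow - outflow  # equation 3.1.2

print(lev)  # 115
```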
Virtually all natural processes are integrations rather than differentiations, so using the integral form for the accumulation process, instead of the difference form, seems logical.
As a digression, note that LEV, by being dependent on INFLOW and OUTFLOW, is therefore dependent on its own rate of change -- INFLOW less OUTFLOW is the rate of change of LEV per unit time. If INFLOW and OUTFLOW are exogenously provided, then equation 3.1.2 is dynamic: it exhibits accumulation but does not reflect feedback. If INFLOW and OUTFLOW are dependent on LEV, then 3.1.2 has the feedback characteristic discussed in section 2.4.
3.2 Disaggregating Equations
In most realistic systems, the value of any selected system variable can be defined explicitly as a function of other system variables. Furthermore, each system variable usually depends on a very few -- about one to five -- other variables. Even in models with 200 variables, each variable typically depends directly on only a few neighboring variables.
For example, consider an automobile -- certainly a complex system with hundreds of components. Yet the motion of any selected variable related to the automobile's mechanical operation can probably be described by knowing only a few other components. The accelerator is linked to the throttle which affects the airflow through the carburetor, which influences the fuel flow affecting the engine speed and, through the transmission, rotates the drive shaft. The drive shaft affects the differential, which affects the axles, which affect the wheels, etc. Since the wheels and the accelerator are not linked directly, there is no need to incorporate, into one equation, the motion of the wheels as a function of the movement of the accelerator. That would require simultaneously accommodating all the linkages between the two.
Instead, a model of the system can be developed by piecemeal definition of the system components between accelerator and wheels, one equation per component, with each equation a function of only a selected number of variables. This is fortunate indeed, for it means the human mind can model a complex system by building linkages, each understandable, until all linkages are complete. The total set of such linkages will be a set of equations comprising the entire model.
The ability to disaggregate a system into variables and to relate the variables through static and dynamic functions is very important. Let us try to see why.
Suppose there are four components, A, B, C, and D, connected in a simple system which has "time" as an input. Let A represent the first component in the system. B, the second component, is a function of A. C, the third component, is a function of B and A, and D, the output of the system, is a function of C and B. Suppose the forms of the functional relationships from A to D are the following:
A = 3 + SIN(TIME); B = 3A; C = 2AB; D = B(B+C) (3.2.1)
Recalling section 2.2, there are two ways to approach the analysis of this system. The traditional mathematical approach is to "solve" for the value of D as a function of the system inputs. In this case time is the only input to the system, and D can be obtained as a function of time analytically, that is, by solving all the equations 3.2.1 simultaneously:
D = B(B+C) = 3A(3A+6A^2) = 9A^2 + 18A^3
or D = 9(3+SIN(TIME))^2 + 18(3+SIN(TIME))^3 (3.2.2)
Using 3.2.2, the value of variable D is obtained directly, by providing the value for TIME. In this form, D is a direct function of time, the system input. This form of solution to the system is called the "input-output" approach. The intent is to determine the value of D, the system output, as changes to TIME, the system input, occur.
If TIME equaled 70 days, then 3.2.2 would, after some mathematics, yield D = 1,240.
The second approach to analyzing this system is to simply use the four functions of 3.2.1, and to solve each sequentially. The same value for D would result:
A = 3 + SIN(TIME) = 3 + SIN(70) = 3.94
B = 3A = 11.82
C = 2AB = 2(3.94)(11.82) = 93.1
D = B(B+C) = 11.82(11.82 + 93.1) = 1,240
This sequential method of solution we shall call the "systems" approach, as A, B, C, and D have been defined as individual components in this system.
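Both solution methods can be checked in a few lines of code. This is our own sketch: the variable names follow the text, and SIN is taken in degrees, as the text's arithmetic (SIN(70) = 0.94) implies.

```python
import math

TIME = 70  # days; SIN is evaluated in degrees, matching the text's SIN(70) = 0.94
sin_t = math.sin(math.radians(TIME))

# Systems approach: solve the four equations of 3.2.1 sequentially.
A = 3 + sin_t
B = 3 * A
C = 2 * A * B
D_systems = B * (B + C)

# Input-output approach: the single reduced equation 3.2.2.
D_io = 9 * (3 + sin_t) ** 2 + 18 * (3 + sin_t) ** 3

print(round(D_systems), round(D_io))  # 1240 1240
```

The two approaches agree exactly, since 3.2.2 was derived from 3.2.1 by substitution; the difference is only in how the computation is organized.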
The solution to this system is mathematically elementary using either the input-output or systems approach. More subtle is the philosophical mind-set behind the two solution methods.
The subtlety lies in the importance of the difference between a world with electronic computers and one without. The input-output approach required solving only one equation, while the systems approach solved four equations. A realistic system will have hundreds of variables. The systems approach then requires solving hundreds of equations at each time step to obtain a plot of a system variable over the time horizon of interest. That would be virtually impossible without the aid of modern computers.
The input-output approach, on the other hand, provides a system output variable as a function of time using only one equation. If that one equation is obtainable, then the input-output approach is the more feasible method.
The problem is that even the finest mathematicians cannot generally solve a large set of system equations -- if they include integral or differential equations -- to reduce them to one analytic equation. Consequently, complex systems, such as social systems, simply cannot be realistically analyzed mathematically. Before computers, the relationships between system inputs and outputs had to be approximated, either through intuition of decision makers, or statistical estimation, or perhaps both.
With the advent of computers, the systems approach has become a realistic alternative. It is now feasible to analyze the behavior of a system defined by hundreds of functions, on a micro-computer in a matter of minutes.
For complex human systems, not only is it more realistic to model the system through understanding of its individual linkages, but it is analytically dangerous to try to predict system behavior in total, as counterintuitive results may occur when the human mind tries to assimilate all parts of a system at once. The human mind is not suited to keep track of complex interactions simultaneously (Forrester, 1968, Sec. 3.1).
3.3 How Disaggregating a System Simplifies System Analysis
Complicated analyses, relying on quantitative methods too mathematical to be comfortably absorbed by the typical manager, are seldom accepted in the real world. A basic distrust of the analytic community can result from attempts to use such methods in areas where they really don't apply, or are too complex to be trusted by wary managers.
The system dynamics approach, by disaggregating a system into components that are easily modeled and understood, can be much more acceptable to managers familiar with graphical, rather than mathematical, communications.
We can demonstrate the point with an oversimplified, yet demonstrative model.
Consider the concept of "exponential growth", in a population, for example. There are various ways to describe this growth phenomenon. The most concise way is mathematically. "Three percent exponential growth" in a population means that starting with a population P(0) at time zero, the population at time t will be:
P(t) = P(0) * e^(.03t) (3.3.1)
This description would be fine for the technical community familiar with exponential functions, but is not usually understood by non-mathematicians. The analyst might instead describe exponential growth by analogy: "It's like what would happen to your bank account if you got three percent compound interest". Or the explanation could be graphic:
The type of growth shown in Figure 3.3.1 is easily understood, while representing the same exponential function in its mathematical form,
POP(t) = 100 e^(.03t), (3.3.2)
may not be.
Furthermore, there is a subtle, but crucial modeling point to be made. Figure 3.3.1 represents a population growth pattern that results from a process we have not yet discussed. But why does that pattern occur, that is, what process causes the exponential growth?
To model the process that results in exponential growth, we would ignore the exponential function (3.3.2), and look for the stocks and flows, that is, the system, that represents the underlying process. In this case, the population POP(T) is a stock that, at each time point, equals its last value POP(T-DT) plus the net flow (arrivals less departures) between T-DT and T. Calling this flow NETFLOW(T-DT,T), we would then need equations representing both POP and NETFLOW:
POP(T) = POP(T-DT) + NETFLOW(T-DT,T) (3.3.3A)
NETFLOW(T,T+DT) = 0.03 * POP(T) (3.3.3B)
The first of these has the simple form previously discussed for stock or level equations. The second says that each time period the net inflow is three percent of the population.
Note that both these expressions are much less complicated, mathematically, than 3.3.2. Each is quite easy to model. If one were to start with POP equal to 100 at time zero, and at each time period thereafter add NETFLOW to the previous population, results very similar to figure 3.3.1 would evolve. A model would have been built.
More complex systems have analogous results, except their mathematical forms analogous to expression 3.3.2 would not be tractable. They would be far too complicated to derive. But expressions for the system components behind the process, analogous to 3.3.3, would still be quite possible to model.
That is the reason a simulation method is useful for analyzing complex systems.
3.4 Iteration
The set of equations that define the variables in a system, if calculated, will only provide the status of the system at one instant in time. To see how the system changes over time, the set of defining equations must be calculated at sequential instants of time, and the values of the system variables observed.
The equations are calculated at the discrete DT time steps and the pattern over DT's is observed. This procedure is called iteration, or simulation.
Iteration is very similar to what a movie projector does when projecting a film. A film strip of still photographs is shown in sequence, and the movie seen on the screen seems to be a smooth progression of life -- so long as the time between the flashes of still photographs is small, that is, it has a short DT.
For modeling purposes, we need only note that we can define a system by specifying the system equations and that this set of equations holds for as long as the process proceeds. Calculating and observing the values for all system variables over time then provides knowledge about how the process being studied evolves over time.
Let us iterate the simple population model defined by equations 3.3.3 to simulate the real population process. We assume DT equals 1 to simplify the notation.
There must be an initial population P(0) (the initial point), which we assume equals 100. The rate of flow NETFLOW between time 0 and time 1, is calculated,
NETFLOW(0,1)= .03 * 100 = 3,
and the population at time 1 is obtained:
POP(1) = POP(0) + NETFLOW(0,1)
= 100 + 3 = 103.
NETFLOW(1,2) then equals .03*103 or 3.09, and
POP(2) = POP(1) + NETFLOW(1,2) = 103 + 3.09 = 106.09.
The process is simulated by continual iterations every year. After 40 years this would yield an estimated value for population of POP(40) = 326.2, quite close to the value shown at time 40 in figure 3.3.1.
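The iteration just described is a short loop in code. This sketch follows equations 3.3.3 directly, with DT = 1 year and the text's initial population of 100.

```python
# Sketch of iterating equations 3.3.3 with DT = 1 year: each year the
# net flow is three percent of the current population (3.3.3B), and the
# level then accumulates that flow (3.3.3A).
pop = 100.0  # POP(0), the initial condition from the text
for year in range(40):
    netflow = 0.03 * pop   # equation 3.3.3B
    pop = pop + netflow    # equation 3.3.3A

print(round(pop, 1))  # 326.2, matching POP(40) in the text
```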
One of the things that confuses students first encountering simulation is the relationship between the time unit for measuring a process (in this example the time unit of measure is a year) and the concept of a DT. If the DT equals the time unit of measure, then DT is 1.0. But the selection of DT is not always the same as the time unit. Often, the system needs to be iterated more often than once per time unit. For example, annual budget projections for a business (the time unit then being one year) may be best calculated in monthly units, in which case the DT would be .0833 or one twelfth of a time unit.
In fact, DT's are always some fraction of a time unit, usually 1/2, 1/4, 1/8, etc. The reason for making DT small is often to make the model results smoother over time. The smaller DT makes the system simulation more continuous and more accurate in representing the real process, which usually changes continuously. This was demonstrated earlier in exercise 2.4.1.
But care must be taken, when modeling, to adjust for the fact that the rates into and out of levels are calculated in units of flow per unit time, not flow per DT. The flow per unit time must, therefore, be multiplied by the value of DT to obtain the net flow during the DT period. In other words, the proper form for a level equation is
LEVEL(T) = LEVEL(T-DT) + DT*(FLOW(T-DT,T)), (3.3.4)
where FLOW represents the net change in the level per unit time. A level changing at a rate of 12 units per year, only changes six units in a half year (DT=.5). We will use the form of 3.3.4 for writing level equations in subsequent models.
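The DT multiplier in 3.3.4 can be demonstrated with the text's own example of a level changing at 12 units per year. The helper function name here is our own.

```python
# Sketch of level equation 3.3.4: FLOW is expressed per unit time, so it
# must be multiplied by DT at each iteration.
def iterate_level(level, flow_per_unit_time, dt, steps):
    """Advance a level by equation 3.3.4 for the given number of DT steps."""
    for _ in range(steps):
        level = level + dt * flow_per_unit_time
    return level

# A level changing at 12 units per year, iterated with DT = 0.5:
# one step covers half a year, so the level gains only 6 units.
print(iterate_level(100.0, 12.0, 0.5, 1))  # 106.0
# Two steps cover the full year and recover the 12-unit annual change.
print(iterate_level(100.0, 12.0, 0.5, 2))  # 112.0
```

Forgetting the DT multiplier is a common modeling error: with DT = 0.5 the level would then change 24 units per year instead of 12.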
3.5 System States
The concept of "system state" falls out naturally when a model simulation is iterated from time zero to the end of a selected time horizon. In fact, states define the system.
The first iteration cannot be accomplished without specifying the "initial conditions" for the system being modeled. This means every level must be given an initial value. Since the current value of a level depends on its last value, the iteration at time 1 (assume DT=1 again) could not occur unless the value of all levels at time 0 were known. Initial values for rate and auxiliary equations need not be specified, since they are directly determined from the values of levels.
So one type of system state is the initial state, or "initial condition".
The second system state, referred to as the "transient state", describes the system during the dynamic periods when the system is transforming from its initial condition to its final equilibrium, given there is one. This transient state is usually difficult to treat analytically ... the mathematics become too complex for most realistic systems.
If the simulation is allowed to run, and if the system being studied is inherently stable (meaning it will not grow or oscillate without bound), then eventually the system will settle into its "steady state". This is the state most often studied by the analytic community. It is mathematically appealing -- the equations defining it at this point are static, and can be solved simultaneously. Exercise 2.4.1 discussed this steady state "equilibrium".
Steady state does not necessarily mean the system variables are constant. They may be changing for one of two reasons. First, the system's input parameters may be varying in a fixed pattern, perhaps sinusoidally, as in an annual sales cycle. Or the inputs may be varying randomly, due to unknown causes. In such cases, steady state means the average value of the system variables has become constant. This is the case in many statistical analyses of systems. For example, in queuing theory, the analysis is normally concerned with steady state results, under stochastic conditions of random arrivals and service times, and not with the transient dynamics preceding steady state.
Most policy problems, however, must deal with continually changing, transient conditions. Fortunately, the convenience and power of modern computers have made it possible to conduct rapid "what-if" explorations of transient state problems.
3.6 Lags or Delays
A system is dynamic because there are delays in system variables. The level in equation 3.3.4, for example, used its own (delayed) previous value to obtain a current value. There are other types of delays.
We therefore need to introduce the important but sometimes confusing topic of lagged, or delayed, variables. For now, the discussion is qualitative. We will return to the quantitative mechanics of modeling delays in chapter 4.
Accumulation is closely associated with the concepts of lags or delays in a system. A delay means there is a time delay between a variable changing, and the effect of that change becoming evident in other variables.
There are many reasons to model delays in a system. Some are natural results, like the death rate of a population being a delay of the birth rate. Others represent perception lags. Management's impression of the sales rate may lag the actual sales rate by several weeks. Other delays represent deliberate actions. Management may wait to adjust desired inventory levels until average sales show a definite increase, and average sales is a lag in that past values of sales must be retained to derive the average.
Delays in a system occur because a variable has been retained in a level or a series of levels for some time period (the delay time) before being used in the definition of another variable.
In the most elementary sense, a delay occurs in a system when material flows into a level, and accumulates there for a while before flowing out. If there is only one flow in, and one flow out of a level, then all that goes in, will eventually come out some time later, but it will be delayed, by at least one DT, and usually much more.
A set of levels can be used in series to delay the first effects of the delayed variable further. If three levels are in series, so that the entering flow must accumulate in the first level, then flow to the second, then the third, before the outflow emerges from the third level, then it is logical that the outflow will not begin as soon as it would if there was but one level.
The exact way the delayed value of a variable is related to the value of the variable itself depends on how one defines the equations for the outflows of the levels accumulating the variable being delayed, and on how many levels there are between the inflow and outflow. If one level is used, and the outflow is defined to be equal to one-ninth of the value in the level, then the delay will take a form called a first order delay, with an average delay of nine time periods. If three levels are in series, and each level has an outflow rate equal to one-third of the value of the level it flows out of, then the delayed variable (the flow out of the third level) will take a form called a third order delay, again with an average delay of nine time periods. If nine levels are in series, each with an outflow equal to the entire level it flows out of, then the delayed variable will take a form called a "boxcar delay", and the delayed variable will have exactly the same shape as the variable itself, but occur nine time periods later -- so long as the iteration time (the DT) is equal to one time period. (The reader might confirm this through intuition, or by modeling the nine level flow and iterating over ten time periods or so). Figure 3.6.1 demonstrates the delayed outflow (dotted lines) relative to a step increase in the inflow (solid lines).
Fig 3.6.1 HERE
The boxcar delay (so named because its levels are lined up like boxcars on a train) has one feature that may aid the modeler -- so long as DT=1, there is no "mixing" of materials into homogeneity. Usually, levels "mix" the new inflow with the contents already in the level, so there is no way to differentiate between what has recently entered, and what was already there. In a boxcar delay, everything in a level flows out before the new inflow arrives, and the connected levels maintain the differentiation. This would be useful, for example, in modeling a FIFO (first in, first out) inventory system, or keeping track of ships in a fleet by their year of launching -- there would be a level for one year old ships, which after a year flowed into the level for two year old ships, and so on.
Fortunately, the programming language used in system dynamics provides functions that produce delays. A detailed discussion of them is provided in programming manuals, to which the reader should refer for details. So that we may use delays in the exercises of chapter 6, a brief introduction to the programming mechanics of delays is provided in section 4.6.
Delays, and their length, have major implications for the control of a process. Lengthy delays, like the years it may take for a tax law change to affect investments, mean that actions taken will take a long time to cause changes, and once changes start, may take decades for their effects to be fully felt.
Policies to accelerate these effects may cause instability. The captain of a large ship approaching a pier will know that reversing the ship's engines to slow the ship can take a minute or so, and he must give the backing order well in advance. Giving a very powerful backing order too late may result in hitting the pier, or not reaching it at all on this attempt. The analogy applies to national policy, where delays may be in years, and frequent, strong adjustments may cause instability, rather than avoid it.