The KISS Principle - Keep It Simple, Stupid!
Anyone involved in simulation will have almost certainly encountered this little nugget of wisdom. In a simulation context, it warns the modeler against adding more detail to a simulation than is necessary to meet the simulation's objectives. It is excellent advice. Unfortunately, there are far too many simulation modelers, academics and consultants included, who keep their models simple but forget that the term is relative: any simulation that is too simple to meet its objectives will yield inaccurate results and will be of limited usefulness.
For this reason, rather than promote the KISS principle, I prefer a quotation that is attributed to Albert Einstein: Keep it simple. As simple as possible. But no simpler!
(There are a number of different versions of this quotation, so please don't tell me that I have it wrong. This version suits my purposes just fine!)
In any simulation project, there must be a correspondence between the simulation's objectives and the quality of its statistical output.
Simulation objectives drive the entire simulation study. Without objectives, any simulation is an exercise in futility. Q. Is the simulation accurate? A. Compared to what? Q. Is the simulation valid? A. Define "valid"? Objectives are statements that ask questions of the completed simulation. A successful simulation study can answer all of its objectives.
For example, say that your simulation has the objective (amongst others): "Determine whether the proposed system will achieve its target mean hourly throughput." Given this objective, the simulation must produce a sufficiently accurate mean hourly throughput statistic. The more inaccurate this statistic, the more difficult it becomes to determine whether the proposed system will meet its throughput target. Consequently, the simulation must be designed to maintain the accuracy of this statistic.
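To make this accuracy requirement concrete, here is a minimal sketch of how one might judge whether a simulated mean hourly throughput can actually answer the objective: run several independent replications, compute a confidence interval for the mean, and compare it against the target. The replication figures and the target are made-up illustrative numbers, not data from any real model.

```python
import statistics

# Hypothetical mean hourly throughput from 10 independent replications
# (illustrative numbers only).
replications = [118.2, 121.5, 119.8, 120.9, 117.6,
                122.3, 119.1, 120.4, 118.8, 121.0]
target = 120.0  # target mean hourly throughput (items/hour)

mean = statistics.mean(replications)
# Standard error of the mean across replications
sem = statistics.stdev(replications) / len(replications) ** 0.5

# 95% half-width using the t critical value t(9, 0.975) ≈ 2.262
half_width = 2.262 * sem

lo, hi = mean - half_width, mean + half_width
print(f"mean = {mean:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")

if lo > target:
    print("Simulation supports: target throughput achieved")
elif hi < target:
    print("Simulation supports: target throughput NOT achieved")
else:
    print("Inconclusive: the interval straddles the target; "
          "more replications, or a more accurate model, are needed")
```

With these particular numbers the interval straddles the target, which is exactly the situation the objective cannot tolerate: an insufficiently accurate statistic leaves the question unanswered.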
(I'd like to make a distinction between the simulation's objectives and those of the system being modeled. For instance, with the objective stated above, say our simulation determines that the system cannot meet target production. Well, the simulation has met its objective - determining whether the system will work or not - but the same probably cannot be said for the system itself.)
The quality of the statistical output is governed by a number of factors, but primarily by the simulation's code quality, scope, and level-of-detail. We're going to focus on the latter two factors, since a buggy simulation will produce poor quality statistics no matter how well its scope and level-of-detail are chosen.
The scope defines which parts of the system will be included in the simulation, and which parts will not. Given that we want to keep the model as simple as possible - but no simpler - we'll omit as much as possible, without throwing the baby out with the bathwater. Our criterion will be: if I omit this element, will it compromise the simulation's statistical output? If the answer is yes, then we must include that element. If the answer is no, then we must make a modeling assumption and omit it. If we're not sure, then, erring on the side of caution, we should include it. (These are, admittedly, subjective decisions and two different simulation modelers will come up with two different sets of answers. This is why it is essential that our decisions be reviewed by experts familiar with the system we're modeling.) More on modeling assumptions in a moment.
(I'd also like to make a distinction between the scope of the simulation, and the scope of any projects related to the system under study. The two are not necessarily the same. For example, let's say that a material handling supplier is contracted to modify part of a conveyor system at a car plant. They may even be asked to simulate their modifications to demonstrate that the finished system functions as expected! They may be tempted to just model the changes encompassed within the scope of their engineering contract. However, in general, the impact of the changes upon the remainder of the system should also be examined, and so there are compelling reasons for the scope of the simulation to differ from the associated engineering contract.)
For the elements that we've chosen to include in the scope, we need to determine in how much detail we will model each element. Once again, we do not include more detail than we need to, but we do not skimp by leaving out details that will compromise our statistics. Whichever level-of-detail we decide upon, we must make modeling assumptions to back up our decisions - and we must have those decisions reviewed by experts more familiar with their implications.
A modeling assumption is a statement that exposes a known difference between the simulation model and the system that it represents. Each statement is expressed so that it can be evaluated as being either true or false. For example, if we have decided not to include a sub-system governing the availability of a certain resource, say a cardboard box supply sub-system, we may make the assumption "There is an unlimited supply of cardboard boxes at the product packing station." If, in fact, there are frequent cardboard box supply problems, then we may need to model this sub-system. If we never, or rarely, run out of cardboard boxes - or if we're deliberately removing the effect of cardboard box supply problems to determine the impact of other problems - then the assumption is valid and we can safely omit this sub-system.
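One way to check whether such an assumption bites is to compare a model that embodies it against one that does not. The toy packing-station sketch below - with entirely made-up demand and delivery figures - contrasts unlimited boxes against a periodic box delivery, and shows that the assumption is harmless when deliveries comfortably cover demand, but distorts the throughput statistic badly when they don't:

```python
import random

# Toy packing-station model for sanity-checking the "unlimited cardboard
# boxes" assumption. All numbers are illustrative, not from a real system.
def mean_hourly_throughput(hours=1000, box_stock=None, resupply_every=4, seed=42):
    """box_stock=None models the assumption (boxes never run out);
    otherwise a delivery tops the stock up to box_stock every
    resupply_every hours."""
    rng = random.Random(seed)
    boxes = float("inf") if box_stock is None else 0
    packed = 0
    for hour in range(hours):
        if box_stock is not None and hour % resupply_every == 0:
            boxes = box_stock                   # periodic box delivery
        demand = rng.randint(90, 100)           # items to pack this hour
        done = min(demand, boxes)               # each item consumes one box
        packed += done
        boxes -= done
    return packed / hours

assumed   = mean_hourly_throughput()                # unlimited boxes
plentiful = mean_hourly_throughput(box_stock=500)   # deliveries always suffice
scarce    = mean_hourly_throughput(box_stock=300)   # frequent shortages
print(assumed, plentiful, scarce)
```

Under plentiful deliveries the result is identical to the unlimited-supply model, so the assumption is safe; under scarce deliveries the throughput statistic is dominated by box shortages, and omitting the sub-system would compromise the output.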
(I have often seen the term modeling assumption misused. A local simulation consulting firm frequently issues simulation specifications that refer to modeling assumptions such as "The cycle time of the packing machine is 5 minutes." Most simulation models can make this item a parameter so that, no matter what the cycle time may be, the simulation will always be valid. In my view, such data-related statements are not modeling assumptions.)
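The distinction can be sketched in a few lines. In this hypothetical parameterization (the names and numbers are mine, purely for illustration), the cycle time is input data: change the parameter and the model remains valid without touching any modeling logic.

```python
from dataclasses import dataclass

# Hypothetical parameterization: the packing machine's cycle time is input
# data, not a modeling assumption. Changing it requires no re-modeling.
@dataclass
class PackingMachineParams:
    cycle_time_min: float = 5.0   # data: taken from a spec or measurement
    num_stations: int = 1

def hourly_capacity(p: PackingMachineParams) -> float:
    """Theoretical packing capacity, in items per hour."""
    return 60.0 / p.cycle_time_min * p.num_stations

print(hourly_capacity(PackingMachineParams()))                    # → 12.0
print(hourly_capacity(PackingMachineParams(cycle_time_min=4.0)))  # → 15.0
```

A modeling assumption, by contrast, would be something like "cycle time is constant, with no breakdowns" - a statement about what the model deliberately leaves out, not a value that can simply be re-entered.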
So far, so clear. I hope!
Sadly, many of the simulation models that I come across fail these fundamental modeling rules. Why? Time to get controversial...
The primary causes of overly simplistic simulation models, in my humble opinion, are twofold:
- Simulation packages that impose the scope and level-of-detail upon the modeler.
- A long-standing revulsion by the simulation community to the whole topic of detail in simulation models.
In the case of item 1, the problem is that the vast majority of computer simulation packages provide template modeling elements. Some packages, Witness among them, provide pre-defined elements with a high degree of abstraction, so that they can be used to model - so their makers claim - anything. However, the more detail that needs to be included, the more lateral thinking you need to employ and the harder it becomes to coerce the package into creating a sufficiently accurate model.
At the other end of the abstraction spectrum are packages that provide more concrete, specialist elements, such as the material handling templates provided by AutoMod. This type of system does not handle abstraction well and almost requires a high level of detail. Alas, if there are no templates that do what you want, then you must shoe-horn those provided into the behavior that you seek.
In either case, if your simulation weapon of choice does not easily represent the systems that you are trying to model, at the level-of-detail that you require, then it becomes all too tempting - and convenient - to skimp on the detail. This happens in practice far more than most simulation modelers may care to admit.
But there's also a more general reluctance to tackle highly detailed simulations at all! This brings me to reason number 2. The KISS principle has been (mis-)applied to such an extent that you can see novice and experienced simulators alike visibly reel when confronted with a detailed simulation. Academics seem to be particularly affronted.
Whenever I've demonstrated such simulations at conferences, I frequently get comments like those below - often accompanied by a knowing smirk:
- But it will run too slowly!
- How much data did you require?
- How long did it take you to do all that?
- You have too much time on your hands, don't you?
It seems no-one ever asks "Was the simulation study successful?". That's what really matters at the end of the day.
For my day job, I get to oversee simulation studies of material handling systems for, primarily, the automotive industry. I can tell you that I have seen just about every trick in the book.
One large simulation product vendor's consultants routinely offer low-detail, abstract simulation models whatever the requirements. As such, they tend to approve detailed conveyor layouts that do not, in fact, work. Could it be because their package cannot handle the detail? Surely not!
Actually, I could spend months giving examples of how low-detail, abstract simulations failed miserably in meeting their objectives, resulting in the installation and commissioning of poorly working conveyor systems. This practice gives simulation an extremely bad name!
Now, don't get me wrong. I am not advocating that all simulation models be intricate behemoths built with the elegant complexity of a swiss watch. Far from it! Keep it simple. As simple as possible. But no simpler!