Adding extra detail to a general model can make it more specific: it then applies only to those cases where that particular detail holds. The reverse, however, is not true: simplifying a model will not make it more general; it merely makes it easier to imagine that it is more general.
To see why this is, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it is true at only one point (and approximately right only in a small region around that point). It is much less general than the original, because it holds for far fewer cases.
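A minimal sketch makes this concrete (the particular equation and numbers here are assumptions chosen for illustration, not from the text):

```python
# Assumed illustration: an accurate linear model y = 2x + 1, "simplified"
# by dropping the variable and keeping only its value at x = 1, i.e. y = 3.
# The constant is exact at x = 1 and its error grows with distance from it.

def linear(x):
    return 2 * x + 1   # the original, more general model

def constant(x):
    return 3           # the simplified model: linear(1) with x dropped

for x in [0, 1, 2, 5]:
    err = abs(linear(x) - constant(x))
    print(f"x={x}: linear={linear(x)}, constant={constant(x)}, error={err}")
```

The simpler model agrees with the data at exactly one point; everywhere else its error grows, so it is valid for fewer cases, not more.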
This is not very surprising: a claim that a model has general validity is a very strong claim, and it is unlikely to be achieved by armchair reflection or by merely leaving out most of the observed processes.
Only under some special conditions does simplification result in greater generality:
1. When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when some averaging process smooths out a lot of random deviations)
2. When what is simplified away happens to be constant across all the situations considered (e.g. gravitational acceleration at the Earth's surface is always about 9.8 m/s^2 downwards)
3. When you hugely loosen your criteria for being approximately right as you simplify (e.g. moving from a requirement that results match some concrete data to using the model as a vague analogy for what is happening)
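The first case above, where averaging washes out random deviations, can be sketched as follows; the specific trend, noise level, and sample size are assumptions for illustration:

```python
# Assumed illustration: a "full" model with a random deviation in it, and a
# simplified model with the noise left out. Because the noise has mean zero
# and averages out, the simpler model stays accurate for the averaged outcome.
import random

random.seed(0)  # fixed seed so the run is reproducible

def noisy_process(x):
    # full model: a linear trend plus a zero-mean random deviation
    return 2 * x + random.gauss(0, 1)

def simple_model(x):
    # simplified model: the trend only, with the noise simplified away
    return 2 * x

n = 10_000
avg = sum(noisy_process(3) for _ in range(n)) / n
print(avg, simple_model(3))  # the average outcome is close to the simple model's 6
```

Here the simplification is harmless only because the omitted detail is genuinely irrelevant to the averaged outcome of interest; had the deviations been biased or state-dependent, dropping them would have cost accuracy.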
In other cases, where you compare like with like (i.e. you do not move the goalposts, as in case (3) above), simplification preserves generality only if you happen to know what can safely be simplified away.
Why people think that simplification might lead to generality is something of a mystery. Maybe they assume that the universe must ultimately obey simple laws, so that simplification is the right direction (but even if this were true, we would not know which way to simplify safely). Maybe they are really thinking of the opposite direction: slowly becoming more accurate by making the model mirror its target more closely. Maybe it is a justification for laziness, an excuse for avoiding messy, complicated models. Maybe they simply associate simple models with physics. Maybe they just hope their simple model is more general.
Edmonds, B. (2018) A bad assumption: a simpler model is more general. Review of Artificial Societies and Social Simulation, 28th August 2018. https://roasss.wordpress.com/2018/08/28/be-2/