
A bad assumption: a simpler model is more general

By Bruce Edmonds


If one adds some extra detail to a general model it becomes more specific — that is, it then only applies to those cases where that particular detail holds. However, the reverse is not true: simplifying a model will not make it more general — it just makes it easier to imagine that it is more general.

To see why, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it will be exactly true at only one point (and approximately right only in a small region around that point) — it is much less general than the original, because it is true for far fewer cases.
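
This can be made concrete with a small sketch (the data, equation and tolerance here are made up purely for illustration, not taken from the post): count how many observed cases each model fits to within a fixed tolerance.

```python
# Illustration: dropping the variable from an accurate linear model.
# The "observations", the equation y = 2x + 1 and the tolerance 0.5
# are hypothetical numbers chosen for demonstration only.

def full_model(x):
    return 2.0 * x + 1.0          # the accurate linear equation

def simplified_model(x):
    return 1.0                    # variable eliminated: just the constant

tolerance = 0.5
xs = [i / 10 for i in range(-50, 51)]          # 101 test points
data = [(x, 2.0 * x + 1.0) for x in xs]        # cases the full model describes

def cases_fit(model):
    """Number of observed cases the model matches to within tolerance."""
    return sum(abs(model(x) - y) <= tolerance for x, y in data)

print(cases_fit(full_model))        # fits every case
print(cases_fit(simplified_model))  # fits only the few points near x = 0
```

The simpler model is not more general: it agrees with the data only in a narrow band around the single point where the constant happens to be right.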

This is not very surprising — a claim that a model has general validity is a very strong claim, and it is unlikely to be established by armchair reflection or by merely leaving out most of the observed processes.

Only under some special conditions does simplification result in greater generality:

  1. When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when some averaging process smooths over a lot of random deviations)
  2. When what is simplified away happens to be constant for all the situations considered (e.g. gravitational acceleration at the Earth's surface is always about 9.8 m/s^2 downwards)
  3. When you greatly loosen your criteria for being approximately right as you simplify (e.g. moving from requiring that results match some concrete data to using the model as a vague analogy for what is happening)
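
The first of these special conditions can be sketched in a few lines (a hypothetical toy, with made-up numbers): when the omitted detail is pure random deviation, it averages out, so the simplified model still gets the outcome of interest right.

```python
# Sketch of special condition 1: detail that merely averages out can be
# simplified away safely. All quantities here are illustrative only.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def detailed_outcome(n_agents=10_000):
    # each agent contributes a baseline of 1.0 plus a random deviation
    total = sum(1.0 + random.uniform(-0.5, 0.5) for _ in range(n_agents))
    return total / n_agents

def simplified_outcome():
    # the deviations are ignored entirely
    return 1.0

# the two outcomes agree closely, because the deviations cancel on average
print(abs(detailed_outcome() - simplified_outcome()))
```

Note that this only works because the deviations are independent and centred on zero — exactly the kind of thing one has to *know* before simplifying, rather than assume.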

In other cases, where you compare like with like (i.e. you do not move the goalposts, as in (3) above), simplification only works if you happen to know what can safely be simplified away.

Why people think that simplification might lead to generality is something of a mystery. Maybe they assume that the universe must ultimately obey simple laws, so that simplification is the right direction (though even if this were true, we would not know which way to simplify safely). Maybe they are really thinking about the other direction — slowly becoming more accurate by making the model mirror the target more closely. Maybe it is just a justification for laziness, an excuse for avoiding messy, complicated models. Maybe they simply associate simple models with physics. Maybe they just hope their simple model is more general.



Edmonds, B. (2018) A bad assumption: a simpler model is more general. Review of Artificial Societies and Social Simulation, 28th August 2018. https://roasss.wordpress.com/2018/08/28/be-2/

The “Formalist Fallacy”

By Bruce Edmonds


This is the tendency to believe theories more if they are formalised (e.g. as sets of mathematical equations or computer simulations).

This can simply be an effect of Kuhn’s “Theoretical Spectacles” (1962): because we can clearly see how a complex mechanism might result in some particular outcomes (thanks to the formal model), we project this onto the world. That is, we fit our perception of some part of the world into the conception illustrated by the model. This is the opposite of how science is supposed to work, where the model should be adjusted (or rejected) in light of the evidence.

Another reason for more readily accepting theories expressed in terms of mathematics is that maths has status. Mathematical models used to be the only practical formal technique, which is why science became associated with maths. Thus a paper is much more likely to be accepted by many journals if it is expressed mathematically, regardless of whether the formalism is used to prove or calculate anything.

If an idea is expressed informally then we are freer to express doubt, as we have an instinctive sense of how slippery natural-language statements can be. We know that humans are lazy and so tend to believe their own ideas unless pretty well forced to change (e.g. by evidence). Making ideas precise should make them easier to disprove (as in Popper 1963), but this is only the case if the mapping between the model and what it refers to is also precise. Otherwise one is free to imagine how a model could apply, giving the illusion of generality.

For example, Eckhart Arnold (2008) shows, in detail, how game-theoretical models based around the ‘Prisoner’s Dilemma’ (e.g. Axelrod 1984) fail to have empirical relevance. Other abstract models that have had many citations but do not seem to connect well to evidence include Schelling (1971), Hegselmann & Krause (2002) and Deffuant et al. (2002). Each of these is simple and formal, but has interesting outcomes. As a result they seem irresistible to other researchers, attracting many citations and much influence despite having no direct modelling relation with the observed world. This contrasts with modelling papers that compare simulated and real-world data (Chattoe-Brown 2018).
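
To see why such models are so seductive, note how little it takes to write one down. A minimal sketch of the one-shot Prisoner’s Dilemma, using the standard payoff values Axelrod (1984) popularised (T=5, R=3, P=1, S=0), fits in a few lines — and its “interesting outcome”, that defection dominates, falls straight out:

```python
# One-shot Prisoner's Dilemma with the standard Axelrod (1984) payoffs:
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
# PAYOFF[(my_move, their_move)] = (my_payoff, their_payoff)
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_reply(opponent_move):
    # pick the move that maximises my own payoff against a fixed opponent move
    return max("CD", key=lambda m: PAYOFF[(m, opponent_move)][0])

# whatever the opponent does, defecting pays more
print(best_reply("C"), best_reply("D"))
```

The formal elegance is exactly the point: nothing in these four payoff numbers establishes any mapping to an observed social situation, which is where the empirical relevance has to come from.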

Do not mistake me – I think formalising ideas is very useful. It makes it possible to share ideas without error or reinterpretation, allowing a community of researchers to critique, improve, check and apply them (Edmonds 2000). It should also be easier to check whether they actually work – for example, whether they predict some unknown but measurable aspects of an observed system. It is just that formalism, of itself, does not make ideas more likely to be true (or the resulting models useful for anything that reliably relates to the observed world) – but we are more likely to think they are, due to our tendency to project what we clearly understand.

References

Arnold, E. (2008). Explaining altruism: A simulation-based approach and its limits (Vol. 11). PhD Thesis. Walter de Gruyter. http://www.phil-fak.uni-duesseldorf.de/fileadmin/Redaktion/Institute/Philosophie/Theoretische_Philosophie/Allgemein/Hilfskraefte/Explaining_Altruism-colored_figures.pdf

Axelrod, R. (1984). The Evolution of Cooperation. Basic Books.

Chattoe-Brown, E. (2018) What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table? Review of Artificial Societies and Social Simulation, 11th June 2018. https://roasss.wordpress.com/2018/06/11/ecb/

Deffuant, G., Amblard, F., Weisbuch, G. and Faure, T. (2002) How can extremism prevail? A study based on the relative agreement interaction model. Journal of Artificial Societies and Social Simulation 5(4), 1. http://jasss.soc.surrey.ac.uk/5/4/1.html

Edmonds, B. (2000) The Purpose and Place of Formal Systems in the Development of Science, CPM Report 00-75, MMU, UK. http://cfpm.org/cpmrep75.html

Hegselmann, R. and Krause, U. (2002). Opinion Dynamics and Bounded Confidence Models, Analysis and Simulation. Journal of Artificial Societies and Social Simulation, 5(3), 2. http://jasss.soc.surrey.ac.uk/5/3/2.html

Kuhn, T.S. (1962) The Structure of Scientific Revolutions. University of Chicago Press.

Popper, K. (1963). Conjectures and refutations: the growth of scientific knowledge. London: Routledge.

Schelling, T. C. (1971). Dynamic models of segregation. Journal of mathematical sociology, 1(2), 143-186.


Edmonds, B. (2018) The “Formalist Fallacy”. Review of Artificial Societies and Social Simulation, 20th July 2018. https://roasss.wordpress.com/2018/07/20/be/