A bad assumption: a simpler model is more general

By Bruce Edmonds

If one adds extra detail to a general model it becomes more specific: it then applies only to those cases where that particular detail holds. However, the reverse is not true: simplifying a model does not make it more general; it merely makes it easier to imagine that it might be.

To see why, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it is exactly true at only one point (and approximately right only in a small region around that point). It is much less general than the original, because it holds for far fewer cases.
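
As a concrete illustration (with invented numbers), take the line y = 2x + 1, which is accurate across its whole domain. "Simplifying" it to the constant y = 5 leaves it exactly right only at x = 2, and approximately right only for x very close to 2 (if we require an error below 0.2, only for |x - 2| < 0.1). Everywhere else it is simply wrong: the simpler model covers far fewer cases than the one it replaced.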

This is not very surprising: a claim that a model has general validity is a very strong claim, and it is unlikely to be achieved by armchair reflection or by merely leaving out most of the observed processes.

Only under some special conditions does simplification result in greater generality:

  1. When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when there is some averaging process over a lot of random deviations; see the sketch after this list)
  2. When what is simplified away happens to be constant for all the situations considered (e.g. the acceleration due to gravity is always 9.8 m/s^2 downwards)
  3. When you hugely loosen your criteria for being approximately right as you simplify (e.g. move from a requirement that results match some concrete data to using the model as a vague analogy for what is happening)
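
To illustrate the first condition, here is a minimal sketch in Python (the numbers and the two models are invented placeholders, not taken from any real study). A "detailed" model gives each of many agents an independent random deviation around a common response; the "simplified" model keeps only the common response. Because the deviations average out, both models give nearly the same aggregate outcome, so in this special case the simplification loses little generality at the aggregate level.

    import random

    random.seed(1)

    N = 10_000           # number of agents (illustrative)
    base_response = 0.3  # common response to some driver (illustrative)

    def detailed_model(driver):
        """Each agent responds with the common response plus an
        independent random deviation; return the mean outcome."""
        outcomes = [base_response * driver + random.gauss(0, 0.5)
                    for _ in range(N)]
        return sum(outcomes) / N

    def simplified_model(driver):
        """Ignore the individual deviations entirely."""
        return base_response * driver

    for driver in (1.0, 5.0, 10.0):
        print(driver, round(detailed_model(driver), 3), simplified_model(driver))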

In other cases, where you compare like with like (i.e. you do not move the goalposts, as in (3) above), simplification only preserves generality if you happen to know what can be safely simplified away.

Why people think that simplification might lead to generality is something of a mystery. Maybe they assume that the universe must ultimately obey simple laws, so that simplification is the right direction (but, of course, even if this were true, we would not know which way to simplify safely). Maybe they are really thinking about the other direction, slowly becoming more accurate by making the model mirror its target more closely. Maybe it is just a justification for laziness, an excuse for avoiding messy, complicated models. Maybe they just associate simple models with physics. Maybe they just hope their simple model is more general.

References

Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822.

Edmonds, B. (2007) Simplicity is Not Truth-Indicative. In Gershenson, C. et al. (eds.) Philosophy and Complexity. World Scientific, 65-80.

Edmonds, B. (2017) Different Modelling Purposes. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 39-58.

Edmonds, B. and Moss, S. (2005) From KISS to KIDS – an ‘anti-simplistic’ modelling approach. In P. Davidsson et al. (Eds.): Multi Agent Based Simulation 2004. Springer, Lecture Notes in Artificial Intelligence, 3415:130–144.


Edmonds, B. (2018) A bad assumption: a simpler model is more general. Review of Artificial Societies and Social Simulation, 28th August 2018. https://rofasss.org/2018/08/28/be-2/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Continuous model development: a plea for persistent virtual worlds

By Mike Bithell

Consider the following scenario:

A policy maker has a new idea and wishes to know what might be the effect of implementing the associated policy or regulation. They ask an agent-based modeller for help. The modeller replies that the situation looks interesting. They will start a project to develop a new model from scratch and it will take three years. The policy maker replies they want the results tomorrow afternoon. On being informed that this is not possible (or that the model will of necessity be bad) the policy maker looks elsewhere.

Clearly this will not do. Yet it seems, at present, that every new problem leads to a new “hero” ABM being developed from the ground up. I would like to argue that for practical policy problems we need a different approach: one in which persistent models are developed that outlast the life of individual research projects, and are continuously developed, updated and challenged against the kinds of multiple data streams that are now becoming available in the social realm.

By way of comparison, consider the case of global weather and climate models. These are large models developed over many years. They typically run to hundreds of thousands of lines of code and are difficult for any single individual to comprehend fully. Their history goes back to the early 20th century, when Richardson made the first numerical weather forecast for Europe, doing all the calculations by hand. Despite the forecast being incorrect (a better understanding of how to set up initial conditions was needed) he was not deterred: his vision of future forecasts involved a large room full of “computers” (i.e. people), each calculating the numerics for their part of the globe and pooling the results to enable forecasting in real time (Richardson 1922). With the advent of digital computing in the 1950s these models began to be developed systematically, and their skill at representing the weather and climate has undergone continuous improvement (see e.g. Lynch 2006). At present there are perhaps a few tens of such models that operate globally, with various strengths and weaknesses. Their development is very far from complete: the systems they represent are complex, and the models very complicated, but they gain their effectiveness through being run continually, tested and re-tested against data, with new components being repeatedly improved and developed by multiple teams over the last 50 years. They are not simple to set up and run, but they persist over time and remain close to the state of the art and to the research community.

I suggest that we need something like this in agent-based modelling: a suite of communally developed models that are not abstract, but that represent substantial real systems, such as large cities, countries or regions; that are persistent and continually developed, on a code base that is largely stable; and, most importantly, that undergo continual testing and validation. At the moment this last part of the loop is not typically closed: models are developed and scenarios proposed, but the model is not then updated in the light of new evidence and then re-used and extended: the PhD has finished, or the project has ended, and the next new problem leads to a new model. Persistent models, being repeatedly run by many, would gradually have bugs and inconsistencies discovered and corrected (although new ones would also inevitably be introduced), could afford to be very complicated because continually tested, would be continually available for interpretation and the development of understanding, and would become steadily better documented. Accumulated sets of results would show their strengths and weaknesses for particular kinds of issues, and where more work was most urgently needed.

In this way, when, say, the mayor of London wanted to know the effect of a given policy, a set of state-of-the-art models of London would already exist which could be used to test out the policy given the best available current knowledge. The city model would be embedded in a larger model or models of the UK, or even the EU, so as to be sure that boundary conditions would not be a problem, and to see what the wider unanticipated consequences might be. The output from such models might be very uncertain: “forecasts” (saying what will happen, as opposed to what kind of things might happen) would not be the goal, but the history of repeated testing and output would demonstrate what level of confidence was warranted in the types of behaviour displayed by the results: preferably this would at least be better than random guesswork. Nor would such a set of models rule out or substitute for other kinds of model: idealised, theoretical, abstract and applied case studies would still be needed to develop understanding and new ideas.

This kind of model development for policy is already taking place to an extent (see e.g. Waldrop 2018), but is currently very limited. However, in the face of current urgent and pressing problems, such as climate change, ecosystem destruction, global financial insecurity, continuing widespread poverty and failure to approach sustainable development goals in any meaningful way, the ad-hoc make-a-new-model-every-time approach is inadequate. To build confidence in ABM as a tool that can be relied on for real-world policy we need persistent virtual worlds.

References

Lynch, P. (2006). The Emergence of Numerical Weather Prediction: Richardson’s Dream. Cambridge: Cambridge University Press.

Richardson, L. F. (1922). Weather Prediction by Numerical Process (reprinted 2007). Cambridge: Cambridge University Press.

Waldrop, M. (2018). Free Agents. Science, 360(6385), 144-147. DOI: 10.1126/science.360.6385.144


Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Why the social simulation community should tackle prediction

By Gary Polhill

(Part of the Prediction-Thread)

On 4 May 2002, Scott Moss (2002) reported in the Proceedings of the National Academy of Sciences of the United States of America that he had recently approached the e-mail discussion list of the International Institute of Forecasters to ask whether anyone had an example of a correct econometric forecast of an extreme event. None of the respondents were able to provide a satisfactory answer.

As reported by Hassan et al. (2013), on 28 April 2009, Scott Moss asked a similar question of the members of the SIMSOC mailing list: “Does anyone know of a correct, real-time, model-based policy-impact forecast?” [1] No-one responded with such an example, and Hassan et al. note that the ensuing discussion questioned why we are bothering with agent-based models (ABMs). Papers such as Epstein’s (2008) suggest this is not an uncommon conversation.

On 23 March 2018, I wrote an email [2] to the SIMSOC mailing list asking for expressions of interest in a prediction competition to be held at the Social Simulation Conference in Stockholm in 2018. I received two such expressions, and consequently announced on 10 May 2018 that the competition would go ahead. [3] By 22 May 2018, however, one of the two had pulled out because of lack of data, and I contacted the list to say the competition would be replaced with a workshop. [4]

Why the problem with prediction? As Edmonds (2017), discussing different modelling purposes, says, prediction is extremely challenging in the type of complex social system in which an agent-based model would justifiably be applied. He doesn’t go as far as stating that prediction is impossible; but with Aodha (2017, p. 819) he says, in the final chapter of the same book, that modellers should “stop using the word predict” and policymakers should “stop expecting the word predict”. At a minimum, this suggests a strong aversion to prediction within the social simulation community.

Nagel (1979) gives attention to why prediction is hard in the social sciences. Not least amongst the reasons offered is the fact that social systems may adapt according to predictions made – whether those predictions are right or wrong. Nagel gives two examples of this: suicidal predictions are those in which a predicted event does not happen because steps are taken to avert the predicted event; self-fulfilling prophecies are events that occur largely because they have been predicted, but arguably would not have occurred otherwise.

The advent of empirical ABM, as hailed by Janssen and Ostrom’s (2006) editorial introduction to a special issue of Ecology and Society on the subject, naturally raises the question of using ABMs to make predictions, at least insofar as “predict” in this context means using an ABM to generate new knowledge about the empirical world that can be tested by observing it. There are various reasons why developing ABMs with the purpose of prediction is a goal worth pursuing. Three of them are:

  • Developing predictions, Edmonds (2017) notes, is an iterative process, requiring testing and adapting a model against various data. Engaging with such a process with ABMs offers vital opportunities to learn and develop methodology, not least on the collection and use of data in ABMs, but also in areas such as model design, calibration, validation and sensitivity analysis. We should expect, or at least be prepared for, our predictions to fail often. Then, the value is in what we learn from these failures, both about the systems we are modelling, and about the approach taken.
  • There is undeniably a demand for predictions in complex social systems. That demand will not go away just because a small group of people claim that prediction is impossible. A key question is how we want that demand to be met. Presumably at least some of the people engaged in empirical ABM have chosen an agent-based approach over simpler, more established alternatives because they believe ABMs to be sufficiently better to be worth the extra effort of their development. We don’t know whether ABMs can be better at prediction, but such knowledge would at least be useful.
  • Edmonds (2017) says that predictions should be reliable and useful. Reliability pertains both to having a reasonable comprehension of the conditions of application of the model, and to the predictions being consistently right when those conditions apply. Usefulness means that the knowledge the prediction supplies is of value with respect to its accuracy. For example, a weather forecast stating that tomorrow the mean temperature on the Earth’s surface will be between –100 and +100 Celsius is not especially useful (at least to its inhabitants). However, a more general point is that we are accustomed to predictions being phrased in particular ways because of the methods used to generate them. Attempting prediction using ABM may lead to a situation in which we develop different language around prediction, which in turn could have added benefits: (a) gaining a better understanding of what ABM offers that other approaches do not; (b) managing the expectations of those who demand predictions regarding what predictions should look like. (A minimal sketch of one way such reliability could be tracked appears after this list.)
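
By way of illustration only (the model outputs, observations and tolerance below are invented placeholders, not real data or any established method), reliability of this kind could be tracked by logging each prediction alongside the later observation and reporting how often the prediction fell within a stated tolerance, compared with a naive baseline such as a “no change” forecast:

    def hit_rate(predictions, observations, tolerance):
        """Fraction of predictions that fell within the stated
        tolerance of what was subsequently observed."""
        hits = sum(1 for p, o in zip(predictions, observations)
                   if abs(p - o) <= tolerance)
        return hits / len(predictions)

    # Invented numbers purely for illustration.
    observed  = [10.2, 11.0, 9.8, 12.5, 13.1]   # what actually happened
    abm_preds = [10.0, 11.4, 10.1, 12.0, 12.8]  # the ABM's advance predictions
    baseline  = [10.0, 10.2, 11.0, 9.8, 12.5]   # naive "no change" forecast

    print("ABM hit rate:     ", hit_rate(abm_preds, observed, tolerance=0.5))
    print("Baseline hit rate:", hit_rate(baseline, observed, tolerance=0.5))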

Prediction is not the only reason to engage in a modelling exercise. However, if in future the social simulation community is asked for an example of a correct prediction from an ABM, it would be desirable to be able to point to a body of research and methodology that has been developed as a result of trying to achieve this aim, and ideally to be able to supply a number of examples of success. This would be better than a fraught conversation about the point of modelling, and consequent attempts to divert attention to all the other reasons to build an ABM that are not to do with prediction. To this end, it would be good if the social simulation community embraced the challenge, and provided a supportive environment to those with the courage to take it on.

Notes

  1. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;fb704db4.0904 (Cited in Hassan et al. (2013))
  2. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;14ecabbf.1803
  3. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SIMSOC;1802c445.1805
  4. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;ffe62b05.1805

References

Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 801-822.

Edmonds, B. (2017) Different modelling purposes. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 39-58.

Epstein, J. (2008) Why model? Journal of Artificial Societies and Social Simulation 11 (4), 12. http://jasss.soc.surrey.ac.uk/11/4/12.html

Hassan, S., Arroyo, J., Galán, J. M., Antunes, L. and Pavón, J. (2013) Asking the oracle: introducing forecasting principles into agent-based modelling. Journal of Artificial Societies and Social Simulation 16 (3), 13. http://jasss.soc.surrey.ac.uk/16/3/13.html

Janssen, M. A. and Ostrom, E. (2006) Empirically based, agent-based models. Ecology and Society 11 (2), 37. http://www.ecologyandsociety.org/vol11/iss2/art37/

Moss, S. (2002) Policy analysis from first principles. Proceedings of the National Academy of Sciences of the United States of America 99 (suppl. 3), 7267-7274. http://doi.org/10.1073/pnas.092080699

Nagel, E. (1979) The Structure of Science: Problems in the Logic of Scientific Explanation. Hackett Publishing Company.


Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)