By Bruce Edmonds, Gary Polhill and David Hales
(Part of the Prediction-Thread)
There is a lot of pressure on social scientists to predict. Not only is an ability to predict implicit in all requests to assess or optimise policy options before they are tried, but prediction is also the “gold standard” of science. However, there is a debate among modellers of complex social systems about whether this is possible to any meaningful extent. In this context, the aim of this paper is to issue the following challenge:
Are there any documented examples of models that predict useful aspects of complex social systems?
To do this the paper will:
- define prediction in a way that corresponds to what a wider audience might expect of it
- give some illustrative examples of prediction and non-prediction
- request examples where the successful prediction of social systems is claimed
- and outline the aspects on which these examples will be analysed
We start by defining prediction, taken from (Edmonds et al. 2019). This is a pragmatic definition designed to encapsulate common sense usage – what a wider public (e.g. policy makers or grant givers) might reasonably expect from “a prediction”.
By ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model.
Let us clarify the language in this.
- It has to be reliable. That is, one can rely upon the model when making predictions – a model that predicts erratically and only occasionally succeeds is no help, since one does not know whether to believe any particular prediction. This usually means that (a) it has made successful predictions in several independent cases and (b) the conditions under which it works are (roughly) known.
- What is predicted has to be unknown at the time of prediction. That is, the prediction has to be made before it is verified. Predicting known data (as when a model is checked on out-of-sample data) is not sufficient. Nor is the practice of looking for phenomena that are consistent with the results of a model after they have been generated (since this process ignores all the phenomena that are not consistent with the model).
- What is being predicted is well defined. That is, how to use the model to make a prediction about observed data is clear. An abstract model that is merely suggestive – one that appears to predict phenomena, but in a vague and undefined manner where one has to invent the mapping between model and data to make it work – may be useful as a way of thinking about phenomena, but this is different from empirical prediction.
- Which aspects of the data are predicted is left open. As Watts (2014) points out, prediction is not restricted to point numerical predictions of some measurable value but could concern a wider pattern. Examples of this include: a probabilistic prediction, a range of values, a negative prediction (this will not happen), or a second-order characteristic (such as the shape of a distribution or a correlation between variables). What is important is that (a) it is a useful characteristic to predict and (b) it can be checked by an independent actor. Thus, for example, when predicting a value, the accuracy required of that prediction depends on its use.
- The prediction has to use the model in an essential manner. Claiming to predict something obviously inevitable which does not use the model is insufficient – the model has to distinguish which of the possible outcomes is being predicted at the time.
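To make the "well defined" and "independently checkable" criteria above concrete, here is a minimal sketch (our own illustration, not taken from any of the models discussed) of a range prediction whose target, form, and verification procedure are all fixed before the data are known:

```python
def range_prediction(low, high):
    """Return a checker for a pre-registered range prediction of a yet-unknown value."""
    def check(observed):
        # True if the observed value fell inside the predicted range
        return low <= observed <= high
    return check

# Registered before the outcome is known (the numbers are invented for illustration):
predict_turnout = range_prediction(60.0, 70.0)   # e.g. turnout between 60% and 70%

# Verified afterwards by any independent actor, with no freedom to reinterpret:
print(predict_turnout(66.1))   # True  -> prediction succeeded
print(predict_turnout(54.2))   # False -> prediction failed
```

The point is not the triviality of the check but that nothing about it can be adjusted after the data arrive.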
Thus, prediction is different from other kinds of scientific/empirical uses, such as description and explanation (Edmonds et al. 2019). Some modellers use "prediction" to mean any output from a model, regardless of its relationship to any observation of what is being modelled. Others use "prediction" for any empirical fitting of data, regardless of whether that data was known beforehand. However, here we wish to be clearer and avoid any "post-truth" softening of the meaning of the word, for two reasons: (a) distinguishing different kinds of model use is crucial in matters of model checking or validation, and (b) these "softer" kinds of empirical purpose will simply confuse the wider public when we talk to them about "prediction". One suspects that modellers have accepted these other meanings because doing so allows them to claim they can predict (Edmonds 2017).
Nate Silver and his team aim to predict future social phenomena, such as the results of elections and the outcomes of sports competitions. He correctly predicted the winner in all 50 states in the 2012 US presidential election before it happened. This is a data-hungry approach, which involves the long-term development of simulations that carefully establish what can be inferred from the available data, with repeated trial and error. The forecasts are probabilistic and repeated many times. As well as making predictions, his unit tries to establish the level of uncertainty in those predictions – being honest about the probability of those predictions coming about, given the likely levels of error and bias in the data. These models are not agent-based but mostly statistical in nature, so it is debatable whether the targets are being treated as complex systems – the approach certainly does not use any theory from complexity science. His book (Silver 2012) describes his approach. Post hoc analysis of predictions – explaining why they worked or not – is kept distinct from the predictive models themselves: this analysis may inform changes to the predictive model but is not then incorporated into the model. The analysis is thus kept independent of the predictive model so it can act as an effective check.
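Probabilistic forecasts of this kind can still be checked for reliability once outcomes are known. One standard way to do so (a common scoring technique, not a description of Silver's actual method; the forecasters and numbers below are invented) is the Brier score, which averages the squared error between forecast probabilities and the 0/1 outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and binary outcomes.
    Lower is better; an always-50:50 forecaster scores exactly 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters over the same five contests (1 = event occurred):
outcomes  = [1, 0, 1, 1, 0]
confident = [0.9, 0.1, 0.8, 0.85, 0.2]   # well-calibrated and informative
hedging   = [0.5, 0.5, 0.5, 0.5, 0.5]    # always says 50:50

print(brier_score(confident, outcomes))   # ~0.0245
print(brier_score(hedging, outcomes))     # 0.25
```

Scoring over many independent contests is what distinguishes a reliable probabilistic forecaster from one that merely hedges – exactly the reliability criterion in our definition.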
Many models in economics and ecology claim to "predict" but, on inspection, this only means there is a fit to some empirical data. For example, Meese & Rogoff (1983) looked at 40 econometric models whose authors claimed to be predicting some time series. However, 37 of the 40 models failed completely when tested on newly available data from the same time series they claimed to predict. Clearly, although presented as predictive models, they could not predict unknown data. Although we do not know for sure, presumably these models had been (explicitly or implicitly) fitted to the out-of-sample data, because the out-of-sample data was already known to the modeller. That is, if a model failed to fit the out-of-sample data when tested, it was adjusted until it did, or alternatively, only those models that fitted the out-of-sample data were published.
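This failure mode can be sketched in a few lines. In the toy example below (the data, the grid of candidate constant "models", and all numbers are our own illustration), selecting whichever model best fits the supposedly out-of-sample data quietly makes that data part of the calibration, so the impressive holdout fit says nothing about genuinely new data:

```python
def mse(c, data):
    """Mean squared error of a trivial 'model' that always predicts the constant c."""
    return sum((c - x) ** 2 for x in data) / len(data)

holdout = [0.4, 0.6, 0.5, 0.3, 0.7]      # "out-of-sample" data the modeller had seen
future  = [-0.2, 0.1, -0.4, 0.0, -0.5]   # data unknown when the model was published

# "Model selection" that peeks at the holdout: try many candidates and keep
# whichever fits the holdout best -- the holdout is now calibration data.
candidates = [i / 10 for i in range(-20, 21)]
best = min(candidates, key=lambda c: mse(c, holdout))

print(best)                  # 0.5 -- tuned to the holdout
print(mse(best, holdout))    # ~0.02  -- looks impressively "predictive"
print(mse(best, future))     # ~0.542 -- fails on genuinely new data
```

Only performance on data that was unavailable at the time the model was fixed – as in the Meese & Rogoff test – counts as evidence of prediction in our sense.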
The challenge is envisioned as happening like this.
- We publicise this paper, requesting that people send us examples of prediction or near-prediction concerning complex social systems, with pointers to the appropriate documentation.
- We collect these and analyse them according to the characteristics and questions described below.
- We will post some interim results in January 2020, in order to prompt more examples and to stimulate discussion. The final deadline for examples is the end of March 2020.
- We will publish the list of all the examples sent to us on the web, and present our summary and conclusions at Social Simulation 2020 in Milan and have a discussion there about the nature and prospects for the prediction of complex social systems. Anyone who contributed an example will be invited to be a co-author if they wish to be so-named.
How suggestions will be judged
For each suggestion, a number of answers will be sought – namely to the following questions:
- What are the papers or documents that describe the model?
- Is there an explicit claim that the model can predict (as opposed to might in the future)?
- What kind of characteristics are being predicted (number, probabilistic, range…)?
- Is there evidence of a prediction being made before the prediction was verified?
- Is there evidence of the model being used for a series of independent predictions?
- Were any of the predictions verified by a team that is independent of the one that made the prediction?
- Is there evidence of the same team or similar models making failed predictions?
- To what extent did the model need extensive calibration/adjustment before the prediction?
- What role does theory play (if any) in the model?
- Are the conditions under which predictive ability is claimed described?
Of course, negative answers to any of the above about a particular model do not mean that the model cannot predict. What we are assessing is the evidence that a model can predict something meaningful about complex social systems. Silver (2012) describes the method by which his team attempts prediction, but this method might be different from those described in most theory-based academic papers.
This exercise might shed some light on some interesting questions, such as:
- What kind of prediction of complex social systems has been attempted?
- Are there any examples where the reliable prediction of complex social systems has been achieved?
- Are there certain kinds of social phenomena which seem to be more amenable to prediction than others?
- Does aiming to predict with a model entail any difference in method from projects with other aims?
- Are there any commonalities among the projects that achieve reliable prediction?
- Is there anything we could (collectively) do that would encourage or document good prediction?
It might well be that whether prediction is achievable depends on exactly what is meant by the word.
This paper resulted from a "lively discussion" after Gary's talk (Polhill et al. 2019) about prediction at the Social Simulation conference in Mainz. Many thanks to all those who joined in this. Of course, prior to this we have had many discussions about prediction. These have included Gary's previous attempt at a prediction competition (Polhill 2018) and Scott Moss's arguments about prediction in economics (which have many parallels with the debate here).
Notes
1. This is sufficient for other empirical purposes, such as explanation (Edmonds et al. 2019).
2. Confusingly, they sometimes use the word "forecasting" for what we mean by "prediction" here.
3. Assuming we have any submitted examples to talk about.
Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidsson, P. & Verhagen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1 (see also http://cfpm.org/discussionpapers/236)
Edmonds, B. (2017) The post-truth drift in social simulation. Social Simulation Conference (SSC2017), Dublin, Ireland. (http://cfpm.org/discussionpapers/195)
Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. & Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.
Grimm V, Revilla E, Berger U, Jeltsch F, Mooij WM, Railsback SF, Thulke H-H, Weiner J, Wiegand T, DeAngelis DL (2005) Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science 310: 987-991.
Meese, R.A. & Rogoff, K. (1983) Empirical Exchange Rate models of the Seventies – do they fit out of sample? Journal of International Economics, 14:3-24.
Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/
Polhill, G., Hare, H., Anzola, D., Bauermann, T., French, T., Post, H. and Salt, D. (2019) Using ABMs for prediction: Two thought experiments and a workshop. Social Simulation 2019, Mainz.
Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.
Thorngate, W. & Edmonds, B. (2013) Measuring simulation-observation fit: An introduction to ordinal pattern analysis. Journal of Artificial Societies and Social Simulation, 16(2):14. http://jasss.soc.surrey.ac.uk/16/2/4.html
Watts, D. J. (2014). Common Sense and Sociological Explanations. American Journal of Sociology, 120(2), 313-351.
Edmonds, B., Polhill, G and Hales, D. (2019) Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge
© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)