
Basic Modelling Hygiene – keep descriptions about models and what they model clearly distinct

By Bruce Edmonds

The essence of a model is that it relates to something else – what it models – even if this is only a vague or implicit mapping. Otherwise a model would be indistinguishable from any other computer code, set of equations, etc. (Hesse 1964; Wartofsky 1966). Given how central this essence is, it is unsurprising that many modellers seem to conflate the two.

This is made worse by three factors.

  1. A strong version of Kuhn’s “Spectacles” (Kuhn 1962) where the researcher goes beyond using the model as a way of thinking about the world to projecting their model onto the world, so they see the world only through that “lens”. This effect seems to be much stronger for simulation modelling due to the intimate interaction that occurs over a period of time between modellers and their model.
  2. It is a natural modelling heuristic to make the model more like what it models (Edmonds et al. 2019), introducing more elements of realism. This is especially strong with agent-based modelling, which lends itself to complication and descriptive realism.
  3. It is advantageous to stress the potential connections between a model (however abstract) and possible application areas. It is common to start an academic paper with a description of a real-world issue to motivate the work being reported; then (even if the work is entirely abstract and unvalidated) to suggest real-world conclusions from what is observed in the model. A lack of substantiated connections between the model and any empirical data can be covered up by slickly passing from the world to the model and back again, and by a lack of clarity as to what the research achieves (Edmonds et al. 2019).

Whatever the reasons, the result is similar: the language used to describe entities, processes and outcomes in the model is the same as that used to describe what is intended to be modelled.

Such conflation is common in academic papers (albeit to different degrees). Expert modellers will not usually be confused by such language because they understand the modelling process and know what to look for in a paper. One might therefore ask, what is the harm of a little rhetoric and hype in the reporting of models? After all, we want modellers to be motivated and should thus be tolerant of their enthusiasm. To show the danger, I will look at an example that talks about modelling aspects of ethnocentrism.

In their paper, entitled “The Evolutionary Dominance of Ethnocentric Cooperation”, Hartshorn, Kaznatcheev & Shultz (2013) further analyse the model described in Hammond & Axelrod (2006). The authors reimplemented the original model and analysed it extensively, especially its temporal dynamics. The paper is solely about the original model and its properties; there is no pretence of any validation or calibration against data. The problem lies in the language used, because it could equally well refer to the model or to the real world.

Take the first sentence of its abstract: “Recent agent-based computer simulations suggest that ethnocentrism, often thought to rely on complex social cognition and learning, may have arisen through biological evolution”. This sounds as if the simulation suggests something about the world we live in – that, as the title implies, ethnocentric cooperation naturally dominates other strategies (e.g. humanitarianism) and so is natural. The rest of the abstract continues in the same sort of language, which could apply equally to the model or to the real world.

Expert modellers will understand that the authors were talking about the purely abstract properties of the model, but this will not be clear to other readers. And in this case there is evidence that it is a problem. The paper has, in recent years, shot to the top of page requests on the JASSS website (162,469 requests over a 7-day period, as of 22nd May 2020), but is nowhere in the top 50 articles in terms of JASSS-to-JASSS citations. Tracing where these requests come from leads to many alt-right and Russian web sites. It seems that many on the far right see this paper as confirmation of their nationalist and racist viewpoints. This is far more attention than a technical paper purely about a model would get, so presumably they took it as confirmation of real-world conclusions (or were using it to mislead others about the scientific support for their viewpoints) – namely that ethnocentrism does beat humanitarianism and that this is an evolutionary inevitability [note 1].

This is an extreme example of the confusion that can occur when non-experts read modelling papers. Modellers too often imply a degree of real-world relevance that is not justified by their research, and suggest real-world conclusions before any meaningful validation has been done. As agent-based simulation reaches a less specialised audience, this will only become more important.

Some suggestions to avoid this kind of confusion:

  • After the motivation section, carefully outline what part this research will play in the broader programme – do not leave this implicit or imply a larger role than is justified
  • Add in the phrase “in the model” frequently in the text, even if this is a bit repetitive [note 2]
  • Keep discussions about the real world in different sections from those that discuss the model
  • Have an explicit statement of what the model can reliably say about the real world
  • Use different terms when referring to parts of the model and parts of the real world (e.g. “actors” for real-world individuals, “agents” for their counterparts in the model)
  • Be clear about the intended purpose of the model – what can be achieved as a result of this research (Edmonds et al. 2019) – for example, do not imply the model will be able to predict future real world properties until this has been demonstrated (de Matos Fernandes & Keijzer 2020)
  • Be very cautious in what you conclude from your model – make sure this is what has been already achieved rather than a reflection of your aspirations (in fact it might be better to not mention such hopes at all until they are realised)

Notes

  1. To see that this kind of conclusion is not necessary see (Hales & Edmonds 2019).
  2. This is similar to a campaign to add the words “in mice” in reports about medical “breakthroughs”, (https://www.statnews.com/2019/04/15/in-mice-twitter-account-hype-science-reporting)

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, grant number ES/S015159/1.

References

Edmonds, B., et al. (2019) Different Modelling Purposes, Journal of Artificial Societies and Social Simulation 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html>. doi:10.18564/jasss.3993

Hammond, R. A. and Axelrod, R. (2006). The Evolution of Ethnocentrism. Journal of Conflict Resolution, 50(6), 926–936. doi:10.1177/0022002706293470

Hartshorn, Max, Kaznatcheev, Artem and Shultz, Thomas (2013) The Evolutionary Dominance of Ethnocentric Cooperation, Journal of Artificial Societies and Social Simulation 16(3), 7. <http://jasss.soc.surrey.ac.uk/16/3/7.html>. doi:10.18564/jasss.2176

Hesse, M. (1964). Analogy and confirmation theory. Philosophy of Science, 31(4), 319-327.

Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Univ. of Chicago Press.

de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Wartofsky, M. (1966). The Model Muddle: Proposals for an Immodest Realism. Journal of Philosophy, 63(19), 589.


Edmonds, B. (2020) Basic Modelling Hygiene - keep descriptions about models and what they model clearly distinct. Review of Artificial Societies and Social Simulation, 22nd May 2020. https://rofasss.org/2020/05/22/modelling-hygiene/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Don’t try to predict COVID-19. If you must, use Deep Uncertainty methods

By Patrick Steinmann (a), Jason R. Wang (b), George A. K. van Voorn (a), and Jan H. Kwakkel (b)

(a) Biometris, Wageningen University & Research, Wageningen, the Netherlands; (b) Delft University of Technology, Faculty of Technology, Policy & Management, Delft, the Netherlands

(A contribution to the: JASSS-Covid19-Thread)

Abstract

We respond to the recent JASSS article on COVID-19 and computational modelling. We disagree with the authors on one major point and note the lack of discussion of a second one. We believe that COVID-19 cannot be predicted numerically, and attempting to make decisions based on such predictions will cost human lives. Furthermore, we note that the original article only briefly comments on uncertainty. We urge those attempting to model COVID-19 for decision support to acknowledge the deep uncertainties surrounding the pandemic, and to employ Decision Making under Deep Uncertainty methods such as exploratory modelling, global sensitivity analysis, and robust decision-making in their analysis to account for these uncertainties.

Introduction

We read the recent article in the Journal of Artificial Societies and Social Simulation on predictive COVID-19 modelling (Squazzoni et al. 2020) with great interest. We agree with the authors on many general points, such as the need for rigorous and transparent modelling and documentation. However, we were dismayed that the authors focused solely on how to make predictive simulation models of COVID-19 without first discussing whether making such models is appropriate under the current circumstances. We believe this question is of greater importance, and that the answer will likely disappoint many in the community. We also note that the original piece does not engage substantively with methods of modelling and model analysis specifically designed for making time-critical decisions under uncertainty.

We respond to the call issued by the Review of Artificial Societies and Social Simulation for responses and opinions on predictive modelling of COVID-19. In doing so, we go above and beyond the recent RofASSS contribution by de Matos Fernandes & Keijzer (2020)—rather than saying that definite “predictions” should be replaced by probabilistic “expectations”, we contend that no probabilities whatsoever should be applied when modelling systems as uncertain as a global pandemic. This is presented in the first section. In the second section, we discuss how those with legitimate need for predictive epidemic modelling should approach their task, and which tools might be beneficial in the current context. In the last section, we summarize our opinions and issue our own challenges to the community.

To Model or Not to Model COVID-19, That Is the Question

The recent call attempts to lay out a path for using simulation modelling to forecast the COVID-19 epidemic. However, there is no critical reflection on the question of whether modelling is the appropriate tool for this, under the current circumstances. The authors argue that with sufficient methodological rigour, high-quality data and interdisciplinary collaboration, complex outcomes (such as the COVID-19 epidemic) can be predicted well and quickly enough to provide actionable decision support.

Computational modelling is difficult in the best of times. Even models with seemingly simple structure can have emergent behavior rendering them perfectly random (Wolfram 1983) or Turing complete (Cook 2004). Drawing any kind of conclusion from a simulation model, especially in the life-and-death context of pandemic decision making, must be done carefully and with respect for uncertainty. If, for whatever reason, this cannot be done, then modelling is not the right tool to answer the question at hand (Thompson & Smith 2019). The numerical nature of models is seductive, but must be employed wisely to avoid “useless arithmetic” (Pilkey-Jarvis & Pilkey 2008) or statistical fallacies (Benessia et al. 2016).
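
As a small, self-contained illustration of that first point (not of any epidemic model), the following Python sketch, assuming only NumPy is available, simulates an elementary cellular automaton: rule 30 has just an eight-entry rule table, yet its centre column behaves like a random sequence (Wolfram 1983), and rule 110 is even Turing complete (Cook 2004).

```python
import numpy as np

def run_eca(rule, width=101, steps=60):
    """Run an elementary cellular automaton from a single live centre cell."""
    # Bit i of the rule number gives the next state for neighbourhood value i,
    # where the neighbourhood (left, self, right) is read as a 3-bit number.
    table = [(rule >> i) & 1 for i in range(8)]
    state = np.zeros(width, dtype=int)
    state[width // 2] = 1
    history = [state.copy()]
    for _ in range(steps):
        left, right = np.roll(state, 1), np.roll(state, -1)  # periodic boundaries
        state = np.array([table[4 * l + 2 * c + r] for l, c, r in zip(left, state, right)])
        history.append(state.copy())
    return np.array(history)

centre_column = run_eca(30)[:, 101 // 2]  # rule 30: a trivially simple rule, erratic output
print("".join(map(str, centre_column)))
```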

Trying to skilfully predict how the COVID-19 outbreak will evolve regionally or globally is a fool’s errand. Epistemic uncertainties about key parameters and processes describing the disease abound. Human behaviour is changing in response to the outbreak. Research and development burgeon in many sciences with presently unknowable results. Anyone claiming to know where the world will be in even a few weeks is at best delusional. Uncertainty is aggravated by the problem of equifinality (Oreskes et al. 1994). For any simulation model of COVID-19, there will be many different parametrizations that fit the available data about equally well. Much of this is acknowledged by Squazzoni et al. (2020), yet inexplicably they still call for developing probabilistic forecasts of the outbreak using empirically validated models. We instead contend that “about these matters, there is no scientific basis on which to form any calculable probability” (Keynes 1937), and that validation should be based on usefulness in aiding time-urgent decision-making, rather than predictive accuracy (Pielke 2003). However, the capacity for such policy-oriented modelling must be built between pandemics, not during them (Rivers et al. 2019).

This call to abstain from predicting COVID-19 does not imply that the broader community should refrain from modelling completely. The illustrative power of simple models has been amply demonstrated in various media outlets. We do urge modellers not to frame their work as predictive (e.g. “How Pandemics Can End”, rather than “How COVID-19 Will End”), and to use watermarks where possible to indicate that the shown work is not predictive. There is also ample opportunity to use simulation modelling to solve ancillary problems. For example, established transport and logistics models could be adapted to ensure supply of critical healthcare equipment is timely and efficient. Similarly, agri-food models could explore how to secure food production and distribution under labour shortages. These can be vital, though less sensational, contributions of simulation modelling to the ongoing crisis.

Deep Uncertainty: How to Predict COVID-19, if(f) You Must

Deep Uncertainty (Lempert et al. 2003) is present when analysts cannot know, or stakeholders cannot agree on:

  1. The probability distributions relevant to unknown system variables,
  2. The relations and mechanisms present in the system, and/or
  3. The metrics by which future system states should be evaluated.

All three conditions are present in the case of the COVID-19 pandemic. To give a brief example of each, we know very little about asymptomatic infections, whether a vaccine will ever become available, and whether the socio-psychological and economic impacts of a “flattened curve” future are bearable (and by whom). The field of Decision Making under Deep Uncertainty has been working on problems of a similar nature for many years already, and developed a variety of tools to analyse such problems (Marchau et al. 2019). These methods may be beneficial for designing COVID-19 policies with simulation models—if, as discussed previously, this is appropriate. In the following, we present three such methods and their potential value for COVID-19 decision support: exploratory modelling, global sensitivity analysis, and robust decision-making.

Exploratory modelling (Bankes 1993) is a conceptual approach to using simulation models for policy analysis. It emerged in response to the question of how models that cannot be empirically validated can still be used to inform planning and decision-making (Hodges 1991, Hodges & Dewar 1992). Instead of consolidating increasing amounts of knowledge into “the” model of a system, exploratory modelling advocates using wide uncertainty ranges for unknown parameters to generate a large ensemble of plausible futures, with no predictive or probabilistic power attached or implied a priori (Shortridge & Zaitchik 2018). This ensemble may represent a variety of assumptions, theories, and system structures. It could even be generated using a multitude of models (Page 2018; Smaldino 2017) and metrics (Manheim 2018). By reasoning across such an ensemble, insights agnostic to specific assumptions may be reached, sidestepping the a priori biases inherent in examining only a small set of scenarios, as the COVID-19 policy models observed by the authors do. Reasoning across such limited sets obscures policy-relevant futures which emerge as hybrids of pre-specified positive and negative narratives (Lamontagne et al. 2018). In the context of the COVID-19 pandemic, exploratory modelling could be used to contrast a variety of assumptions about disease transmission mechanisms (e.g., the role of schools, children, or asymptomatic cases in the speed of the outbreak), reinfection potential, or adherence to social distancing norms. Many ESSA members are already familiar with such methods—NetLogo’s BehaviorSpace function is a prime example. The Exploratory Modelling & Analysis Workbench (Kwakkel 2017) provides similar, platform-agnostic functionality by means of a Python interface. We encourage all modellers to embrace such tools, and to be honest about which parameters and structural assumptions are uncertain, how uncertain they are, and how this affects the inferences that can and cannot be made based on the results from the model.
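
As a minimal sketch of this style of analysis, and emphatically not a real COVID-19 model, the Python code below samples wide uniform ranges for two deeply uncertain parameters of a toy SIR model and then reasons across the resulting ensemble of plausible futures without attaching probabilities to any of them. All parameter names, ranges and the model itself are illustrative assumptions; in practice, tools such as NetLogo’s BehaviorSpace or the Exploratory Modelling & Analysis Workbench support this workflow directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_peak_infected(beta, gamma, i0=1e-4, days=365):
    """Discrete-time toy SIR model; returns the peak infected fraction."""
    s, i, peak = 1.0 - i0, i0, i0
    for _ in range(days):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

# Wide, uniform ranges for deeply uncertain parameters (illustrative values only).
n = 5000
betas = rng.uniform(0.05, 0.6, n)    # transmission rate per day
gammas = rng.uniform(0.05, 0.25, n)  # recovery rate per day

peaks = np.array([sir_peak_infected(b, g) for b, g in zip(betas, gammas)])

# Reason across the ensemble without attaching probabilities to any single future:
# e.g. in what share of plausible futures does peak infection exceed 20%?
print("share of futures with peak infected > 20%:", np.mean(peaks > 0.2))
```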

Global sensitivity analysis (Saltelli 2004) is a method for studying both the individual importance of uncertain parameters and their interactions in shaping the outputs of a simulation model. Many simulation modellers are already familiar with local sensitivity analysis, where parameters are varied one at a time to ascertain their individual effect on model output. This is insufficient for studying parameter interactions in non-linear systems (Saltelli et al. 2019; ten Broeke et al. 2016). In global sensitivity analysis, combinations of parameters are varied and studied simultaneously, illuminating their joint or interaction effects. This is critical for the rigorous study of complex system models, where parameters may have unexpected, non-linear interactions. In the context of the COVID-19 epidemic, we have seen at least two public health agencies perform local sensitivity analysis over small parameter ranges, which may blind decision makers to worst-case futures (Siegenfeld & Bar-Yam 2020). Global sensitivity analysis might reveal how different assumptions for e.g. duration of Intensive Care (IC) and age-related case severity may interact to create a “perfect storm” of IC need. A collection of global sensitivity analysis methods has been implemented for Python in the SALib package (Herman & Usher 2017), and how to use these with NetLogo is illustrated in Jaxa-Rozen & Kwakkel (2018).
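
The sketch below indicates how such a global (Sobol’) sensitivity analysis could be set up with the SALib package mentioned above. The “model” being analysed, and the parameter names and bounds, are illustrative stand-ins rather than a calibrated COVID-19 model.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Illustrative problem definition: three uncertain parameters of a toy epidemic model.
problem = {
    "num_vars": 3,
    "names": ["beta", "gamma", "icu_rate"],
    "bounds": [[0.05, 0.6], [0.05, 0.25], [0.01, 0.1]],
}

def peak_icu_demand(params):
    """Toy stand-in for a simulation model: peak ICU demand as a population fraction."""
    beta, gamma, icu_rate = params
    s, i, peak = 0.9999, 0.0001, 0.0
    for _ in range(365):
        s, i = s - beta * s * i, i + beta * s * i - gamma * i
        peak = max(peak, i * icu_rate)
    return peak

param_values = saltelli.sample(problem, 1024)             # Sobol' sampling design
y = np.array([peak_icu_demand(p) for p in param_values])
indices = sobol.analyze(problem, y)
print("first-order indices:", indices["S1"])              # individual parameter effects
print("total-order indices:", indices["ST"])              # includes interaction effects
```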

Robust Decision Making (RDM) (Lempert et al. 2006) is a general analytic method for designing policies which are robust across uncertainties—they perform well regardless of which future actually materializes. Policies are designed by iteratively stress-testing them across ensembles of plausible futures representing different assumptions, theories, and input parameter combinations. This represents a departure from established, probabilistic risk management approaches, which are inappropriate for fat-tailed processes such as pandemics (Norman et al. 2020). More recently, RDM has been extended to Dynamic Adaptive Policy Pathways (DAPP) (Kwakkel et al. 2015) by incorporating adaptive policies conditioned on specific triggers or signposts identified in exploratory modeling runs. In the context of the COVID-19 epidemic, DAPP might be used to design policies which can adapt as the situation develops (Hamarat et al. 2012)—possibly representing a transparent and verifiable approach to implementing the “hammer and dance” epidemic suppression strategy which has been widely discussed in popular media. Thinking in terms of pathways conditional on how the outbreak evolves is also a more realistic way of preparing for the dance: rather than following a timeline set by humans, the timeline is determined by the virus. All we can do is specify the conditions under which certain types of action will be taken.
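
To give a flavour of the robustness reasoning involved, the following sketch ranks candidate policies by minimax regret over an ensemble of plausible futures. The outcome table is filled with random placeholder values standing in for results from an exploratory-modelling ensemble, and the policy names are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical outcome table: rows are candidate policies, columns are plausible futures.
# Entries stand in for a cost-like outcome (e.g. peak ICU shortfall) from an ensemble.
policies = ["no action", "fixed lockdown", "adaptive triggers"]
outcomes = rng.uniform(0.0, 1.0, size=(len(policies), 200))

# Regret of a policy in a future = its outcome minus the best outcome achievable there.
regret = outcomes - outcomes.min(axis=0)

# A robust policy (in the minimax-regret sense) minimises its worst-case regret.
worst_case_regret = regret.max(axis=1)
for name, wcr in zip(policies, worst_case_regret):
    print(f"{name}: worst-case regret {wcr:.3f}")
print("most robust policy:", policies[int(np.argmin(worst_case_regret))])
```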

Conclusions: Please Don’t. If You Must, Use Deep Uncertainty methods.

We have raised two points of importance which are not discussed in a recent article on COVID-19 predictive modelling in JASSS. In particular, we have proposed that the question of whether such models should be created must precede any discussion of how to do so. We have argued that complex outcomes such as epidemics cannot reliably be predicted using simulation models, as there are numerous uncertainties that significantly affect possible future system states. However, models may still be useful in times of crisis, if created and used appropriately. Furthermore, we have noted that there exists an entire field of study focusing on Decision Making under Deep Uncertainty, and that model analysis methods for situations like this already exist. We have briefly highlighted three methods—exploratory modelling, global sensitivity analysis, and robust decision-making—and given examples of how they might be used in the present context.

Stemming from these two points, we issue our own challenges to the ESSA modelling community and the field of systems simulation in general:

  • COVID-19 prediction distancing challenge: Do not attempt to predict the COVID-19 epidemic.
  • COVID-19 deep uncertainty challenge: If you must predict the COVID-19 epidemic, embrace deep uncertainty principles, including transparent treatment of uncertainties, exploratory modeling, global sensitivity analysis, and robust decision-making.

References

Bankes, S. (1993). Exploratory Modeling for Policy Analysis. Operations Research, 41(3), 435–449. doi: 10.1287/opre.41.3.435

Benessia, A., Funtowicz, S., Giampietro, M., Guimarães Pereira, A., Ravetz, J. R., Saltelli, A., Strand, R., & van der Sluijs, J. P. (2016). Science on the Verge. Consortium for Science, Policy & Outcomes Tempe, AZ and Washington, DC.

Cook, M. (2004). Universality in Elementary Cellular Automata. Complex Systems, 15(1), 1–40.

de Matos Fernandes, C. A., & Keijzer, M. A. (2020). No one can predict the future: More than a semantic dispute. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Hamarat, C., Kwakkel, J., & Pruyt, E. (2012). Adaptive Policymaking under Deep Uncertainty: Optimal Preparedness for the Next Pandemic. Proceedings of the 30th International Conference of the System Dynamics Society.

Herman, J., & Usher, W. (2017). SALib: An open-source Python library for Sensitivity Analysis. Journal of Open Source Software, 2(9), 97. doi:10.21105/joss.00097

Jaxa-Rozen, M., & Kwakkel, J. H. (2018). PyNetLogo: Linking NetLogo with Python. Journal of Artificial Societies and Social Simulation, 21(2). <http://jasss.soc.surrey.ac.uk/21/2/4.html> doi:10.18564/jasss.3668

Keynes, J. M. (1937). The General Theory of Employment. The Quarterly Journal of Economics, 51(2), 209. doi:10.2307/1882087

Kwakkel, J. H. (2017). The Exploratory Modeling Workbench: An open source toolkit for exploratory modeling, scenario discovery, and (multi-objective) robust decision making. Environmental Modelling & Software, 96, 239–250. doi:10.1016/j.envsoft.2017.06.054

Kwakkel, J. H., Haasnoot, M., & Walker, W. E. (2015). Developing dynamic adaptive policy pathways: a computer-assisted approach for developing adaptive strategies for a deeply uncertain world. Climatic Change, 132(3), 373–386. doi:10.1007/s10584-014-1210-4

Lamontagne, J. R., Reed, P. M., Link, R., Calvin, K. V., Clarke, L. E., & Edmonds, J. A. (2018). Large Ensemble Analytic Framework for Consequence-Driven Discovery of Climate Change Scenarios. Earth’s Future, 6(3), 488–504. doi:10.1002/2017EF000701

Lempert, R. J., Groves, D. G., Popper, S. W., & Bankes, S. C. (2006). A general, analytic method for generating robust strategies and narrative scenarios. Management Science, 52(4), 514–528. doi:10.1287/mnsc.1050.0472

Lempert, R. J., Popper, S., & Bankes, S. (2003). Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. doi:10.7249/mr1626

Manheim, D. (2018). Building Less Flawed Metrics: Dodging Goodhart and Campbell’s Laws. In MPRA.

Marchau, V. A. W. J., Walker, W. E., Bloemen, P. J. T. M., & Popper, S. W. (Eds.). (2019). Decision Making under Deep Uncertainty. Springer International Publishing. doi:10.1007/978-3-030-05252-2

Norman, J., Bar-Yam, Y., & Taleb, N. N. (2020). Systemic Risk of Pandemic via Novel Pathogens – Coronavirus: A Note. New England Complex Systems Institute. http://arxiv.org/abs/1410.5787

Oreskes, N., Shrader-Frechette, K., & Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263(5147), 641–646. doi:10.1126/science.263.5147.641

Page, S. E. (2018). The model thinker: what you need to know to make data work for you. Hachette UK.

Pilkey-Jarvis, L., & Pilkey, O. H. (2008). Useless Arithmetic: Ten Points to Ponder When Using Mathematical Models in Environmental Decision Making. Public Administration Review, 68(3), 470–479. doi:10.1111/j.1540-6210.2008.00883_2.x

Rivers, C., Chretien, J. P., Riley, S., Pavlin, J. A., Woodward, A., Brett-Major, D., Maljkovic Berry, I., Morton, L., Jarman, R. G., Biggerstaff, M., Johansson, M. A., Reich, N. G., Meyer, D., Snyder, M. R., & Pollett, S. (2019). Using “outbreak science” to strengthen the use of models during epidemics. Nature Communications, 10(1), 9–11. doi:10.1038/s41467-019-11067-2

Saltelli, A. (2004). Global sensitivity analysis: an introduction. Proc. 4th International Conference on Sensitivity Analysis of Model Output (SAMO’04), 27–43.

Saltelli, A., Aleksankina, K., Becker, W., Fennell, P., Ferretti, F., Holst, N., Li, S., & Wu, Q. (2019). Why so many published sensitivity analyses are false: A systematic review of sensitivity analysis practices. Environmental Modelling and Software, 114(March 2018), 29–39. doi:10.1016/j.envsoft.2019.01.012

Shortridge, J. E., & Zaitchik, B. F. (2018). Characterizing climate change risks by linking robust decision frameworks and uncertain probabilistic projections. Climatic Change, 151(3–4), 525–539. doi:10.1007/s10584-018-2324-x

Siegenfeld, A. F., & Bar-Yam, Y. (2020). What models can and cannot tell us about COVID-19 (pp. 1–3). New England Complex Systems Institute.

Smaldino, P. E. (2017). Models are stupid, and we need more of them. Computational Social Psychology, 311–331. doi:10.4324/9781315173726

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F., & Gilbert, N. (2020). Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2), 10. <http://jasss.soc.surrey.ac.uk/23/2/10.html> doi:10.18564/jasss.4298

ten Broeke, G., van Voorn, G. A. K., & Ligtenberg, A. (2016). Which Sensitivity Analysis Method Should I Use for My Agent-Based Model? Journal of Artificial Societies and Social Simulation, 19(1), 1–35. <http://jasss.soc.surrey.ac.uk/19/1/5.html> doi:10.18564/jasss.2857

Thompson, E. L., & Smith, L. A. (2019). Escape from model-land. Economics: The Open-Access, Open-Assessment E-Journal. doi:10.5018/economics-ejournal.ja.2019-40

Wolfram, S. (1983). Statistical mechanics of cellular automata. Reviews of Modern Physics, 55(3), 601–644. doi:10.1103/RevModPhys.55.601


Steinmann, P., Wang, J. R., van Voorn, G. A. K. and Kwakkel, J. H. (2020) Don’t try to predict COVID-19. If you must, use Deep Uncertainty methods. Review of Artificial Societies and Social Simulation, 17th April 2020. https://rofasss.org/2020/04/17/deep-uncertainty/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

No one can predict the future: More than a semantic dispute

By Carlos A. de Matos Fernandes and Marijn A. Keijzer

(A contribution to the: JASSS-Covid19-Thread)

Models are pivotal to battle the current COVID-19 crisis. In their call to action, Squazzoni et al. (2020) convincingly put forward how social simulation researchers could and should respond in the short run by posing three challenges for the community, among which is a COVID-19 prediction challenge. Although Squazzoni et al. (2020) stress the importance of transparent communication of model assumptions and conditions, we question the liberal use of the word ‘prediction’ for the outcomes of the broad arsenal of models used by our own and other modelling communities to mitigate the COVID-19 crisis. Four key arguments are provided that advocate using expectations derived from scenarios when explaining our models to a wider, possibly non-academic audience.

The current COVID-19 crisis necessitates that we implement life-changing policies that, to a large extent, build upon predictions from complex, quickly adapted, and sometimes poorly understood models. The examples of models spurring the news to produce catchphrase headlines are abundant (Imperial College, AceMod-Australian Census-based Epidemic Model, IndiaSIM, IHME, etc.). And even though most of these models will be useful for assessing the comparative effectiveness of interventions in our aim to ‘flatten the curve’, the predictions that reach the news media are those of total case numbers or the timing of the inflection point.

The current focus on predictive epidemiological and behavioural models brings back an important discussion about prediction in social systems. “[T]here is a lot of pressure for social scientists to predict” (Edmonds, Polhill & Hales, 2019), and we might add ‘especially nowadays’. But forecasting in human systems is often tricky (Hofman, Sharma & Watts, 2017). Approaches that take well-understood theories and simple mechanisms often fail to grasp the complexity of social systems, yet models that rely on complex supervised machine learning-like approaches may offer misleading levels of confidence (as was elegantly shown recently by Salganik et al., 2020). COVID-19 models appear to be no exception as a recent review concluded that “[…] their performance estimates are likely to be optimistic and misleading” (Wynants et al., 2020, p. 9). Squazzoni et al. describe these pitfalls too (2020: paragraph 3.3). In the crisis at hand, it may even be counter-productive to rely on complex models that combine well-understood mechanisms with many uncertain parameters (Elsenbroich & Badham, 2020).

Considering the level of confidence we can have in predictive models in general, we believe there is an issue with the way predictions are communicated by the community. Scientists often use ‘prediction’ to refer to some outcome of a (statistical) model where they ‘predict’ aspects of the data that are already known, but momentarily set aside. Edmonds et al. (2019: paragraph 2.4) state that “[b]y ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model”. Predictive accuracy, in this case, can then be computed later on, by comparing the prediction to the truth. Scientists know that when talking about predictions of their models, they don’t claim to generalize to situations outside of the narrow scope of their study sample or their artificial society. We are not predicting the future, and wouldn’t claim we could. However, this is wildly different from how ‘prediction’ is commonly understood: as an estimate of some unknown thing in the future. Now that our models quickly disseminate to the general public, we need to be careful with the way we talk about their outcomes.

Predictions in the COVID-19 crisis will remain imperfect. In the current virus outbreak, society cannot afford to wait for models of interventions to be falsified against empirical data. As the virus continues to spread rapidly, our only option is to rely on models as a basis for policy, ceteris paribus. And it is precisely here – at ‘ceteris paribus’ – that the terminology of ‘prediction’ misses the mark. All things will not be equal tomorrow, the next day, or the day after that (Van Bavel et al. [2020] note numerous topics that affect managing the COVID-19 pandemic and its impact on society). Policies around the globe are constantly being tweaked, and people’s behaviour changes dramatically as a consequence (Google, 2020). Relying too much on predictions may give a false sense of security.

We propose to avoid using the word ‘prediction’ too much and talk about scenarios or expectations instead where possible. We identify four reasons why you should avoid talking about prediction right now:

  1. Not everyone is acquainted with noise and emergence. Computational social scientists generally understand the effects of noise in social systems (Squazzoni et al., 2020: paragraph 1.8). Small behavioural irregularities can be reinforced in complex systems, fostering unexpected outcomes. Yet scientists not accustomed to studying complex social systems may be unfamiliar with the principles we have internalized by now, and may place too much confidence in the median outputs of volatile models that enter the scientific sphere as predictions.
  2. Predictions do not convey uncertainty. The general public is usually unacquainted with esoteric academic concepts. For instance, a flatten-the-curve scenario generally builds upon a mean or median approximation, oftentimes neglecting the variability across different scenarios. Still, there are numerous other possible outcomes, building on different parameter values. We fear that when a prediction is stated to a non-specialist public, they will expect it to occur for certain. If we forecast a sunny day but there is rain, people are upset. Talking about scenarios, expectations, and mechanisms may prevent confusion and opposition when the forecast does not occur.
  3. It’s a model, not reality. The previous argument feeds into the third notion: be honest about what you model. A model is a model. Even the most richly calibrated model is a model. That is not to say that such models are not informative (we reiterate: models are not a shot in the dark). Still, richly calibrated models based on poor data may be more misleading than less calibrated models (Elsenbroich & Badham, 2020). Empirically calibrated models may provide more confidence at face value, but it lies in the nature of complex systems that small measurement errors in the input data may lead to big deviations in outputs. Models present a scenario for our theoretical reasoning with a given set of parameter values. We can update a model with empirical data to increase reliability, but it remains a scenario about a future state given an (often expansive) set of assumptions (recently beautifully visualized by Koerth, Bronner, & Mithani, 2020).
  4. Stop predicting, start communicating. Communication is pivotal during a crisis. An abundance of research shows that communicating clearly and honestly is best practice during a crisis, generally comforting the general public (e.g., Seeger, 2006). Squazzoni et al. (2020) call for transparent communication by stating that “[t]he limitations of models and the policy recommendations derived from them have to be openly communicated and transparently addressed”. We are united in our aim to avert the COVID-19 crisis but should be careful that overconfidence does not erode society’s trust in science. Stating unequivocally that we hope – based on expectations – to avert a crisis by implementing some policy does not preclude altering our course of action when an updated scenario about the future requires us to do so. Modellers should communicate clearly to policy-makers and the general public that this is the role of computational models that are being updated daily.

Squazzoni et al. (2020) set out the agenda for our community in the coming months and it is an important one. Let’s hope that the expectations from the scenarios in our well-informed models will not fall on deaf ears.

References

Edmonds, B., Polhill, G., & Hales, D. (2019). Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H., & Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html> doi: 10.18564/jasss.3993

Elsenbroich, C., & Badham, J. (2020). Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020.
https://rofasss.org/2020/04/12/focussing-on-our-strengths/

Google. (2020). COVID-19 Mobility Reports. https://www.google.com/covid19/mobility/ (Accessed 15th April 2020)

Hofman, J. M., Sharma, A., & Watts, D. J. (2017). Prediction and Explanation in Social Systems. Science, 355, 486–488. doi: 10.1126/science.aal3856

Koerth, M., Bronner, L., & Mithani, J. (2020, March 31). Why It’s So Freaking Hard To Make A Good COVID-19 Model. FiveThirtyEight. https://fivethirtyeight.com/

Salganik, M. J. et al. (2020). Measuring the Predictability of Life Outcomes with a Scientific Mass Collaboration. PNAS. 201915006. doi: 10.1073/pnas.1915006117

Seeger, M. W. (2006). Best Practices in Crisis Communication: An Expert Panel Process, Journal of Applied Communication Research, 34(3), 232-244.  doi: 10.1080/00909880600769944

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Van Bavel, J. J. et al. (2020). Using social and behavioural science to support COVID-19 pandemic response. PsyArXiv. https://doi.org/10.31234/osf.io/y38m9

Wynants. L., et al. (2020). Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal. BMJ, 369, m1328. doi: 10.1136/bmj.m1328


de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Call for responses to the JASSS Covid19 position paper

In the recent position paper in JASSS, entitled “Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action”, the authors suggest some collective actions that we, as social simulators, could take.

We are asking for submissions that present serious comments on this paper. These could include:

  • To discuss other points of view
  • To talk about possible modelling approaches
  • To review simulation modelling of COVID-19 that includes social aspects
  • To point out some of the difficulties of interpretation and the interface with the policy/political world
  • To discuss or suggest other possible collective actions that could be taken.

All such contributions will form the: JASSS-Covid19-Thread


Predicting Social Systems – a Challenge

By Bruce Edmonds, Gary Polhill and David Hales

(Part of the Prediction-Thread)

There is a lot of pressure on social scientists to predict. Not only is an ability to predict implicit in all requests to assess or optimise policy options before they are tried, but prediction is also the “gold standard” of science. However, there is a debate among modellers of complex social systems about whether this is possible to any meaningful extent. In this context, the aim of this paper is to issue the following challenge:

Are there any documented examples of models that predict useful aspects of complex social systems?

To do this the paper will:

  1. define prediction in a way that corresponds to what a wider audience might expect of it
  2. give some illustrative examples of prediction and non-prediction
  3. request examples where the successful prediction of social systems is claimed
  4. and outline the aspects on which these examples will be analysed

About Prediction

We start by defining prediction, taking the definition from Edmonds et al. (2019). This is a pragmatic definition designed to encapsulate common sense usage – what a wider public (e.g. policy makers or grant givers) might reasonably expect from “a prediction”.

By ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model.

Let us clarify the language in this.

  • It has to be reliable. That is, one can rely upon a prediction at the time it is made – a model that predicts erratically and only occasionally predicts correctly is no help, since one does not know whether to believe any particular prediction. This usually means that (a) it has made successful predictions for several independent cases and (b) the conditions under which it works are (roughly) known.
  • What is predicted has to be unknown at the time of prediction. That is, the prediction has to be made before it is verified. Predicting known data (as when a model is checked on out-of-sample data) is not sufficient [1]. Nor is the practice of looking for phenomena that are consistent with the results of a model after they have been generated (because this process ignores all the phenomena that are not consistent with the model).
  • What is being predicted is well defined. That is, it is clear how to use the model to make a prediction about observed data. An abstract model that is merely suggestive – one that appears to predict phenomena, but only in a vague and undefined manner where one has to invent the mapping between model and data to make it work – may be useful as a way of thinking about phenomena, but this is different from empirical prediction.
  • Which aspects of the data are predicted is left open. As Watts (2014) points out, prediction is not restricted to point numerical predictions of some measurable value but could concern a wider pattern. Examples include: a probabilistic prediction, a range of values, a negative prediction (this will not happen), or a second-order characteristic (such as the shape of a distribution or a correlation between variables). What is important is that (a) this is a useful characteristic to predict and (b) it can be checked by an independent actor (a minimal example of such a check follows this list). Thus, for example, when predicting a value, the accuracy required of that prediction depends on its use.
  • The prediction has to use the model in an essential manner. Claiming to predict something that is obviously inevitable, in a way that does not use the model, is insufficient – the model has to distinguish which of the possible outcomes is being predicted at the time.
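
As a minimal illustration of what such an independent check could look like for a probabilistic prediction, the Python sketch below computes a Brier score against a naive base-rate baseline. The forecasts and outcomes are made-up numbers, not taken from any real model.

```python
import numpy as np

# Hypothetical probabilistic predictions (e.g. "70% chance the event occurs")
# made before the outcomes were known, and the outcomes observed afterwards.
forecast_probs = np.array([0.7, 0.2, 0.9, 0.4, 0.6])
outcomes = np.array([1, 0, 1, 0, 1])  # 1 = event happened, 0 = it did not

brier = np.mean((forecast_probs - outcomes) ** 2)
baseline = np.mean((outcomes.mean() - outcomes) ** 2)  # naive forecast: the base rate only

print(f"Brier score: {brier:.3f} (lower is better; naive baseline {baseline:.3f})")
```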

Thus, prediction is different from other kinds of scientific/empirical uses, such as description and explanation (Edmonds et al. 2019). Some modellers use "prediction" to mean any output from a model, regardless of its relationship to any observation of what is being modelled [2]. Others use "prediction" for any empirical fitting of data, regardless of whether that data was known beforehand. Here, however, we wish to be clearer and avoid any "post-truth" softening of the meaning of the word, for two reasons: (a) distinguishing different kinds of model use is crucial in matters of model checking or validation, and (b) these "softer" kinds of empirical purpose will simply confuse the wider public when we talk to them about "prediction". One suspects that modellers have accepted these other meanings because it then allows them to claim they can predict (Edmonds 2017).

Some Examples

Nate Silver and his team aim to predict future social phenomena, such as the results of elections and the outcomes of sports competitions. He correctly predicted the outcomes in all 50 states of the US Electoral College in Obama’s re-election before it happened. This is a data-hungry approach, which involves the long-term development of simulations that carefully establish what can be inferred from the available data, with repeated trial and error. The forecasts are probabilistic and repeated many times. As well as making predictions, his unit tries to establish the level of uncertainty in those predictions – being honest about the probability of those predictions coming about given the likely levels of error and bias in the data. These models are not agent-based but mostly statistical in nature; it is thus debatable whether they treat elections as complex systems – they certainly do not use any theory from complexity science. His book (Silver 2012) describes his approach. Post hoc analysis of predictions – explaining why they worked or not – is kept distinct from the predictive models themselves: this analysis may inform changes to the predictive model but is not then incorporated into the model. The analysis is thus kept independent of the predictive model so it can be an effective check.

Many models in economics and ecology claim to “predict” but, on inspection, this only means there is a fit to some empirical data. For example, Meese & Rogoff (1983) looked at 40 econometric models whose authors claimed to be predicting some time series. However, 37 out of the 40 models failed completely when tested on newly available data from the same time series that they claimed to predict. Clearly, although presented as predictive models, they could not predict unknown data. Although we do not know for sure, presumably what happened was that these models had been (explicitly or implicitly) fitted to the out-of-sample data, because the out-of-sample data was already known to the modeller. That is, if the model failed to fit the out-of-sample data when it was tested, it was then adjusted until it did work, or alternatively, only those models that fitted the out-of-sample data were published.
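
To make the distinction concrete, here is a purely illustrative Python sketch (with made-up data) contrasting in-sample fit with genuine out-of-sample forecasting: a model flexible enough to fit the known part of a series very closely can still fail badly on the part that was unknown when it was fitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up "economic" time series: trend plus cycle plus noise.
t = np.arange(60)
series = 0.05 * t + np.sin(t / 6.0) + rng.normal(0, 0.3, t.size)

# Pretend only the first 48 points were known when the models were built.
x = t / t.max()                      # rescaled time keeps the polynomial fits well conditioned
x_train, x_test = x[:48], x[48:]
y_train, y_test = series[:48], series[48:]

def fit_and_forecast(degree):
    """Fit a polynomial in-sample, then forecast the genuinely unknown points."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse_in = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    mse_out = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return mse_in, mse_out

for degree in (1, 3, 10):
    mse_in, mse_out = fit_and_forecast(degree)
    print(f"degree {degree:2d}: in-sample MSE {mse_in:.3f}, out-of-sample MSE {mse_out:.3f}")
# Typically the most flexible fit scores best in-sample and worst on the unknown data.
```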

The Challenge

The challenge is envisioned as happening like this.

  1. We publicise this paper, requesting that people send us examples of prediction or near-prediction of complex social systems, with pointers to the appropriate documentation.
  2. We collect these and analyse them according to the characteristics and questions described below.
  3. We will post some interim results in January 2020 [3], in order to prompt more examples and to stimulate discussion. The final deadline for examples is the end of March 2020.
  4. We will publish the list of all the examples sent to us on the web, and present our summary and conclusions at Social Simulation 2020 in Milan and have a discussion there about the nature and prospects for the prediction of complex social systems. Anyone who contributed an example will be invited to be a co-author if they wish to be so-named.

How suggestions will be judged

For each suggestion, a number of answers will be sought – namely to the following questions:

  • What are the papers or documents that describe the model?
  • Is there an explicit claim that the model can predict (as opposed to might be able to in the future)?
  • What kind of characteristics are being predicted (number, probabilistic, range…)?
  • Is there evidence of a prediction being made before the prediction was verified?
  • Is there evidence of the model being used for a series of independent predictions?
  • Were any of the predictions verified by a team that is independent of the one that made the prediction?
  • Is there evidence of the same team or similar models making failed predictions?
  • To what extent did the model need extensive calibration/adjustment before the prediction?
  • What role does theory play (if any) in the model?
  • Are the conditions under which predictive ability is claimed described?

Of course, negative answers to any of the above about a particular model do not mean that the model cannot predict. What we are assessing is the evidence that a model can predict something meaningful about complex social systems. Silver (2012) describes the method by which his team attempts prediction, but this method might be different from that described in most theory-based academic papers.

Possible Outcomes

This exercise might shed some light of some interesting questions, such as:

  • What kind of prediction of complex social systems has been attempted?
  • Are there any examples where the reliable prediction of complex social systems has been achieved?
  • Are there certain kinds of social phenomena which seem to be more amenable to prediction than others?
  • Does aiming to predict with a model entail any difference in method compared with projects with other aims?
  • Are there any commonalities among the projects that achieve reliable prediction?
  • Is there anything we could (collectively) do that would encourage or document good prediction?

It may well be that whether prediction is achievable depends on exactly what is meant by the word.

Acknowledgements

This paper resulted from a “lively discussion” after Gary’s (Polhill et al. 2019) talk about prediction at the Social Simulation conference in Mainz. Many thanks to all those who joined in. Of course, prior to this we have had many discussions about prediction. These have included Gary’s previous attempt at a prediction competition (Polhill 2018) and Scott Moss’s arguments about prediction in economics (which have many parallels with the debate here).

Notes

[1] This is sufficient for other empirical purposes, such as explanation (Edmonds et al. 2019)

[2] Confusingly, they sometimes use the word “forecasting” for what we mean by prediction here.

[3] Assuming we have any submitted examples to talk about

References

Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidson, P. & Verhargen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1 (see also http://cfpm.org/discussionpapers/236)

Edmonds, B. (2017) The post-truth drift in social simulation. Social Simulation Conference (SSC2017), Dublin, Ireland. (http://cfpm.org/discussionpapers/195)

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. & Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Grimm V, Revilla E, Berger U, Jeltsch F, Mooij WM, Railsback SF, Thulke H-H, Weiner J, Wiegand T, DeAngelis DL (2005) Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science 310: 987-991.

Meese, R.A. & Rogoff, K. (1983) Empirical Exchange Rate models of the Seventies – do they fit out of sample? Journal of International Economics, 14:3-24.

Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/

Polhill, G., Hare, H., Anzola, D., Bauermann, T., French, T., Post, H. and Salt, D. (2019) Using ABMs for prediction: Two thought experiments and a workshop. Social Simulation 2019, Mainz.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Thorngate, W. & Edmonds, B. (2013) Measuring simulation-observation fit: An introduction to ordinal pattern analysis. Journal of Artificial Societies and Social Simulation, 16(2):14. http://jasss.soc.surrey.ac.uk/16/2/4.html

Watts, D. J. (2014). Common Sense and Sociological Explanations. American Journal of Sociology, 120(2), 313-351.


Edmonds, B., Polhill, G and Hales, D. (2019) Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Why the social simulation community should tackle prediction

By Gary Polhill

(Part of the Prediction-Thread)

On 4 May 2002, Scott Moss (2002) reported in the Proceedings of the National Academy of Sciences of the United States of America that he had recently approached the e-mail discussion list of the International Institute of Forecasters to ask whether anyone had an example of a correct econometric forecast of an extreme event. None of the respondents were able to provide a satisfactory answer.

As reported by Hassan et al. (2013), on 28 April 2009, Scott Moss asked a similar question of the members of the SIMSOC mailing list: “Does anyone know of a correct, real-time, model-based policy-impact forecast?” [1] No-one responded with such an example, and Hassan et al. note that the ensuing discussion questioned why we are bothering with agent-based models (ABMs). Papers such as Epstein’s (2008) suggest this is not an uncommon conversation.

On 23 March 2018, I wrote an email [2] to the SIMSOC mailing list asking for expressions of interest in a prediction competition to be held at the Social Simulation Conference in Stockholm in 2018. I received two such expressions, and consequently announced on 10 May 2018 that the competition would go ahead. [3] By 22 May 2018, however, one of the two had pulled out because of lack of data, and I contacted the list to say the competition would be replaced with a workshop. [4]

Why the problem with prediction? As Edmonds (2017), discussing different modelling purposes, says, prediction is extremely challenging in the type of complex social system to which an agent-based model would justifiably be applied. He doesn’t go as far as stating that prediction is impossible; but with Aodha (2017, p. 819) he says, in the final chapter of the same book, that modellers should “stop using the word predict” and policymakers should “stop expecting the word predict”. At a minimum, this suggests a strong aversion to prediction within the social simulation community.

Nagel (1979) gives attention to why prediction is hard in the social sciences. Not least amongst the reasons offered is the fact that social systems may adapt according to predictions made – whether those predictions are right or wrong. Nagel gives two examples of this: suicidal predictions are those in which a predicted event does not happen because steps are taken to avert the predicted event; self-fulfilling prophecies are events that occur largely because they have been predicted, but arguably would not have occurred otherwise.

The advent of empirical ABM, as hailed by Janssen and Ostrom’s (2006) editorial introduction to a special issue of Ecology and Society on the subject, naturally raises the question of using ABMs to make predictions, at least insofar as “predict” in this context means using an ABM to generate new knowledge about the empirical world that can be tested by observing it. There are various reasons why developing ABMs with the purpose of prediction is a goal worth pursuing. Three of them are:

  • Developing predictions, Edmonds (2017) notes, is an iterative process, requiring testing and adapting a model against various data. Engaging with such a process with ABMs offers vital opportunities to learn and develop methodology, not least on the collection and use of data in ABMs, but also in areas such as model design, calibration, validation and sensitivity analysis. We should expect, or at least be prepared for, our predictions to fail often. Then, the value is in what we learn from these failures, both about the systems we are modelling, and about the approach taken.
  • There is undeniably a demand for predictions in complex social systems. That demand will not go away just because a small group of people claim that prediction is impossible. A key question is how we want that demand to be met. Presumably at least some of the people engaged in empirical ABM have chosen an agent-based approach over simpler, more established alternatives because they believe ABMs to be sufficiently better to be worth the extra effort of their development. We don’t know whether ABMs can be better at prediction, but such knowledge would at least be useful.
  • Edmonds (2017) says that predictions should be reliable and useful. Reliability pertains both to having a reasonable comprehension of the conditions of application of the model, and to the predictions being consistently right when the conditions apply. Usefulness means that the knowledge the prediction supplies is of value with respect to its accuracy. For example, a weather forecast stating that tomorrow the mean temperature on the Earth’s surface will be between –100 and +100 Celsius is not especially useful (at least to its inhabitants). However, a more general point is that we are accustomed to predictions being phrased in particular ways because of the methods used to generate them. Attempting prediction using ABM may lead to a situation in which we develop different language around prediction, which in turn could have added benefits: (a) gaining a better understanding of what ABM offers that other approaches do not; (b) managing the expectations of those who demand predictions regarding what predictions should look like.

Prediction is not the only reason to engage in a modelling exercise. However, in future if the social simulation community is asked for an example of a correct prediction of an ABM, it would be desirable to be able to point to a body of research and methodology that has been developed as a result of trying to achieve this aim, and ideally to be able to supply a number of examples of success. This would be better than a fraught conversation about the point of modelling, and consequent attempts to divert attention to all of the other reasons to build an ABM that aren’t to do with prediction. To this end, it would be good if the social simulation community embraced the challenge, and provided a supportive environment to those with the courage to take it on.

Notes

  1. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;fb704db4.0904 (Cited in Hassan et al. (2013))
  2. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;14ecabbf.1803
  3. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SIMSOC;1802c445.1805
  4. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;ffe62b05.1805

References

ní Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 801-822.

Edmonds, B. (2017) Different modelling purposes. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 39-58.

Epstein, J. (2008) Why model? Journal of Artificial Societies and Social Simulation 11 (4), 12. http://jasss.soc.surrey.ac.uk/11/4/12.html

Hassan, S., Arroyo, J., Galán, J. M., Antunes, L. and Pavón, J. (2013) Asking the oracle: introducing forecasting principles into agent-based modelling. Journal of Artificial Societies and Social Simulation 16 (3), 13. http://jasss.soc.surrey.ac.uk/16/3/13.html

Janssen, M. A. and Ostrom, E. (2006) Empirically based, agent-based models. Ecology and Society 11 (2), 37. http://www.ecologyandsociety.org/vol11/iss2/art37/

Moss, S. (2002) Policy analysis from first principles. Proceedings of the National Academy of Sciences of the United States of America 99 (suppl. 3), 7267-7274. http://doi.org/10.1073/pnas.092080699

Nagel, E. (1979) The Structure of Science: Problems in the Logic of Scientific Explanation. Hackett Publishing Company.


Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)