
Today We Have Naming Of Parts: A Possible Way Out Of Some Terminological Problems With ABM

By Edmund Chattoe-Brown


Today we have naming of parts. Yesterday,
We had daily cleaning. And tomorrow morning,
We shall have what to do after firing. But to-day,
Today we have naming of parts. Japonica
Glistens like coral in all of the neighbouring gardens,
And today we have naming of parts.
(Naming of Parts, Henry Reed, 1942)

It is not difficult to establish by casual reading that there are almost as many ways of using crucial terms like calibration and validation in ABM as there are actual instances of their use. This creates several damaging problems for scientific progress in the field. Firstly, when two different researchers both say they “validated” their ABMs they may mean different specific scientific activities. This makes it hard for readers to evaluate research generally, particularly if researchers assume that it is obvious what their terms mean (rather than explaining explicitly what they did in their analysis). Secondly, based on this, each researcher may feel that the other has not really validated their ABM but has instead done something to which a different name should more properly be given. This compounds the possible confusion in debate. Thirdly, there is a danger that researchers may rhetorically favour (perhaps unconsciously) uses that, for example, make their research sound more robustly empirical than it actually is. For example, validation is sometimes used to mean consistency with stylised facts (rather than, say, correspondence with a specific time series according to some formal measure). But we often have no way of telling what the status of the presented stylised facts is. Are they an effective summary of what is known in a field? Are they the facts on which most researchers agree or for which the available data presents the clearest picture? (Less reputably, can readers be confident that they were not selected for presentation simply because they happen to match the model’s output?) Fourthly, because these terms are used differently by different researchers, it is possible that valuable scientific activities that “should” have agreed labels will “slip down the terminological cracks” (either for the individual or for the ABM community generally). Apart from avoiding confusion for others, clear labels may help to avoid confusion for you too!

But apart from these problems (and there may be others but these are not the main thrust of my argument here) there is also a potential impasse. There simply doesn’t seem to be any value in arguing about what the “correct” meaning of validation (for example) should be. Because these are merely labels there is no objective way to resolve this issue. Further, even if we undertook to agree the terminology collectively, each individual would tend to argue for their own interpretation without solid grounds (because there are none to be had) and any collective decision would probably therefore be unenforceable. If we decide to invent arbitrary new terminology from scratch we not only run the risk of adding to the existing confusion of terms (rather than reducing it) but it is also quite likely that everyone will find the new terms unhelpful.

Unfortunately, however, we probably cannot do without labels for these scientific activities involved in quality controlling ABMs. If we had to describe everything we did without any technical shorthand, presenting research might well become impossibly unwieldy.

My proposed solution is therefore to invent terms from scratch (so we don’t end up arguing about our different customary usages to no purpose) but to do so on the basis of actual scientific practices reported in published research. For example, we might call the comparison of corresponding real and simulated data (a practice which the much used Gilbert and Troitzsch (2005, pp. 15-19) at least endorse referring to as validation) CORAS – Comparison Of Real And Simulated. Similarly, assigning values to parameters given the assumptions of model “structures” might be called PANV – Parameters Assigned Numerical Values.

It is very important to be clear what the intention is here. Naming cannot solve scientific problems or disagreements. (Indeed, failure to grasp this may well be why our terminology is currently so muddled as people try to get their different positions through “on the nod”.) For example, if we do not believe that correspondence with stylised facts and comparison measures on time series have equivalent scientific status then we will have to agree distinct labels for them and have the debate about their respective value separately. Perhaps the former could be called COSF – Comparison Of Stylised Facts. But it seems plainly easier to describe specific scientific activities accurately and then find labels for them than to have to wade through the existing marsh of ambiguous terminology and try to extract the associated science. An example of a practice which does not seem to have even one generally agreed label (and therefore seems to be neglected in ABM as a practice) is JAMS – Justifying A Model Structure. (Why are your agents adaptive rather than habitual or rational? Why do they mix randomly rather than in social networks?)

Obviously, there still needs to be community agreement for such a convention to be useful (and this may need to be backed institutionally, for example by reviewing requirements). But the logic of the approach avoids several existing problems. Firstly, while the labels are useful shorthand, they are not arbitrary. Each can be traced back to a clearly definable scientific practice. Secondly, this approach steers a course between the Scylla of fruitless arguments from current muddled usage and the Charybdis of a novel set of terminology that is equally unhelpful to everybody. (Even if people cannot agree on labels, they know how they built and evaluated their ABMs, so they can choose – or create – new labels accordingly.) Thirdly, the proposed logic is extendable. As we clarify our thinking, we can use it to label (or improve the labels of) any current set of scientific practices. We do not have to worry that we will run out of plausible words in everyday usage.

Below I suggest some more scientific practices and possible terms for them. (You will see that I have also tried to make the terms as pronounceable and distinct as possible.)

Practice: Checking the results of an ABM by building another.[1]
Term: CAMWA (Checking A Model With Another).

Practice: Checking that ABM code behaves as intended (for example by debugging procedures, destructive testing using extreme values and so on).
Term: TAMAD (Testing A Model Against Description).

Practice: Justifying the structure of the environment in which agents act.
Term: JEM (Justifying the Environment of a Model). This is again a process that typically passes unnoticed in ABM. For example, by assuming that agents only consider ethnic composition, the Schelling Model (Schelling 1969, 1971) does not “allow” locations to be desirable because, for example, they are near good schools. This contradicts what was known empirically well before (see, for example, Rossi 1955), and it is not clear whether simply saying that your interest is in an “abstract” model can justify this level of empirical neglect.

Practice: Finding out what effect parameter values have on ABM behaviour.
Term: EVOPE (Exploring Value Of Parameter Effects).

Practice: Exploring the sensitivity of an ABM to structural assumptions not justified empirically (see Chattoe-Brown 2021).
Term: ESOSA (Exploring the Sensitivity Of Structural Assumptions).

Clearly this list is incomplete, but I think it would be more effective if characterising the scientific practices in existing ABM and naming them distinctively were a collective enterprise.

Acknowledgements

This research is funded by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1), which is supported by the ESRC via ORA Round 5 (PI: Professor Bruce Edmonds, Centre for Policy Modelling, Manchester Metropolitan University: https://gtr.ukri.org/projects?ref=ES%2FS015159%2F1).

Notes

[1] It is likely that we will have to invent terms for subcategories of practices which differ in their aims or warranted conclusions: for example, rerunning the code of the original author (CAMWOC – Checking A Model With Original Code), building a new ABM from a formal description like ODD (CAMUS – Checking A Model Using Specification), and building a new ABM from the published description (CAMAP – Checking A Model As Published, see Chattoe-Brown et al. 2021).

References

Chattoe-Brown, Edmund (2021) ‘Why Questions Like “Do Networks Matter?” Matter to Methodology: How Agent-Based Modelling Makes It Possible to Answer Them’, International Journal of Social Research Methodology, 24(4), pp. 429-442. doi:10.1080/13645579.2020.1801602

Chattoe-Brown, Edmund, Gilbert, Nigel, Robertson, Duncan A. and Watts, Christopher (2021) ‘Reproduction as a Means of Evaluating Policy Models: A Case Study of a COVID-19 Simulation’, medRxiv, 23 February. doi:10.1101/2021.01.29.21250743

Gilbert, Nigel and Troitzsch, Klaus G. (2005) Simulation for the Social Scientist, second edition (Maidenhead: Open University Press).

Rossi, Peter H. (1955) Why Families Move: A Study in the Social Psychology of Urban Residential Mobility (Glencoe, IL: Free Press).

Schelling, Thomas C. (1969) ‘Models of Segregation’, American Economic Review, 59(2), May, pp. 488-493. (available at https://www.jstor.org/stable/1823701)

Schelling, Thomas C. (1971) ‘Dynamic Models of Segregation’, Journal of Mathematical Sociology, 1(2), pp. 143-186.


Chattoe-Brown, E. (2022) Today We Have Naming Of Parts: A Possible Way Out Of Some Terminological Problems With ABM. Review of Artificial Societies and Social Simulation, 11th January 2022. https://rofasss.org/2022/01/11/naming-of-parts/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Don’t try to predict COVID-19. If you must, use Deep Uncertainty methods

By Patrick Steinmann[a], Jason R. Wang[b], George A. K. van Voorn[a], and Jan H. Kwakkel[b]

[a] Biometris, Wageningen University & Research, Wageningen, the Netherlands; [b] Delft University of Technology, Faculty of Technology, Policy & Management, Delft, the Netherlands

(A contribution to the JASSS-Covid19-Thread)

Abstract

We respond to the recent JASSS article on COVID-19 and computational modelling. We disagree with the authors on one major point and note the lack of discussion of a second one. We believe that COVID-19 cannot be predicted numerically, and attempting to make decisions based on such predictions will cost human lives. Furthermore, we note that the original article only briefly comments on uncertainty. We urge those attempting to model COVID-19 for decision support to acknowledge the deep uncertainties surrounding the pandemic, and to employ Decision Making under Deep Uncertainty methods such as exploratory modelling, global sensitivity analysis, and robust decision-making in their analysis to account for these uncertainties.

Introduction

We read the recent article in the Journal of Artificial Societies and Social Simulation on predictive COVID-19 modelling (Squazzoni et al. 2020) with great interest. We agree with the authors on many general points, such as the need for rigorous and transparent modelling and documentation. However, we were dismayed that the authors focused solely on how to make predictive simulation models of COVID-19 without first discussing whether making such models is appropriate under the current circumstances. We believe this question is of greater importance, and that the answer will likely disappoint many in the community. We also note that the original piece does not engage substantively with methods of modelling and model analysis specifically designed for making time-critical decisions under uncertainty.

We respond to the call issued by the Review of Artificial Societies and Social Simulation for responses and opinions on predictive modelling of COVID-19. In doing so, we go above and beyond the recent RofASSS contribution by de Matos Fernandes & Keijzer (2020)—rather than saying that definite “predictions” should be replaced by probabilistic “expectations”, we contend that no probabilities whatsoever should be applied when modelling systems as uncertain as a global pandemic. This is presented in the first section. In the second section, we discuss how those with legitimate need for predictive epidemic modelling should approach their task, and which tools might be beneficial in the current context. In the last section, we summarize our opinions and issue our own challenges to the community.

To Model or Not to Model COVID-19, That Is the Question

The recent call attempts to lay out a path for using simulation modelling to forecast the COVID-19 epidemic. However, there is no critical reflection on the question of whether modelling is the appropriate tool for this, under the current circumstances. The authors argue that with sufficient methodological rigour, high-quality data and interdisciplinary collaboration, complex outcomes (such as the COVID-19 epidemic) can be predicted well and quickly enough to provide actionable decision support.

Computational modelling is difficult in the best of times. Even models with a seemingly simple structure can have emergent behaviour rendering them perfectly random (Wolfram 1983) or Turing complete (Cook 2004). Drawing any kind of conclusion from a simulation model, especially in the life-and-death context of pandemic decision making, must be done carefully and with respect for uncertainty. If, for whatever reason, this cannot be done, then modelling is not the right tool to answer the question at hand (Thompson & Smith 2019). The numerical nature of models is seductive, but they must be employed wisely to avoid “useless arithmetic” (Pilkey-Jarvis & Pilkey 2008) or statistical fallacies (Benessia et al. 2016).

Trying to skilfully predict how the COVID-19 outbreak will evolve regionally or globally is a fool’s errand. Epistemic uncertainties about key parameters and processes describing the disease abound. Human behaviour is changing in response to the outbreak. Research and development burgeon in many sciences with presently unknowable results. Anyone claiming to know where the world will be in even a few weeks is at best delusional. Uncertainty is aggravated by the problem of equifinality (Oreskes et al. 1994): for any simulation model of COVID-19, there will be many different parametrizations that fit the available data similarly well. Much of this is acknowledged by Squazzoni et al. (2020), yet inexplicably they still call for developing probabilistic forecasts of the outbreak using empirically validated models. We instead contend that “about these matters, there is no scientific basis on which to form any calculable probability” (Keynes 1937), and that validation should be based on usefulness in aiding time-urgent decision-making, rather than predictive accuracy (Pielke 2003). However, the capacity for such policy-oriented modelling must be built between pandemics, not during them (Rivers et al. 2019).

This call to abstain from predicting COVID-19 does not imply that the broader community should refrain from modelling completely. The illustrative power of simple models has been amply demonstrated in various media outlets. We do urge modellers not to frame their work as predictive (e.g. “How Pandemics Can End”, rather than “How COVID-19 Will End”), and to use watermarks where possible to indicate that the shown work is not predictive. There is also ample opportunity to use simulation modelling to solve ancillary problems. For example, established transport and logistics models could be adapted to ensure supply of critical healthcare equipment is timely and efficient. Similarly, agri-food models could explore how to secure food production and distribution under labour shortages. These can be vital, though less sensational, contributions of simulation modelling to the ongoing crisis.

Deep Uncertainty: How to Predict COVID-19, if(f) You Must

Deep Uncertainty (Lempert et al. 2003) is present when analysts cannot know, or stakeholders cannot agree on:

  1. The probability distributions relevant to unknown system variables,
  2. The relations and mechanisms present in the system, and/or
  3. The metrics by which future system states should be evaluated.

All three conditions are present in the case of the COVID-19 pandemic. To give a brief example of each, we know very little about asymptomatic infections, whether a vaccine will ever become available, and whether the socio-psychological and economic impacts of a “flattened curve” future are bearable (and by whom). The field of Decision Making under Deep Uncertainty has been working on problems of a similar nature for many years already, and developed a variety of tools to analyse such problems (Marchau et al. 2019). These methods may be beneficial for designing COVID-19 policies with simulation models—if, as discussed previously, this is appropriate. In the following, we present three such methods and their potential value for COVID-19 decision support: exploratory modelling, global sensitivity analysis, and robust decision-making.

Exploratory modelling (Bankes 1993) is a conceptual approach to using simulation models for policy analysis. It emerged as a response to the question of how models that cannot be empirically validated can still be used to inform planning and decision-making (Hodges 1991, Hodges & Dewar 1992). Instead of consolidating increasing amounts of knowledge into “the” model of a system, exploratory modelling advocates using wide uncertainty ranges for unknown parameters to generate a large ensemble of plausible futures, with no predictive or probabilistic power attached or implied a priori (Shortridge & Zaitchik 2018). This ensemble may represent a variety of assumptions, theories, and system structures. It could even be generated using a multitude of models (Page 2018; Smaldino 2017) and metrics (Manheim 2018). By reasoning across such an ensemble, insights agnostic to specific assumptions may be reached, sidestepping the a priori biases inherent in only examining a limited set of scenarios, as the COVID-19 policy models observed by the authors do. Reasoning across such limited sets obscures policy-relevant futures which emerge as hybrids of pre-specified positive and negative narratives (Lamontagne et al. 2018). In the context of the COVID-19 pandemic, exploratory modelling could be used to contrast a variety of assumptions about disease transmission mechanisms (e.g., the role of schools, children, or asymptomatic cases in the speed of the outbreak), reinfection potential, or adherence to social distancing norms. Many ESSA members are already familiar with such methods—NetLogo’s BehaviorSpace function is a prime example. The Exploratory Modelling & Analysis Workbench (Kwakkel 2017) provides similar, platform-agnostic functionality by means of a Python interface. We encourage all modellers to embrace such tools, and to be honest about which parameters and structural assumptions are uncertain, how uncertain they are, and how this affects the inferences that can and cannot be made based on the results from the model.
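As a concrete illustration, below is a minimal sketch of this workflow using the Python interface of the EMA Workbench mentioned above (the ema_workbench package). The toy SIR function, its parameter ranges and the outcome names are our own illustrative assumptions, standing in for whichever COVID-19 model is actually being explored; they are not taken from the article under discussion.

```python
# A minimal sketch of exploratory modelling with the EMA Workbench
# (pip install ema_workbench). The toy SIR function and the uncertainty
# ranges below are illustrative assumptions only.
from ema_workbench import Model, RealParameter, ScalarOutcome, perform_experiments


def toy_sir(beta=0.3, gamma=0.1, i0=1e-4, days=180):
    """Deliberately simple SIR dynamics with a daily Euler step."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return {"peak_infected": peak, "final_recovered": r}


model = Model("toySIR", function=toy_sir)
model.uncertainties = [
    RealParameter("beta", 0.1, 0.6),    # transmission rate: deeply uncertain
    RealParameter("gamma", 0.05, 0.2),  # recovery rate: deeply uncertain
    RealParameter("i0", 1e-5, 1e-3),    # initial infected fraction
]
model.outcomes = [ScalarOutcome("peak_infected"), ScalarOutcome("final_recovered")]

# Generate a large ensemble of plausible futures (Latin Hypercube sampling
# by default); no probabilities are attached to any individual future.
experiments, outcomes = perform_experiments(model, scenarios=1000)
```

The point is not the toy model but the framing: every run in the resulting ensemble is treated as one plausible future, and conclusions are drawn by reasoning across the whole ensemble rather than by fitting or weighting any single run.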

Global sensitivity analysis (Saltelli 2004) is a method for studying both the individual importance of uncertain parameters and their interactions in shaping the outputs of a simulation model. Many simulation modellers are already familiar with local sensitivity analysis, where parameters are varied one at a time to ascertain their individual effect on model output. This is insufficient for studying parameter interactions in non-linear systems (Saltelli et al. 2019; ten Broeke et al. 2016). In global sensitivity analysis, combinations of parameters are varied and studied simultaneously, illuminating their joint or interaction effects. This is critical for the rigorous study of complex system models, where parameters may have unexpected, non-linear interactions. In the context of the COVID-19 epidemic, we have seen at least two public health agencies perform local sensitivity analysis over small parameter ranges, which may blind decision makers to worst-case futures (Siegenfeld & Bar-Yam 2020). Global sensitivity analysis might reveal how different assumptions about, for example, the duration of Intensive Care (IC) stays and age-related case severity may interact to create a “perfect storm” of IC need. A collection of global sensitivity analysis methods has been implemented for Python in the SALib package (Herman & Usher 2018), and their use with NetLogo is illustrated in Jaxa-Rozen & Kwakkel (2018).
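To illustrate the workflow, the sketch below runs a Sobol-style global sensitivity analysis with the SALib package cited above. The parameter names, their ranges and the stand-in response function are hypothetical placeholders; in a real analysis each sampled row would be fed to the actual simulation model (for a NetLogo ABM, for instance, via PyNetLogo as in Jaxa-Rozen & Kwakkel 2018).

```python
# A minimal sketch of a Sobol global sensitivity analysis with SALib
# (pip install SALib). Parameter names, ranges and the stand-in response
# function are illustrative assumptions, not taken from any real model.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["beta", "gamma", "ic_duration"],
    "bounds": [[0.1, 0.6], [0.05, 0.2], [5.0, 20.0]],
}

# Saltelli sampling varies all parameters jointly; each row is one
# parameter combination to run through the simulation model.
param_values = saltelli.sample(problem, 1024)


def stand_in_model(x):
    """Placeholder for a real simulation run (e.g. a NetLogo ABM via PyNetLogo)."""
    beta, gamma, ic_duration = x
    return (beta / gamma) * ic_duration  # toy, interaction-bearing response


y = np.array([stand_in_model(row) for row in param_values])

# First-order (S1) and total-order (ST) Sobol indices; a large gap between
# ST and S1 for a parameter signals interaction effects that one-at-a-time
# (local) sensitivity analysis would miss.
si = sobol.analyze(problem, y)
print(si["S1"])
print(si["ST"])
```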

Robust Decision Making (RDM) (Lempert et al. 2006) is a general analytic method for designing policies which are robust across uncertainties—they perform well regardless of which future actually materializes. Policies are designed by iteratively stress-testing them across ensembles of plausible futures representing different assumptions, theories, and input parameter combinations. This represents a departure from established, probabilistic risk management approaches, which are inappropriate for fat-tailed processes such as pandemics (Norman et al. 2020). More recently, RDM has been extended to Dynamic Adaptive Policy Pathways (DAPP) (Kwakkel et al. 2015) by incorporating adaptive policies conditioned on specific triggers or signposts identified in exploratory modelling runs. In the context of the COVID-19 epidemic, DAPP might be used to design policies which can adapt as the situation develops (Hamarat et al. 2012)—possibly representing a transparent and verifiable approach to implementing the “hammer and dance” epidemic suppression method which has been widely discussed in popular media. Thinking in terms of pathways conditional on how the outbreak evolves is also a more realistic way of preparing for the dance: rather than humans setting the timeline, the virus determines it. All we can do is indicate the conditions under which certain types of actions will be taken.
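The sketch below illustrates one common robustness calculation used in RDM-style analyses, minimax regret, applied to a purely hypothetical outcome table. In a real analysis the rows would come from an exploratory ensemble such as the one sketched earlier and the columns from candidate policies or policy pathways.

```python
# A minimal sketch of a minimax-regret robustness check, one of several
# metrics used in RDM-style analyses. The outcome values are placeholders.
import numpy as np

# Outcome to be minimised (say, peak intensive-care demand) for each
# (future, policy) pair. Purely illustrative numbers.
outcomes = np.array([
    [120.0,  90.0, 100.0],   # future 1
    [300.0, 150.0, 210.0],   # future 2
    [ 80.0,  95.0,  70.0],   # future 3
    [500.0, 260.0, 400.0],   # future 4
])

# Regret: how much worse each policy does in a given future than the best
# policy available in that same future.
best_per_future = outcomes.min(axis=1, keepdims=True)
regret = outcomes - best_per_future

# A minimax-regret choice minimises the worst-case regret across futures,
# without assigning probabilities to any of them.
worst_case_regret = regret.max(axis=0)
robust_policy = int(np.argmin(worst_case_regret))

print("worst-case regret per policy:", worst_case_regret)
print("minimax-regret policy index:", robust_policy)
```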

Conclusions: Please Don’t. If You Must, Use Deep Uncertainty methods.

We have raised two points of importance which are not discussed in a recent article on COVID-19 predictive modelling in JASSS. In particular, we have proposed that the question of whether such models should be created must precede any discussion of how to do so. We found that complex outcomes such as epidemics cannot reliably be predicted using simulation models, as there are numerous uncertainties that significantly affect possible future system states. However, models may still be useful in times of crisis, if created and used appropriately. Furthermore, we have noted that there exists an entire field of study focusing on Decision Making under Deep Uncertainty, and that model analysis methods for situations like this already exist. We have briefly highlighted three methods—exploratory modelling, global sensitivity analysis, and robust decision-making—and given examples for how they might be used in the present context.

Stemming from these two points, we issue our own challenges to the ESSA modelling community and the field of systems simulation in general:

  • COVID-19 prediction distancing challenge: Do not attempt to predict the COVID-19 epidemic.
  • COVID-19 deep uncertainty challenge: If you must predict the COVID-19 epidemic, embrace deep uncertainty principles, including transparent treatment of uncertainties, exploratory modeling, global sensitivity analysis, and robust decision-making.

References

Bankes, S. (1993). Exploratory Modeling for Policy Analysis. Operations Research, 41(3), 435–449. doi: 10.1287/opre.41.3.435

Benessia, A., Funtowicz, S., Giampietro, M., Guimarães Pereira, A., Ravetz, J. R., Saltelli, A., Strand, R., & van der Sluijs, J. P. (2016). Science on the Verge. Consortium for Science, Policy & Outcomes, Tempe, AZ and Washington, DC.

Cook, M. (2004). Universality in Elementary Cellular Automata. Complex Systems, 15(1), 1–40.

de Matos Fernandes, C. A., & Keijzer, M. A. (2020). No one can predict the future: More than a semantic dispute. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Hamarat, C., Kwakkel, J., & Pruyt, E. (2012). Adaptive Policymaking under Deep Uncertainty: Optimal Preparedness for the Next Pandemic. Proceedings of the 30th International Conference of the System Dynamics Society.

Herman, J., & Usher, W. (2018). SALib: An open-source Python library for Sensitivity Analysis. Journal of Open Source Software, 2(9), 97. doi:10.21105/joss.00097

Jaxa-Rozen, M., & Kwakkel, J. H. (2018). PyNetLogo: Linking NetLogo with Python. Journal of Artificial Societies and Social Simulation, 21(2). <http://jasss.soc.surrey.ac.uk/21/2/4.html> doi:10.18564/jasss.3668

Keynes, J. M. (1937). The General Theory of Employment. The Quarterly Journal of Economics, 51(2), 209. doi:10.2307/1882087

Kwakkel, J. H. (2017). The Exploratory Modeling Workbench: An open source toolkit for exploratory modeling, scenario discovery, and (multi-objective) robust decision making. Environmental Modelling & Software, 96, 239–250. doi:10.1016/j.envsoft.2017.06.054

Kwakkel, J. H., Haasnoot, M., & Walker, W. E. (2015). Developing dynamic adaptive policy pathways: a computer-assisted approach for developing adaptive strategies for a deeply uncertain world. Climatic Change, 132(3), 373–386. doi:10.1007/s10584-014-1210-4

Lamontagne, J. R., Reed, P. M., Link, R., Calvin, K. V., Clarke, L. E., & Edmonds, J. A. (2018). Large Ensemble Analytic Framework for Consequence-Driven Discovery of Climate Change Scenarios. Earth’s Future, 6(3), 488–504. doi:10.1002/2017EF000701

Lempert, R. J., Groves, D. G., Popper, S. W., & Bankes, S. C. (2006). A general, analytic method for generating robust strategies and narrative scenarios. Management Science, 52(4), 514–528. doi:10.1287/mnsc.1050.0472

Lempert, R. J., Popper, S., & Bankes, S. (2003). Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. doi:10.7249/mr1626

Manheim, D. (2018). Building Less Flawed Metrics: Dodging Goodhart and Campbell’s Laws. In MPRA.

Marchau, V. A. W. J., Walker, W. E., Bloemen, P. J. T. M., & Popper, S. W. (Eds.). (2019). Decision Making under Deep Uncertainty. Springer International Publishing. doi:10.1007/978-3-030-05252-2

Norman, J., Bar-Yam, Y., & Taleb, N. N. (2020). Systemic Risk of Pandemic via Novel Pathogens – Coronavirus: A Note. New England Complex Systems Institute. http://arxiv.org/abs/1410.5787

Oreskes, N., Shrader-Frechette, K., & Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263(5147), 641–646. doi:10.1126/science.263.5147.641

Page, S. E. (2018). The model thinker: what you need to know to make data work for you. Hachette UK.

Pilkey-Jarvis, L., & Pilkey, O. H. (2008). Useless Arithmetic: Ten Points to Ponder When Using Mathematical Models in Environmental Decision Making. Public Administration Review, 68(3), 470–479. doi:10.1111/j.1540-6210.2008.00883_2.x

Rivers, C., Chretien, J. P., Riley, S., Pavlin, J. A., Woodward, A., Brett-Major, D., Maljkovic Berry, I., Morton, L., Jarman, R. G., Biggerstaff, M., Johansson, M. A., Reich, N. G., Meyer, D., Snyder, M. R., & Pollett, S. (2019). Using “outbreak science” to strengthen the use of models during epidemics. Nature Communications, 10(1), 9–11. doi:10.1038/s41467-019-11067-2

Saltelli, A. (2004). Global sensitivity analysis: an introduction. Proc. 4th International Conference on Sensitivity Analysis of Model Output (SAMO’04), 27–43.

Saltelli, A., Aleksankina, K., Becker, W., Fennell, P., Ferretti, F., Holst, N., Li, S., & Wu, Q. (2019). Why so many published sensitivity analyses are false: A systematic review of sensitivity analysis practices. Environmental Modelling and Software, 114, 29–39. doi:10.1016/j.envsoft.2019.01.012

Shortridge, J. E., & Zaitchik, B. F. (2018). Characterizing climate change risks by linking robust decision frameworks and uncertain probabilistic projections. Climatic Change, 151(3–4), 525–539. doi:10.1007/s10584-018-2324-x

Siegenfeld, A. F., & Bar-Yam, Y. (2020). What models can and cannot tell us about COVID-19 (pp. 1–3). New England Complex Systems Institute.

Smaldino, P. E. (2017). Models are stupid, and we need more of them. Computational Social Psychology, 311–331. doi:10.4324/9781315173726

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F., & Gilbert, N. (2020). Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2), 10. <http://jasss.soc.surrey.ac.uk/23/2/10.html> doi:10.18564/jasss.4298

ten Broeke, G., van Voorn, G. A. K., & Ligtenberg, A. (2016). Which Sensitivity Analysis Method Should I Use for My Agent-Based Model? Journal of Artificial Societies and Social Simulation, 19(1), 1–35. <http://jasss.soc.surrey.ac.uk/19/1/5.html> doi:10.18564/jasss.2857

Thompson, E. L., & Smith, L. A. (2019). Escape from model-land. Economics: The Open-Access, Open-Assessment E-Journal. doi:10.5018/economics-ejournal.ja.2019-40

Wolfram, S. (1983). Statistical mechanics of cellular automata. Reviews of Modern Physics, 55(3), 601–644. doi:10.1103/RevModPhys.55.601


Steinmann, P., Wang, J. R., van Voorn, G. A. K. and Kwakkel, J. H. (2020) Don’t try to predict COVID-19. If you must, use Deep Uncertainty methods. Review of Artificial Societies and Social Simulation, 17th April 2020. https://rofasss.org/2020/04/17/deep-uncertainty/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)