By Patrick Steinmannᵃ, Jason R. Wangᵇ, George A. K. van Voornᵃ, and Jan H. Kwakkelᵇ
ᵃ Biometris, Wageningen University & Research, Wageningen, the Netherlands; ᵇ Delft University of Technology, Faculty of Technology, Policy & Management, Delft, the Netherlands
(A contribution to the JASSS-Covid19-Thread)
Abstract
We respond to the recent JASSS article on COVID-19 and computational modelling. We disagree with the authors on one major point and note the lack of discussion of a second one. We believe that COVID-19 cannot be predicted numerically, and attempting to make decisions based on such predictions will cost human lives. Furthermore, we note that the original article only briefly comments on uncertainty. We urge those attempting to model COVID-19 for decision support to acknowledge the deep uncertainties surrounding the pandemic, and to employ Decision Making under Deep Uncertainty methods such as exploratory modelling, global sensitivity analysis, and robust decision-making in their analysis to account for these uncertainties.
Introduction
We read the recent article in the Journal of Artificial Societies and Social Simulation on predictive COVID-19 modelling (Squazzoni et al. 2020) with great interest. We agree with the authors on many general points, such as the need for rigorous and transparent modelling and documentation. However, we were dismayed that the authors focused solely on how to make predictive simulation models of COVID-19 without first discussing whether making such models is appropriate under the current circumstances. We believe this question is of greater importance, and that the answer will likely disappoint many in the community. We also note that the original piece does not engage substantively with methods of modelling and model analysis specifically designed for making time-critical decisions under uncertainty.
We respond to the call issued by the Review of Artificial Societies and Social Simulation for responses and opinions on predictive modelling of COVID-19. In doing so, we go beyond the recent RofASSS contribution by de Matos Fernandes & Keijzer (2020): rather than arguing that definite “predictions” should be replaced by probabilistic “expectations”, we contend that no probabilities whatsoever should be applied when modelling systems as uncertain as a global pandemic. We make this argument in the first section. In the second section, we discuss how those with a legitimate need for predictive epidemic modelling should approach their task, and which tools might be beneficial in the current context. In the last section, we summarize our opinions and issue our own challenges to the community.
To Model or Not to Model COVID-19, That Is the Question
The recent call attempts to lay out a path for using simulation modelling to forecast the COVID-19 epidemic. However, it offers no critical reflection on whether modelling is the appropriate tool for this task under the current circumstances. The authors argue that with sufficient methodological rigour, high-quality data, and interdisciplinary collaboration, complex outcomes (such as the COVID-19 epidemic) can be predicted well and quickly enough to provide actionable decision support.
Computational modelling is difficult in the best of times. Even models with seemingly simple structure can exhibit emergent behaviour rendering them effectively random (Wolfram 1983) or Turing complete (Cook 2004). Drawing conclusions of any kind from a simulation model, especially in the life-and-death context of pandemic decision making, must be done carefully and with respect for uncertainty. If, for whatever reason, this cannot be done, then modelling is not the right tool to answer the question at hand (Thompson & Smith 2019). The numerical nature of models is seductive, but they must be employed wisely to avoid “useless arithmetic” (Pilkey-Jarvis & Pilkey 2008) or statistical fallacies (Benessia et al. 2016).
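To make this concrete (our illustration, not from the original article): Wolfram’s Rule 30 is an elementary cellular automaton whose update rule fits in a single line of code, yet its centre column appears statistically random (Wolfram 1983). A minimal Python sketch:

```python
# Minimal sketch (illustrative): Wolfram's Rule 30, an elementary cellular
# automaton whose centre column appears statistically random despite the
# trivially simple update rule.

def rule30_step(cells):
    """Advance one generation of binary cells under Rule 30 (wrap-around edges)."""
    n = len(cells)
    # Rule 30: new state = left XOR (centre OR right)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 101
cells[50] = 1  # single seeded cell in the middle
centre_column = []
for _ in range(200):
    centre_column.append(cells[50])
    cells = rule30_step(cells)

# The centre column passes many statistical randomness tests.
print("".join(map(str, centre_column[:64])))
```

If a two-state, three-neighbour rule can defeat prediction, far richer epidemic models deserve at least as much humility.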
Trying to skilfully predict how the COVID-19 outbreak will evolve regionally or globally is a fool’s errand. Epistemic uncertainties about key parameters and processes describing the disease abound. Human behaviour is changing in response to the outbreak. Research and development are burgeoning across many fields, with presently unknowable results. Anyone claiming to know where the world will be in even a few weeks is at best delusional. Uncertainty is aggravated by the problem of equifinality (Oreskes et al. 1994): for any simulation model of COVID-19, many different parametrizations will fit the available data about equally well. Much of this is acknowledged by Squazzoni et al. (2020), yet inexplicably they still call for developing probabilistic forecasts of the outbreak using empirically validated models. We instead contend that “about these matters, there is no scientific basis on which to form any calculable probability” (Keynes 1937), and that validation should be based on usefulness in aiding time-urgent decision-making, rather than predictive accuracy (Pielke 2003). However, the capacity for such policy-oriented modelling must be built between pandemics, not during them (Rivers et al. 2019).
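Equifinality is easy to demonstrate with a toy SIR model (our construction, purely illustrative). In the early phase of an outbreak, growth depends approximately on the difference between the transmission and recovery rates, so very different parametrizations fit early case counts almost identically while diverging sharply later:

```python
import numpy as np

def sir_trajectory(beta, gamma, population=1e6, i0=1.0, days=250):
    """Toy discrete-time SIR model; returns the number of infected per day."""
    s, i = population - i0, i0
    infected = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        infected.append(i)
    return np.array(infected)

# Two very different parametrizations with the same early growth rate
# (beta - gamma = 0.15), hence a near-identical fit to early case data...
a = sir_trajectory(beta=0.25, gamma=0.10)  # R0 = 2.5
b = sir_trajectory(beta=0.45, gamma=0.30)  # R0 = 1.5
print(np.allclose(a[:30], b[:30], rtol=0.1))  # True: indistinguishable early on

# ...yet with sharply different epidemic peaks later.
print(a.max(), b.max())
```

Calibrating to early data cannot distinguish between these futures; any probabilistic forecast built on such a fit inherits this blindness.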
This call to abstain from predicting COVID-19 does not imply that the broader community should refrain from modelling completely. The illustrative power of simple models has been amply demonstrated in various media outlets. We do urge modellers not to frame their work as predictive (e.g. “How Pandemics Can End”, rather than “How COVID-19 Will End”), and to use watermarks where possible to indicate that the work shown is not predictive. There is also ample opportunity to use simulation modelling to solve ancillary problems. For example, established transport and logistics models could be adapted to ensure that the supply of critical healthcare equipment is timely and efficient. Similarly, agri-food models could explore how to secure food production and distribution under labour shortages. These can be vital, though less sensational, contributions of simulation modelling to the ongoing crisis.
Deep Uncertainty: How to Predict COVID-19, if(f) You Must
Deep Uncertainty (Lempert et al. 2003) is present when analysts cannot know, or stakeholders cannot agree on:
- The probability distributions relevant to unknown system variables,
- The relations and mechanisms present in the system, and/or
- The metrics by which future system states should be evaluated.
All three conditions are present in the case of the COVID-19 pandemic. To give a brief example of each: we know very little about asymptomatic infections, whether a vaccine will ever become available, and whether the socio-psychological and economic impacts of a “flattened curve” future are bearable (and by whom). The field of Decision Making under Deep Uncertainty has been working on problems of this nature for many years and has developed a variety of tools to analyse them (Marchau et al. 2019). These methods may be beneficial for designing COVID-19 policies with simulation models, if, as discussed previously, this is appropriate at all. In the following, we present three such methods and their potential value for COVID-19 decision support: exploratory modelling, global sensitivity analysis, and robust decision-making.
Exploratory modelling (Bankes 1993) is a conceptual approach to using simulation models for policy analysis. It emerged in response to the question of how models that cannot be empirically validated can still be used to inform planning and decision-making (Hodges 1991; Hodges & Dewar 1992). Instead of consolidating increasing amounts of knowledge into “the” model of a system, exploratory modelling advocates using wide uncertainty ranges for unknown parameters to generate a large ensemble of plausible futures, with no predictive or probabilistic power attached or implied a priori (Shortridge & Zaitchik 2018). This ensemble may represent a variety of assumptions, theories, and system structures. It could even be generated using a multitude of models (Page 2018; Smaldino 2017) and metrics (Manheim 2018). By reasoning across such an ensemble, insights agnostic to specific assumptions may be reached, sidestepping the a priori biases inherent in examining only a small set of scenarios, as the COVID-19 policy models observed by the authors do. Reasoning across such limited sets obscures policy-relevant futures that emerge as hybrids of pre-specified positive and negative narratives (Lamontagne et al. 2018). In the context of the COVID-19 pandemic, exploratory modelling could be used to contrast a variety of assumptions about disease transmission mechanisms (e.g., the role of schools, children, or asymptomatic cases in the speed of the outbreak), reinfection potential, or adherence to social distancing norms. Many ESSA members are already familiar with such methods; NetLogo’s BehaviorSpace tool is a prime example. The Exploratory Modelling & Analysis Workbench (Kwakkel 2017) provides similar, platform-agnostic functionality by means of a Python interface. We encourage all modellers to embrace such tools, and to be honest about which parameters and structural assumptions are uncertain, how uncertain they are, and how this affects the inferences that can and cannot be made based on the results of the model.
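As a minimal sketch of what this looks like in practice, one might wrap a toy SIR model in the EMA Workbench (Kwakkel 2017). The model, parameter ranges, and outcome below are our own, chosen purely for illustration; a real analysis would use domain-informed ranges and far richer models:

```python
from ema_workbench import Model, RealParameter, ScalarOutcome, perform_experiments

def sir_model(beta=0.3, gamma=0.1, population=1e6, days=200):
    """Toy discrete-time SIR model; returns the peak infected fraction."""
    s, i, peak = population - 1.0, 1.0, 1.0
    for _ in range(days):
        new_infections = beta * s * i / population
        s -= new_infections
        i += new_infections - gamma * i
        peak = max(peak, i)
    return {"peak_infected_fraction": peak / population}

model = Model("toySIR", function=sir_model)
model.uncertainties = [
    RealParameter("beta", 0.05, 1.0),   # transmission rate: deeply uncertain
    RealParameter("gamma", 0.05, 0.5),  # recovery rate: deeply uncertain
]
model.outcomes = [ScalarOutcome("peak_infected_fraction")]

# Sample 1,000 plausible futures across the full uncertainty space,
# with no probabilities attached to any of them.
experiments, outcomes = perform_experiments(model, scenarios=1000)
```

Rather than asking “what will the peak be?”, one then asks which combinations of assumptions lead to unacceptable peaks, a question supported by the scenario discovery tools in the same package.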
Global sensitivity analysis (Saltelli 2004) is a method for studying both the importance of uncertain parameters and their interactions, as expressed in the outputs of a simulation model. Many simulation modellers are already familiar with local sensitivity analysis, where parameters are varied one at a time to ascertain their individual effect on model output. This is insufficient for studying parameter interactions in non-linear systems (Saltelli et al. 2019; ten Broeke et al. 2016). In global sensitivity analysis, combinations of parameters are varied and studied simultaneously, illuminating their joint or interaction effects. This is critical for the rigorous study of complex system models, where parameters may have unexpected, non-linear interactions. In the context of the COVID-19 epidemic, we have seen at least two public health agencies perform local sensitivity analysis over small parameter ranges, which may blind decision makers to worst-case futures (Siegenfeld & Bar-Yam 2020). Global sensitivity analysis might reveal how different assumptions for e.g. duration of Intensive Care (IC) stays and age-related case severity may interact to create a “perfect storm” of IC need. A collection of global sensitivity analysis methods is implemented in the Python package SALib (Herman & Usher 2017); Jaxa-Rozen & Kwakkel (2018) illustrate how to use these with NetLogo.
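A minimal sketch of such an analysis with SALib, computing Sobol sensitivity indices: the two inputs, their ranges, and the stand-in model below are hypothetical, chosen only to mirror the IC example above:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical uncertain inputs: mean IC stay (days) and severe-case fraction.
problem = {
    "num_vars": 2,
    "names": ["ic_duration", "severe_fraction"],
    "bounds": [[5.0, 25.0], [0.01, 0.10]],
}

param_values = saltelli.sample(problem, 1024)  # Saltelli sampling scheme

def ic_demand(x):
    ic_duration, severe_fraction = x
    # Stand-in for a full epidemic model: relative peak IC demand, with a
    # deliberately non-linear interaction between the two inputs.
    return severe_fraction * ic_duration ** 1.5

Y = np.array([ic_demand(x) for x in param_values])

Si = sobol.analyze(problem, Y)
print(Si["S1"])  # first-order (individual) effects
print(Si["ST"])  # total effects, including all interactions
```

A large gap between a parameter’s first-order and total indices signals interaction effects that one-at-a-time analysis would miss entirely.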
Robust Decision Making (RDM) (Lempert et al. 2006) is a general analytic method for designing policies that are robust across uncertainties, i.e. that perform well regardless of which future actually materializes. Policies are designed by iteratively stress-testing them across ensembles of plausible futures representing different assumptions, theories, and input parameter combinations. This represents a departure from established, probabilistic risk management approaches, which are inappropriate for fat-tailed processes such as pandemics (Norman et al. 2020). More recently, RDM has been extended to Dynamic Adaptive Policy Pathways (DAPP) (Kwakkel et al. 2015) by incorporating adaptive policies conditioned on specific triggers or signposts identified in exploratory modelling runs. In the context of the COVID-19 epidemic, DAPP might be used to design policies that can adapt as the situation develops (Hamarat et al. 2012), possibly offering a transparent and verifiable approach to implementing the “hammer and dance” epidemic suppression strategy that has been widely discussed in popular media. Thinking in terms of pathways conditional on how the outbreak evolves is also a more realistic way of preparing for the dance: the timeline is set by the virus, not by human planners. All we can do is specify the conditions under which certain types of actions will be taken.
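One common way to operationalize robustness in such analyses is regret: how much worse a policy performs in a given future than the best available policy for that same future. The sketch below uses invented numbers and a minimax-regret rule; it illustrates the core idea only, not the full iterative RDM process of Lempert et al. (2006):

```python
import numpy as np

# Hypothetical ensemble: rows are plausible futures, columns are candidate
# policies; entries are an outcome to be minimized (e.g. excess deaths).
# All numbers are invented, purely for illustration.
outcomes = np.array([
    # policy A, policy B, policy C
    [100, 120,  90],   # future 1
    [300, 150, 400],   # future 2
    [ 50,  80,  60],   # future 3
])

# Regret: how much worse each policy does in a future than the best
# policy available in that same future.
regret = outcomes - outcomes.min(axis=1, keepdims=True)

# Minimax regret: choose the policy whose worst-case regret is smallest.
worst_case_regret = regret.max(axis=0)
print("policy index with minimax regret:", worst_case_regret.argmin())
```

Note that no future is weighted as more likely than any other; the choice is driven entirely by performance across the whole ensemble.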
Conclusions: Please Don’t. If You Must, Use Deep Uncertainty Methods.
We have raised two points of importance which are not discussed in a recent article on COVID-19 predictive modelling in JASSS. In particular, we have proposed that the question of whether such models should be created must precede any discussion of how to do so. We argued that complex outcomes such as epidemics cannot reliably be predicted using simulation models, as there are numerous uncertainties that significantly affect possible future system states. However, models may still be useful in times of crisis, if created and used appropriately. Furthermore, we have noted that there exists an entire field of study focusing on Decision Making under Deep Uncertainty, and that model analysis methods for situations like this already exist. We have briefly highlighted three such methods (exploratory modelling, global sensitivity analysis, and robust decision-making) and given examples of how they might be used in the present context.
Stemming from these two points, we issue our own challenges to the ESSA modelling community and the field of systems simulation in general:
- COVID-19 prediction distancing challenge: Do not attempt to predict the COVID-19 epidemic.
- COVID-19 deep uncertainty challenge: If you must predict the COVID-19 epidemic, embrace deep uncertainty principles, including transparent treatment of uncertainties, exploratory modelling, global sensitivity analysis, and robust decision-making.
References
Bankes, S. (1993). Exploratory Modeling for Policy Analysis. Operations Research, 41(3), 435–449. doi: 10.1287/opre.41.3.435
Benessia, A., Funtowicz, S., Giampietro, M., Guimarães Pereira, A., Ravetz, J. R., Saltelli, A., Strand, R., & van der Sluijs, J. P. (2016). Science on the Verge. Tempe, AZ and Washington, DC: Consortium for Science, Policy & Outcomes.
Cook, M. (2004). Universality in Elementary Cellular Automata. Complex Systems, 15(1), 1–40.
de Matos Fernandes, C. A., & Keijzer, M. A. (2020). No one can predict the future: More than a semantic dispute. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/
Hamarat, C., Kwakkel, J., & Pruyt, E. (2012). Adaptive Policymaking under Deep Uncertainty: Optimal Preparedness for the Next Pandemic. Proceedings of the 30th International Conference of the System Dynamics Society.
Herman, J., & Usher, W. (2017). SALib: An open-source Python library for Sensitivity Analysis. Journal of Open Source Software, 2(9), 97. doi:10.21105/joss.00097
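Hodges, J. S. (1991). Six (or so) things you can do with a bad model. Operations Research, 39(3), 355–365. doi:10.1287/opre.39.3.355
Hodges, J. S., & Dewar, J. A. (1992). Is It You or Your Model Talking? A Framework for Model Validation. Santa Monica, CA: RAND Corporation, R-4114.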
Jaxa-Rozen, M., & Kwakkel, J. H. (2018). PyNetLogo: Linking NetLogo with Python. Journal of Artificial Societies and Social Simulation, 21(2). <http://jasss.soc.surrey.ac.uk/21/2/4.html> doi:10.18564/jasss.3668
Keynes, J. M. (1937). The General Theory of Employment. The Quarterly Journal of Economics, 51(2), 209. doi:10.2307/1882087
Kwakkel, J. H. (2017). The Exploratory Modeling Workbench: An open source toolkit for exploratory modeling, scenario discovery, and (multi-objective) robust decision making. Environmental Modelling & Software, 96, 239–250. doi:10.1016/j.envsoft.2017.06.054
Kwakkel, J. H., Haasnoot, M., & Walker, W. E. (2015). Developing dynamic adaptive policy pathways: a computer-assisted approach for developing adaptive strategies for a deeply uncertain world. Climatic Change, 132(3), 373–386. doi:10.1007/s10584-014-1210-4
Lamontagne, J. R., Reed, P. M., Link, R., Calvin, K. V., Clarke, L. E., & Edmonds, J. A. (2018). Large Ensemble Analytic Framework for Consequence-Driven Discovery of Climate Change Scenarios. Earth’s Future, 6(3), 488–504. doi:10.1002/2017EF000701
Lempert, R. J., Groves, D. G., Popper, S. W., & Bankes, S. C. (2006). A general, analytic method for generating robust strategies and narrative scenarios. Management Science, 52(4), 514–528. doi:10.1287/mnsc.1050.0472
Lempert, R. J., Popper, S., & Bankes, S. (2003). Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. Santa Monica, CA: RAND Corporation. doi:10.7249/mr1626
Manheim, D. (2018). Building Less Flawed Metrics: Dodging Goodhart and Campbell’s Laws. MPRA Paper, Munich Personal RePEc Archive.
Marchau, V. A. W. J., Walker, W. E., Bloemen, P. J. T. M., & Popper, S. W. (Eds.). (2019). Decision Making under Deep Uncertainty. Springer International Publishing. doi:10.1007/978-3-030-05252-2
Norman, J., Bar-Yam, Y., & Taleb, N. N. (2020). Systemic Risk of Pandemic via Novel Pathogens – Coronavirus: A Note. New England Complex Systems Institute.
Oreskes, N., Shrader-Frechette, K., & Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263(5147), 641–646. doi:10.1126/science.263.5147.641
Page, S. E. (2018). The model thinker: what you need to know to make data work for you. Hachette UK.
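Pielke, R. A. Jr. (2003). The role of models in prediction for decision. In C. D. Canham, J. J. Cole, & W. K. Lauenroth (Eds.), Models in Ecosystem Science (pp. 111–135). Princeton, NJ: Princeton University Press.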
Pilkey-Jarvis, L., & Pilkey, O. H. (2008). Useless Arithmetic: Ten Points to Ponder When Using Mathematical Models in Environmental Decision Making. Public Administration Review, 68(3), 470–479. doi:10.1111/j.1540-6210.2008.00883_2.x
Rivers, C., Chretien, J. P., Riley, S., Pavlin, J. A., Woodward, A., Brett-Major, D., Maljkovic Berry, I., Morton, L., Jarman, R. G., Biggerstaff, M., Johansson, M. A., Reich, N. G., Meyer, D., Snyder, M. R., & Pollett, S. (2019). Using “outbreak science” to strengthen the use of models during epidemics. Nature Communications, 10(1), 9–11. doi:10.1038/s41467-019-11067-2
Saltelli, A. (2004). Global sensitivity analysis: an introduction. Proc. 4th International Conference on Sensitivity Analysis of Model Output (SAMO’04), 27–43.
Saltelli, A., Aleksankina, K., Becker, W., Fennell, P., Ferretti, F., Holst, N., Li, S., & Wu, Q. (2019). Why so many published sensitivity analyses are false: A systematic review of sensitivity analysis practices. Environmental Modelling & Software, 114, 29–39. doi:10.1016/j.envsoft.2019.01.012
Shortridge, J. E., & Zaitchik, B. F. (2018). Characterizing climate change risks by linking robust decision frameworks and uncertain probabilistic projections. Climatic Change, 151(3–4), 525–539. doi:10.1007/s10584-018-2324-x
Siegenfeld, A. F., & Bar-Yam, Y. (2020). What models can and cannot tell us about COVID-19 (pp. 1–3). New England Complex Systems Institute.
Smaldino, P. E. (2017). Models are stupid, and we need more of them. In Computational Social Psychology (pp. 311–331). Routledge. doi:10.4324/9781315173726
Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F., & Gilbert, N. (2020). Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2), 10. <http://jasss.soc.surrey.ac.uk/23/2/10.html> doi:10.18564/jasss.4298
ten Broeke, G., van Voorn, G. A. K., & Ligtenberg, A. (2016). Which Sensitivity Analysis Method Should I Use for My Agent-Based Model? Journal of Artificial Societies and Social Simulation, 19(1), 1–35. <http://jasss.soc.surrey.ac.uk/19/1/5.html> doi:10.18564/jasss.2857
Thompson, E. L., & Smith, L. A. (2019). Escape from model-land. Economics: The Open-Access, Open-Assessment E-Journal. doi:10.5018/economics-ejournal.ja.2019-40
Wolfram, S. (1983). Statistical mechanics of cellular automata. Reviews of Modern Physics, 55(3), 601–644. doi:10.1103/RevModPhys.55.601
Steinmann, P., Wang, J. R., van Voorn, G. A. K. and Kwakkel, J. H. (2020) Don’t try to predict COVID-19. If you must, use Deep Uncertainty methods. Review of Artificial Societies and Social Simulation, 17th April 2020. https://rofasss.org/2020/04/17/deep-uncertainty/
© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)