
The ASSOCC Simulation Model: A Response to the Community Call for the COVID-19 Pandemic

Amineh Ghorbani1 , Fabian Lorig2 , Bart de Bruin1 , Paul Davidsson2, Frank Dignum3, Virginia Dignum3, Mijke van der Hurk4, Maarten Jensen3, Christian Kammler3, Kurt Kreulen1, Luis Gustavo Ludescher3, Alexander Melchior4, René Mellema3, Cezara Păstrăv3, Loïs Vanhée5, and Harko Verhagen6

1TU Delft, Netherlands, 2Malmö University, Sweden, 3Umeå University, Sweden, 4Utrecht University, Netherlands, 5University of Caen, France, 6Stockholm University, Sweden

(A contribution to the: JASSS-Covid19-Thread)

Abstract: This article is a response to the call for action to the social simulation community to contribute to research on the COVID-19 pandemic crisis. We introduce the ASSOCC model (Agent-based Social Simulation for the COVID-19 Crisis), a model that has been specifically designed and implemented to address the societal challenges of this pandemic. We reflect on how the model addresses many of the challenges raised in the call for action. We conclude by arguing that the social simulation community should focus its efforts less on data- and prediction-based simulation and more on explaining mechanisms, exploring social dependencies, and assessing the impact of interventions.

Introduction

The COVID-19 crisis is a pandemic that is currently spreading all over the world. It has already taken a dramatic toll on humanity, affecting the daily life of billions of people and causing a global economic crisis with deficits and unemployment rates never experienced before. Decision makers as well as the general public are in dire need of support: to understand the mechanisms and connections at work in the ongoing crisis, and to make potentially life-threatening, far-reaching decisions whose consequences are unknown. Many countries and regions are struggling to deal with the impacts of the COVID-19 crisis on healthcare, the economy and the social well-being of communities, resulting in many different interventions. Examples are the complete lock-down of cities and countries, appeals to the individual responsibility of citizens, and suggestions to use digital technology for tracking and tracing the spread of the disease. All these strategies require considerable behavioural changes by all individuals.

In such an unprecedented situation, agent-based social simulation seems to be a very suitable technique for achieving a better understanding of the situation and for providing decision-making support. Most of the available simulations for pandemics focus either on specific aspects of the crisis, such as epidemiology (Chang et al., 2020), or on simplified, aggregated general mechanics (e.g., IndiaSIM). Many models repurpose existing models that were originally developed for other pandemics, such as influenza; these are mostly illustrative and intended to provide theory exposition (Squazzoni et al., 2020). Although current simulations are based on advanced statistical modelling that enables sound predictions of specific aspects of the disease, they use very limited models of human motives and cultural differences. Yet, understanding the possible consequences of drastic policy measures requires more than statistical quantities such as the R0 factor (the basic reproduction number, which denotes the expected number of cases directly generated by one case in a population) or economic variables. Measures impact people and thus need to consider individuals’ needs (e.g., affiliation, control, or self-fulfilment), social networks (norms, relationships), and how these attributes and conditions can quickly change during difficult situations (e.g., the need for job and food security, overloaded hospitals, loss of relatives).

In this context we have developed ASSOCC (Agent-based Social Simulation for the COVID-19 Crisis; see Figure 1) as a many-faceted observatory of scenarios. In ASSOCC, we connect the many aspects involved in a cohesive simulation, helping stakeholders raise their general awareness of all critical aspects of the problem and especially the dependencies between them. Of course, one can hardly cover a large variety of aspects and have very complete models of each of them, so we strike a balance between the broadness of the model and its accuracy on each aspect. This simulation delivers a complementary perspective to state-of-the-art disciplinary models: where most other simulations offer sharp yet isolated pieces of the image, our approach is valuable for combining the pieces of the puzzle, since a specific modelling focus can limit space for debate (ní Aodha & Edmonds, 2017).

The ASSOCC approach puts human behaviour at the centre, as a linking pin between many disciplines and aspects: psychology (needs, values, beliefs, plans), social sensitivity (norms, social networks, work relationships), infrastructures (transportation, supplies), epidemiology (spreading), economy (transactions, bankruptcy), cultural influences and public measures (closing activities, lock-down, social distancing, testing). The already complex model is extended on a daily basis. This is done in a largely modular fashion such that specific aspects can be switched on and off between runs. This leads to some limitations and also requires re-calibration of variables, but overall it seems worth the effort when looking at the first results of the scenarios we have simulated.

In this article, we aim to share our approach to simulating the COVID-19 pandemic, outline how the building and use of ASSOCC takes up a number of the challenges that were posed in (Squazzoni et al., 2020), and emphasize the potential of agent-based simulation as a method for mastering pandemics.

Figure 1: A screenshot of the Graphical User Interface of the ASSOCC simulation

Introducing the ASSOCC Model

The goal of the ASSOCC simulation model is to integrate the different parts of our daily life that are affected by the pandemic, in order to support decision makers in trading off different policies against each other. It facilitates the identification of potential interdependencies that might exist and need to be addressed. This is important because different countries, cultures and populations affect the suitability and consequences of measures, thus requiring local conditions to be taken into account. The model allows stakeholders to study individual and social reactions to different policies, to explore different scenarios, and to analyse their potential effects.

Figure 2: A screenshot of the base simulation model.

How it works

The ASSOCC simulation model is based on a synthetic population that consists of a set of artificial individuals (see Figure 1), each with given needs, demographic characteristics and attitude towards regulations and risks. By having all these agents decide over time what they should be doing, we can analyse their reactions to many different policies, such as total lock-down or voluntary isolation. Agents can move, perceive other agents, and decide on their actions based on their individual characteristics and their perception of the environment. The environment constrains the physical actions of the agents but can also impose norms and regulations on their behaviour. Through interaction, agents can take over characteristics from the other agents, such as becoming infected with COVID-19, or receiving information.

Agents

In the ASSOCC model, there are four types of agents: children, students, workers, and retirees. These types represent different age groups with different socio-demographic attributes, common activities, infection risks and behaviours. Each agent has a health status that represents being infected, symptomatic or asymptomatic contagiousness, and a critical state. Moreover, agents have needs and capabilities as well as personal characteristics such as risk aversion and the propensity to follow the law. The needs of an agent include health, wealth and belonging; they are modelled using the water tank model introduced by Dörner et al. (2006). Agent capabilities capture, for instance, their jobs or family situations. Agents need a minimum wealth value to survive, which they obtain by working, through subsidies, or by living together with a working agent. In shops and workplaces, agents trade wealth for products and services. Agents pay tax to a central government that then uses this money for subsidies and the maintenance of public services such as hospitals and schools.
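To make the needs mechanism concrete, the water tank metaphor can be sketched as follows. This is a hypothetical Python simplification, not the actual ASSOCC NetLogo code; the need names follow the text above, but the activities, decay rates and refill amounts are invented purely for illustration.

```python
# Minimal sketch of a "water tank" needs model (after Dörner et al., 2006):
# each need is a tank whose level drains over time and is refilled by a
# matching activity; the agent pursues the need whose tank is emptiest.

class Need:
    def __init__(self, name, level=1.0, decay=0.05):
        self.name = name
        self.level = level      # 1.0 = fully satisfied, 0.0 = fully depleted
        self.decay = decay      # amount drained each time step

    def tick(self):
        self.level = max(0.0, self.level - self.decay)

    def satisfy(self, amount):
        self.level = min(1.0, self.level + amount)


class Agent:
    # which (hypothetical) activity refills which need, and by how much
    ACTIVITIES = {"work": ("wealth", 0.3), "shop": ("health", 0.2),
                  "visit_family": ("belonging", 0.4)}

    def __init__(self):
        self.needs = {n: Need(n) for n in ("health", "wealth", "belonging")}

    def step(self):
        for need in self.needs.values():
            need.tick()
        # choose the activity that addresses the most depleted need
        activity, (target, gain) = min(
            Agent.ACTIVITIES.items(),
            key=lambda kv: self.needs[kv[1][0]].level)
        self.needs[target].satisfy(gain)
        return activity


agent = Agent()
for _ in range(10):
    agent.step()
```

The real model ties such needs to values and policies; the point of the sketch is only that behaviour emerges from competing, slowly draining needs rather than from a fixed schedule.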

Places

During the simulation, agents can move between different places according to their needs and obligations. Places represent homes, shops, hospitals, workplaces, schools, airports and stations. By assigning agents to homes, different households can be represented: single adults, families, retirement homes, and multi-generational households with children, adults and elderly people. The configuration of households is assumed to have an impact on the spreading of COVID-19, and great differences in household configurations exist between countries. Thus, the distribution of these households can be set in the simulation to analyse the situation in different cities or countries.

Policies

Policies describe interventions that can be taken by decision makers, such as social distancing, infection and immunity testing, or the closing of schools and workplaces. Policies have complex effects on the health, wealth and well-being of all agents, and can be extended in many different ways to provide an experimentation environment for decision makers. It is not only the decision of whether or not to implement certain policies, but also the point in time at which a policy is implemented, that influences its success.
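One way to picture policies as timed interventions is sketched below. This is an illustrative Python fragment, not ASSOCC's actual representation; the policy names, start days and contact multipliers are assumptions chosen only to show how activation timing changes the effective behaviour of the model.

```python
# Hypothetical sketch: each policy activates on a given day and rescales a
# model parameter (here, a contact rate) while it is active.

from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    start_day: int
    contact_multiplier: float   # e.g. 0.3 = contacts reduced to 30%

    def active(self, day):
        return day >= self.start_day


def effective_contacts(base_contacts, policies, day):
    """Apply all policies active on the given day to the base contact rate."""
    rate = base_contacts
    for p in policies:
        if p.active(day):
            rate *= p.contact_multiplier
    return rate


policies = [Policy("close schools", start_day=10, contact_multiplier=0.7),
            Policy("lock-down", start_day=20, contact_multiplier=0.3)]

print(effective_contacts(10.0, policies, day=5))    # 10.0: nothing active yet
print(effective_contacts(10.0, policies, day=15))   # roughly 7.0
print(effective_contacts(10.0, policies, day=25))   # roughly 2.1: both active
```

Shifting `start_day` by even a few days changes the whole trajectory of contacts, which is the timing effect the paragraph refers to.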

Conceptual Design

The ASSOCC model has been conceptualized based on many theories from various scientific disciplines, including psychology (basic motives and needs (McClelland, 1987; Jerome, 2013)), sociology (Schwartz value system (Schwartz, 2012)), culture (Hofstede’s cultural dimensions (Hofstede et al., 2010)), economy (circular flow of income (Murphy, 1993)), and epidemiology (the SEIR model (Cope et al., 2018)). For the disease model, we looked at the following sources: a case study of the COVID-19 time course (Xu et al., 2020), a cohort study showing the general time course of the disease with and without fatality (Zhou et al., 2020), and the incubation period determined from confirmed cases (Lauer et al., 2020). This theory-driven model determines the reaction of agents to policies and to their physical and social context.
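For readers unfamiliar with the SEIR family of compartmental models cited above, a minimal discrete-time version can be written as follows. The parameter values are illustrative placeholders, not those used in ASSOCC.

```python
# Minimal discrete-time SEIR sketch on population fractions (s+e+i+r == 1).

def seir_step(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1):
    """One day of SEIR dynamics.
    beta: transmission rate, sigma: 1/incubation period,
    gamma: 1/infectious period (all values illustrative)."""
    new_exposed = beta * s * i       # susceptible -> exposed
    new_infectious = sigma * e       # exposed -> infectious
    new_recovered = gamma * i        # infectious -> recovered
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)

state = (0.99, 0.0, 0.01, 0.0)   # 1% of the population initially infectious
for day in range(100):
    state = seir_step(*state)
```

An agent-based model like ASSOCC replaces these aggregate flow equations with individual contacts and decisions, but should reproduce comparable aggregate curves, which is one of the validation routes discussed later in the article.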

A short description of the conceptual architecture of ASSOCC as well as an overview of the agent architecture are available at the project website.

Tools

The simulation is built in NetLogo (see Figure 2), with a visual interface in Unity (see Figure 1). The NetLogo model can be used as a standalone simulation model. For the scenarios, we use the Unity interface for better visualisation of the simulation. The complete source code is available on GitHub under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Note that at the time of publication of this article, this is still a beta version of the model, which we are continuously developing. The complete description of the agent-based model using the ODD protocol can also be found on the ASSOCC website.

Addressing Key Challenges

Having explained the ASSOCC framework, in this section we explain how our modelling effort addresses the challenges raised by Squazzoni et al. (2020).

Like any model, the ASSOCC model cannot be a complete representation of reality and has its own limitations. Yet, we believe that the dimensions of social complexity that we have included provide a promising ground to draw useful insights. As rightfully highlighted by Squazzoni et al. (2020), the quality of a model depends on its purpose, its theoretical assumptions, the level of abstraction, and the quality of data.

The purpose of the ASSOCC model is to illustrate and investigate mechanisms. Through the simulation of scenarios, ASSOCC shows dependencies between human behaviour and the spread of the virus, the economic incentives and the psychological needs of people.

In the next sections we aim to explain how the ASSOCC model addresses the main issues raised in (Squazzoni et al., 2020).

Social Complexity

In order to incorporate the pre-existing behavioural attitudes, network effects, social norms and culture that influence people’s response to policy measures, we have built an extended, cross-disciplinary team of researchers. We have spent extra time and effort to construct a model in which social complexity is extensively taken into account. As an example, Maslow’s theory of individual needs takes pre-existing behavioural attitudes of individuals into account (Jerome, 2013). By connecting this theory to the Schwartz value dimensions (Schwartz, 2012), and these in turn to the cultural dimensions of Hofstede (Hofstede et al., 2010), we cover a whole spectrum from individual biological and social needs all the way to cultural diversity among nations.

Yet, the limitations of ASSOCC lie in the richness of each of the societal dimensions. We use some rather simple models for, for example, the economic, cultural, social network and transport aspects. We document the choices that were made, indicating which complexities we left out, why they were left out, and why we think this does not affect the validity of our results. For example, in the transport dimension we do not distinguish between cars and bikes. We do not need to, as the model does not contain large distances and both cars and bikes can be used as solo means of transport. We are aware that there are differences in economic terms, and also in the values involved in choosing between the two means of transport, but these aspects are not very relevant for the spread of the virus.

Transparency

Although there is pressure on the community to respond to this crisis and to provide expert judgement, we have sacrificed neither the complexity of our model nor its transparency in order to provide rapid answers. In fact, we have aimed to make our modelling process as transparent as possible. Starting at the level of the programming code, ASSOCC uses a GitHub repository to make the code publicly available. Beyond code documentation, our large-scale model uses the ODD protocol to make the model transparent at the conceptual level. Additionally, by building the Unity interface layer on top of the NetLogo model, we aim to connect policy scenarios to the parameter setup of the model, so that policy makers themselves can see how changes to scenarios lead to various outcomes.

By emphasizing that ASSOCC creates simulations of policy scenarios, we step away from recommending a particular “best” policy. Rather, we highlight the fundamental questions and priorities that have to be dealt with to choose among various policies. This is done by showing the consequences of implementing various scenarios and comparing them. Such a comparison can, for example, show how different groups of people are affected economically and health-wise by a policy. The most appropriate policy thus depends on which outcomes are deemed more desirable.

Data

Given the short time since the outbreak, accurate data on the COVID-19 outbreak suitable for complex agent-based models is not yet available. It is not clear how various cases are defined and how the data is collected. However, in our view, this should not limit our modelling abilities for this much-needed rapid response.

In our view, detailed data is not required to build a useful model. In fact, our model is a ‘SimCity’ for studying various policy scenarios rather than an actual data-driven representation of particular cities. While we have made sure that our model can show patterns similar to those observed in reality (for overall validity), fine-grained data is not included. The data used for the simulation comes from particular epidemiological models, from economic models, and from calibration of the model against known, normal situations.

As illustrated by the models described in (Squazzoni et al., 2020), even models that are calibrated with real-world data fail to capture important aspects such as network effects, as these changes are still based on stochastic randomized processes. Therefore, being aware that current data is neither fully available nor reliable, we have built our model on a strong theoretical basis in order to avoid oversimplification of factors that play important roles in this crisis.

Interface between modelling and policy

As highlighted by Squazzoni et al. (2020), “good pandemic models are not always good policy advice models”. We fully agree with this point, which is central to our modelling efforts. A user interface has been especially developed in Unity (see Figure 1) to support comprehension of the model by policy makers and to facilitate experimentation. In the Unity interface, one can explore the different parameters of a scenario, see the results of the simulations in graph form, and follow several aspects live through the elements available in the spatial representation of the town. This spatial interface is meant purely for better understanding of the model. We believe that having clarity regarding our modelling goal increases policy makers’ trust in our insights.

In addition, we have been in close contact with policy makers around the world to, on the one hand, understand their needs and immediate and long-term concerns, and on the other hand, communicate our model’s capabilities in the most concise manner to support their decisions. To date, we have engaged with policy makers in the Netherlands, Italy and Sweden.

Predictive Power

In our interactions with policy makers and other users, we make clear that the ASSOCC platform is not meant for giving detailed predictions, but to support the generation of insights. Such a broad model is best used to indicate dependencies and trends between different aspects of the society. Due to the computing power needed for each agent running the complex reasoning, it is difficult to scale this type of model to more than a few thousand agents, at least in NetLogo. 
The validation of the model can be done through the causal chains that can be followed throughout the model: certain outcomes can be linked, via agent states, to certain causes in the environment or in the actions of other agents. If these causal chains can be interpreted as plausible stories that are confirmed by the theories of the respective aspects, a certain type of high-level validation is achieved. This is thus not validation against data, but validation based on expert opinion.

A second type of validation that can be done on this type of ABM is a detailed comparison with established epidemiological models. For instance, we are comparing our simulation with the one used in (Ferretti et al., 2020) for a particular scenario in which the effect of using tracking and tracing apps is investigated. By translating the assumptions and parameters very carefully into ASSOCC parameters and comparing the resulting simulations, we can validate the underlying models against more traditional ones and also show possible deviations that might come up and that highlight advantages or gaps in the ASSOCC model. The results of this comparison will be published jointly by the two groups. Finally, we are calibrating ASSOCC parameters using statistical data, such as R0, the number of deaths, and demographic data, as a means to improve validity.

Conclusion

In this article, we presented the ASSOCC model as a comprehensive modelling endeavour that aims to contribute to the efforts for managing the COVID-19 crisis. By modelling multiple aspects of the society and interrelating them, we provide insights into the underlying mechanisms in the society that are influenced both by the outbreak as well as policy measures that aim to control it.

Being aware of the challenges, we have aimed to include as much social complexity as possible in the model to avoid biases and oversimplification. At the same time, by being in close contact with policy makers around the world, we have taken the actual needs and considerations into account, while providing a traceable, usable and comprehensible user interface that brings the modelling insights within the reach of policy makers. In our modelling efforts, we have paid extra attention to transparency, providing well-documented and open-source code that can be used by the rest of the simulation community.

All the assumptions, underlying theories and the source code of ASSOCC are available on the project website and on GitHub. We invite people to use it and give feedback; based on this feedback, we continuously improve the model and its parameters. As the pandemic and the state of the discussion develop, new scenarios will be added as well.

We hope that the ASSOCC model can contribute to handling this crisis in a way that shows the capabilities and usefulness of agent-based modelling.

References

Chang, S. L., Harding, N., Zachreson, C., Cliff, O. M. & Prokopenko, M. (2020). Modelling transmission and control of the COVID-19 pandemic in Australia. arXiv preprint arXiv:2003.10218 <https://arxiv.org/abs/2003.10218>

Cope, R. C., Ross, J. V., Chilver, M., Stocks, N. P., & Mitchell, L. (2018). Characterising seasonal influenza epidemiology using primary care surveillance data. PLoS Computational Biology, 14(8), e1006377. doi:10.1371/journal.pcbi.1006377

Dörner, D., Gerdes, J., Mayer, M., & Misra, S. (2006, April). A simulation of cognitive and emotional effects of overcrowding. In Proceedings of the Seventh International Conference on Cognitive Modeling (pp. 92-98). Triest, Italy: Edizioni Goliardiche.

Ferretti, L., Wymant, C., Kendall, M., Zhao, L., Nurtay, A., Abeler-Dörner, L., Parker, M., Bonsall, D. & Fraser, C. (2020). Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science, 31 Mar 2020: eabb6936. doi:10.1126/science.abb6936

Hofstede, G., Hofstede, G. J. & Minkov, M. (2010). Cultures and Organizations: Software of the Mind (revised and expanded 3rd edition). New York: McGraw-Hill.

Jerome, N. (2013). Application of the Maslow’s hierarchy of need theory; impacts and implications on organizational culture, human resource and employee’s performance. International Journal of Business and Management Invention, 2(3), 39–45.

Lauer, S. A., Grantz, K. H., Bi, Q., Jones, F. K., Zheng, Q., Meredith, H. R., Azman, A. S., Reich, N. G. & Lessler, J. (2020). The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Annals of Internal Medicine.

McClelland, D. (1987). Human Motivation. Cambridge University Press.

Murphy, A. E. (1993). John Law and Richard Cantillon on the circular flow of income. The European Journal of the History of Economic Thought, 1(1), 47–62.

ní Aodha, L. & Edmonds, B. (2017). Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – A Handbook, 2nd edition. Springer, 801–822.

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Xu, Z., Shi, L., Wang, Y., Zhang, J., Huang, L., Zhang, C., Liu, S., Zhao, P., Liu, H., Zhu, L. et al. (2020). Pathological findings of COVID-19 associated with acute respiratory distress syndrome. The Lancet Respiratory Medicine, 8(4), 420–422.

Zhou, F., Yu, T., Du, R., Fan, G., Liu, Y., Liu, Z., Xiang, J., Wang, Y., Song, B., Gu, X. et al. (2020). Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. The Lancet, 395(10229), 1054–1062. doi:10.1016/S0140-6736(20)30566-3


Ghorbani, A., Lorig, F., de Bruin, B., Davidsson, P., Dignum, F., Dignum, V., van der Hurk, M., Jensen, M., Kammler, C., Kreulen, K., Ludescher, L. G., Melchior, A., Mellema, R., Păstrăv, C., Vanhée, L. and Verhagen, H. (2020) The ASSOCC Simulation Model: A Response to the Community Call for the COVID-19 Pandemic. Review of Artificial Societies and Social Simulation, 25th April 2020. https://rofasss.org/2020/04/25/the-assocc-simulation-model/


 

Sound behavioural theories, not data, is what makes computational models useful

By Umberto Gostoli and Eric Silverman

(A contribution to the: JASSS-Covid19-Thread)

The paper “Computational Models that Matter During a Global Pandemic Outbreak: A Call to Action” by Squazzoni et al. (2020) is a valuable contribution to the ongoing self-reflection in the social simulation community regarding the role of ABM in the broader social-scientific enterprise. In this paper the authors try to assess the potential capacity of ABM to provide policy makers with a tool allowing them to predict the evolution of the pandemic and the effects of alternative policy responses. Their conclusions suggest a role for computational modelling during the pandemic, but also have implications regarding the position of ABM within the scientific and policy arenas, and its added value relative to other methodologies of scientific inquiry.

We agree with the authors that ABM has an important (and urgent) role to play in helping policy makers take more informed decisions, provided that the models are based on reliable and robust theories of human behaviour and social interaction. However, following in the footsteps of Joshua Epstein (2008), we claim that the importance and relevance of ABM go beyond the capacity of the models to make point predictions (i.e., of the form ‘There will be X infections/deaths in Y days’). We propose that the ability of ABM to develop, inform, and test relevant theory is of particular relevance during this global crisis.

This does not mean that additional data allowing for the models’ calibration and validation are not important, as they can certainly help reduce the uncertainty associated with the models’ outputs, but in our view they are not essential to what agent-based models have to offer. With that in mind, the lack of these data should not prevent the ABM community from participating in the mass mobilization of the scientific community, which is working at unprecedented speed to develop models to inform the vital policy decisions being taken during this pandemic.

As we argue in a recent position paper (Silverman et al. 2020), it is precisely when we have limited data, or no data at all, that simulations provide greater value than traditional methodologies like statistical inference; indeed, the less data we have, the more important is the role that agent-based (and other computational) simulations have to play. Computational models provide a way to say something about the evolution of complex systems by delimiting the set of possible outcomes through the constraints imposed by the theoretical framework encoded in the model. When we find ourselves in new situations such as the Covid-19 pandemic, where the data (i.e., our past experience) cannot give us any clue regarding the future evolution of the system, theories become the only tool we have to make educated guesses about what could (and could not) possibly happen. Models of complex systems typically have hundreds, if not thousands, of parameters, many of which have unknown values, and some of which have values we cannot know. If we waited for the data needed to make point predictions, we would never have a say in the policy arena; and if such data were available, other methodologies would probably serve the purpose better than computational models. Delimiting and quantifying the uncertainty associated with future scenarios in the face of limited data is where computational models can make a vital contribution, as they can give policy makers useful information for risk management.
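This exploratory stance can be illustrated with a toy example: instead of producing one point prediction, a simple and entirely hypothetical growth model is run across many plausible parameter combinations, and the range of outcomes is reported. The model structure, parameter ranges and intervention day below are all invented for illustration, not drawn from any real study.

```python
# Sketch of exploration under uncertainty: sample unknown parameters from
# plausible ranges, run the model many times, report an outcome interval.

import random

def toy_outbreak(growth_rate, intervention_effect, days=60):
    """Hypothetical stand-in for a full simulation model: exponential growth,
    damped by an intervention from day 21 onwards."""
    infected = 1.0
    for day in range(days):
        rate = growth_rate * (intervention_effect if day > 20 else 1.0)
        infected *= (1.0 + rate)
    return infected

random.seed(42)
outcomes = []
for _ in range(1000):
    # plausible ranges, not calibrated values: this is the uncertainty we face
    growth_rate = random.uniform(0.05, 0.25)
    intervention_effect = random.uniform(0.2, 0.8)
    outcomes.append(toy_outbreak(growth_rate, intervention_effect))

outcomes.sort()
print(f"5th-95th percentile of final cases: "
      f"{outcomes[50]:.0f} to {outcomes[950]:.0f}")
```

The deliverable is an interval conditioned on theory-given structure, not a single number: exactly the kind of risk-management information the paragraph argues models can still provide when data are scarce.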

By no means are we saying that the development and effective deployment of computational models is without challenges. But we claim that the main challenge lies in the identification and inclusion of sound behavioural theories, as the outputs we get will depend upon the reliability of our models’ theoretical input. Identifying such theories is a significant challenge, requiring theoretical contributions from a number of different fields, ranging from epidemiology and urban studies to sociology and economics.

Further, putting scholars from those disciplines into the same room will not be sufficient; we must create a multidisciplinary community of people sharing the same conceptual framework, an endeavour that takes a lot of dedication, perseverance and, crucially, time. The lack of such multidisciplinary research groups strongly limits the ABM community’s capacity to develop an effective computational model of the pandemic, and we hope that at least this crisis will prove that developing such a community is necessary to improve our capacity for a timely response to the next one.

In relation to this challenge, we are aiming to develop and support a global community of agent-based modellers focused on population health concerns, via the PHASE Network project funded by the UK Prevention Research Partnership. We urge readers to join the network via our website at https://phasenetwork.org/, and help us build a multidisciplinary health modelling community that can contribute to global efforts in improving health both during and after the Covid-19 pandemic.

We must also remember that the current crisis is very unlikely to be over quickly, and its longer-term effects on society will be substantial. At the time of writing more than 80 separate groups and institutions are embarking on efforts to build a vaccine for the coronavirus, but even with such concerted efforts there are no guarantees that a vaccine will be found. As Kissler et al. have shown, even if the virus appears to abate, further waves of infections could arise years afterwards (Kissler et al. 2020). Because of the resources and time it takes to develop theoretically sound computational models, in our view this methodology is better suited to address these longer-term questions of how society can reorganize itself to increase resilience against future pandemics – and here the ability of computational models to implement and test behavioural theories is of paramount importance. The questions that must be asked in the years to come are numerous and profound: How can the world of work change to be more robust to future crises and global shut-downs? Can welfare policies like universal basic income help prevent widespread economic devastation in future crises? How must our health and care systems evolve to better protect the most vulnerable in society?

We propose that computational models can make a particularly valuable contribution in this area. At the present time there is ample evidence of the disastrous effects of delayed or insufficient policy responses to a pandemic. Economic projections already suggest we are due to enter a post-pandemic collapse to rival the Great Depression. We can, and should, begin to develop theories and models about how we may adjust society for the post-Covid world. Models could be valuable tools for testing and developing ambitious socio-economic policy ideas in silico, in order to prepare for this new reality.

To conclude, in principle we share with the authors of the paper the belief that computational models have an important role to play in informing policy makers during crises such as pandemics. However, we wish to place the emphasis on the need for sound and robust theoretical frameworks ready to be included in these models, rather than on the existence and availability of data. In practice, the lack of such frameworks is the more critical obstacle to the computational modelling community making a useful contribution during this pandemic.

References

Epstein, J. M. (2008) Why model? Journal of Artificial Societies and Social Simulation, 11(4):12. <http://jasss.soc.surrey.ac.uk/11/4/12.html>.

Kissler, S. M., Tedijanto, C., Goldstein, E., Grad, Y. H. and Lipsitch, M. (2020) Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period. Science. doi:10.1126/science.abb5793. <https://science.sciencemag.org/content/early/2020/04/14/science.abb5793>

Silverman, E., Gostoli, U., Picascia, S., Almagor, J., McCann, M., Shaw, R., & Angione, C. (2020). Situating Agent-Based Modelling in Population Health Research. arXiv preprint arXiv:2002.02345. <https://arxiv.org/abs/2002.02345>

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Gostoli, U. and Silverman, E. (2020) Sound behavioural theories, not data, is what makes computational models useful. Review of Artificial Societies and Social Simulation, 22th April 2020. https://rofasss.org/2020/04/22/sound-behavioural-theories/


 

Don’t try to predict COVID-19. If you must, use Deep Uncertainty methods

By Patrick Steinmanna, Jason R. Wangb, George A. K. van Voorna, and Jan H. Kwakkelb

a Biometris, Wageningen University & Research, Wageningen, the Netherlands, b Delft University of Technology, Faculty of Technology, Policy & Management, Delft, the Netherlands

(A contribution to the: JASSS-Covid19-Thread)

Abstract

We respond to the recent JASSS article on COVID-19 and computational modelling. We disagree with the authors on one major point and note the lack of discussion of a second one. We believe that COVID-19 cannot be predicted numerically, and attempting to make decisions based on such predictions will cost human lives. Furthermore, we note that the original article only briefly comments on uncertainty. We urge those attempting to model COVID-19 for decision support to acknowledge the deep uncertainties surrounding the pandemic, and to employ Decision Making under Deep Uncertainty methods such as exploratory modelling, global sensitivity analysis, and robust decision-making in their analysis to account for these uncertainties.

Introduction

We read the recent article in the Journal of Artificial Societies and Social Simulation on predictive COVID-19 modelling (Squazzoni et al. 2020) with great interest. We agree with the authors on many general points, such as the need for rigorous and transparent modelling and documentation. However, we were dismayed that the authors focused solely on how to make predictive simulation models of COVID-19 without first discussing whether making such models is appropriate under the current circumstances. We believe this question is of greater importance, and that the answer will likely disappoint many in the community. We also note that the original piece does not engage substantively with methods of modelling and model analysis specifically designed for making time-critical decisions under uncertainty.

We respond to the call issued by the Review of Artificial Societies and Social Simulation for responses and opinions on predictive modelling of COVID-19. In doing so, we go above and beyond the recent RofASSS contribution by de Matos Fernandes & Keijzer (2020)—rather than saying that definite “predictions” should be replaced by probabilistic “expectations”, we contend that no probabilities whatsoever should be applied when modelling systems as uncertain as a global pandemic. This is presented in the first section. In the second section, we discuss how those with legitimate need for predictive epidemic modelling should approach their task, and which tools might be beneficial in the current context. In the last section, we summarize our opinions and issue our own challenges to the community.

To Model or Not to Model COVID-19, That Is the Question

The recent call attempts to lay out a path for using simulation modelling to forecast the COVID-19 epidemic. However, there is no critical reflection on the question of whether modelling is the appropriate tool for this, under the current circumstances. The authors argue that with sufficient methodological rigour, high-quality data and interdisciplinary collaboration, complex outcomes (such as the COVID-19 epidemic) can be predicted well and quickly enough to provide actionable decision support.

Computational modelling is difficult in the best of times. Even models with seemingly simple structure can have emergent behavior rendering them perfectly random (Wolfram 1983) or Turing complete (Cook 2004). Attempting to draw any kind of conclusions from a simulation model, especially in the life-and-death context of pandemic decision making, must be done carefully and with respect for uncertainty. If, for whatever reason, this cannot be done, then modelling is not the right tool to answer the question at hand (Thompson & Smith 2019). The numerical nature of models is seductive, but must be employed wisely to avoid “useless arithmetic” (Pilkey-Jarvis & Pilkey 2008) or statistical fallacies (Benessia et al. 2016).

Trying to skilfully predict how the COVID-19 outbreak will evolve regionally or globally is a fool’s errand. Epistemic uncertainties about key parameters and processes describing the disease abound. Human behaviour is changing in response to the outbreak. Research and development burgeon in many sciences with presently unknowable results. Anyone claiming to know where the world will be in even a few weeks is at best delusional. Uncertainty is aggravated by the problem of equifinality (Oreskes et al. 1994). For any simulation model of COVID-19, there will be a set of model parametrizations that has a similar quality of fit with the available data. Much of this is acknowledged by Squazzoni et al. (2020), yet inexplicably they still call for developing probabilistic forecasts of the outbreak using empirically validated models. We instead contend that “about these matters, there is no scientific basis on which to form any calculable probability” (Keynes 1937), and that validation should be based on usefulness in aiding time-urgent decision-making, rather than predictive accuracy (Pielke 2003). However, the capacity for such policy-oriented modelling must be built between pandemics, not during them (Rivers et al. 2019).
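The equifinality point can be made concrete with a toy sketch in Python. All numbers below are illustrative assumptions, not estimates: two hypothetical SIR parametrizations are chosen to share the same early growth rate, so they fit two weeks of "data" almost equally well while implying very different epidemics.

```python
# Toy equifinality sketch: two hypothetical SIR parametrizations (the
# beta/gamma values are assumptions picked for illustration) share the
# early growth rate beta - gamma, yet diverge sharply later on.

def sir(beta, gamma, days, s0=0.999, i0=0.001):
    """Discrete-time SIR model; returns the infected fraction per day."""
    s, i = s0, i0
    trajectory = []
    for _ in range(days):
        new_inf = beta * s * i
        s, i = s - new_inf, i + new_inf - gamma * i
        trajectory.append(i)
    return trajectory

a = sir(beta=0.40, gamma=0.30, days=120)  # R0 = 1.33
b = sir(beta=0.15, gamma=0.05, days=120)  # R0 = 3.0

# Nearly indistinguishable on two weeks of early "data"...
early_gap = max(abs(x - y) for x, y in zip(a[:14], b[:14]))
# ...but with very different epidemic peaks.
peak_gap = abs(max(a) - max(b))
print(early_gap, max(a), max(b))
```

Any fitting procedure fed only the first fortnight of observations could not tell these parametrizations apart, which is precisely why probabilistic forecasts from "the" fitted model are fragile.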

This call to abstain from predicting COVID-19 does not imply that the broader community should refrain from modelling completely. The illustrative power of simple models has been amply demonstrated in various media outlets. We do urge modellers not to frame their work as predictive (e.g. “How Pandemics Can End”, rather than “How COVID-19 Will End”), and to use watermarks where possible to indicate that the shown work is not predictive. There is also ample opportunity to use simulation modelling to solve ancillary problems. For example, established transport and logistics models could be adapted to ensure supply of critical healthcare equipment is timely and efficient. Similarly, agri-food models could explore how to secure food production and distribution under labour shortages. These can be vital, though less sensational, contributions of simulation modelling to the ongoing crisis.

Deep Uncertainty: How to Predict COVID-19, if(f) You Must

Deep Uncertainty (Lempert et al. 2003) is present when analysts cannot know, or stakeholders cannot agree on:

  1. The probability distributions relevant to unknown system variables,
  2. The relations and mechanisms present in the system, and/or
  3. The metrics by which future system states should be evaluated.

All three conditions are present in the case of the COVID-19 pandemic. To give a brief example of each, we know very little about asymptomatic infections, whether a vaccine will ever become available, and whether the socio-psychological and economic impacts of a “flattened curve” future are bearable (and by whom). The field of Decision Making under Deep Uncertainty has been working on problems of a similar nature for many years already, and developed a variety of tools to analyse such problems (Marchau et al. 2019). These methods may be beneficial for designing COVID-19 policies with simulation models—if, as discussed previously, this is appropriate. In the following, we present three such methods and their potential value for COVID-19 decision support: exploratory modelling, global sensitivity analysis, and robust decision-making.

Exploratory modelling (Bankes 1993) is a conceptual approach to using simulation models for policy analysis. It emerged as a response to the question of how models that cannot be empirically validated can still be used to inform planning and decision-making (Hodges 1991, Hodges & Dewar 1992). Instead of consolidating increasing amounts of knowledge into “the” model of a system, exploratory modelling advocates using wide uncertainty ranges for unknown parameters to generate a large ensemble of plausible futures, with no predictive or probabilistic power attached or implied a priori (Shortridge & Zaitchik 2018). This ensemble may represent a variety of assumptions, theories, and system structures. It could even be generated using a multitude of models (Page 2018; Smaldino 2017) and metrics (Manheim 2018). By reasoning across such an ensemble, insights agnostic to specific assumptions may be reached, sidestepping the a priori biases inherent in examining only a small set of scenarios, as the COVID-19 policy models observed by the authors do. Reasoning across such limited sets obscures policy-relevant futures which emerge as hybrids of pre-specified positive and negative narratives (Lamontagne et al. 2018). In the context of the COVID-19 pandemic, exploratory modelling could be used to contrast a variety of assumptions about disease transmission mechanisms (e.g., the role of schools, children, or asymptomatic cases in the speed of the outbreak), reinfection potential, or adherence to social distancing norms. Many ESSA members are already familiar with such methods—NetLogo’s BehaviorSpace function is a prime example. The Exploratory Modelling & Analysis Workbench (Kwakkel 2017) provides similar, platform-agnostic functionality by means of a Python interface.
We encourage all modellers to embrace such tools, and to be honest about which parameters and structural assumptions are uncertain, how uncertain they are, and how this affects the inferences that can and cannot be made based on the results from the model.
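As a deliberately minimal illustration of this workflow, the sketch below samples wide, uniform ranges for three uncertain parameters of a toy SIR model and reasons across the resulting ensemble. The model, the parameter names, and their ranges are all assumptions chosen for illustration, not calibrated values, and no probabilities are attached to any single future.

```python
import random

# Exploratory-modelling sketch: generate an ensemble of plausible futures
# by sampling wide uncertainty ranges, then reason across the whole
# ensemble rather than a single "best" run. All ranges are illustrative.

def sir_peak(beta, gamma, adherence, days=365):
    """Peak infected fraction of a discrete-time SIR model; 'adherence'
    scales down the contact rate to mimic social-distancing compliance."""
    s, i = 0.999, 0.001
    peak = i
    for _ in range(days):
        new_inf = beta * (1.0 - adherence) * s * i
        s, i = s - new_inf, i + new_inf - gamma * i
        peak = max(peak, i)
    return peak

random.seed(42)
ensemble = []
for _ in range(1000):
    params = {
        "beta": random.uniform(0.1, 0.6),       # transmission rate
        "gamma": random.uniform(0.05, 0.2),     # recovery rate
        "adherence": random.uniform(0.0, 0.8),  # distancing compliance
    }
    ensemble.append((params, sir_peak(**params)))

peaks = [p for _, p in ensemble]
print(f"peak infected fraction across {len(peaks)} futures: "
      f"{min(peaks):.3f} to {max(peaks):.3f}")
```

Tools like NetLogo's BehaviorSpace or the EMA Workbench automate exactly this sampling-and-ensemble loop for full-scale models.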

Global sensitivity analysis (Saltelli 2004) is a method of studying both the importance and interaction of uncertain parameters on the outputs of a simulation model. Many simulation modellers are already familiar with local sensitivity analysis, where parameters are varied one at a time to ascertain their individual effect on model output. This is insufficient for studying parameter interactions in non-linear systems (Saltelli et al. 2019; ten Broeke et al. 2016). In global sensitivity analysis, combinations of parameters are varied and studied simultaneously, illuminating their joint or interaction effects. This is critical for the rigorous study of complex system models, where parameters may have unexpected, non-linear interactions. In the context of the COVID-19 epidemic, we have seen at least two public health agencies perform local sensitivity analysis over small parameter ranges, which may blind decision makers to worst-case futures (Siegenfeld & Bar-Yam 2020). Global sensitivity analysis might reveal how different assumptions for e.g. duration of Intensive Care (IC) and age-related case severity may interact to create a “perfect storm” of IC need. A collection of global sensitivity analysis methods has been encoded for Python in the SALib package (Herman & Usher 2018), and how to use these with NetLogo is illustrated in Jaxa-Rozen & Kwakkel (2018).
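The difference between local and global analysis is easy to demonstrate on a deliberately trivial intensive-care flow model (all names and numbers below are illustrative assumptions): varying parameters one at a time understates the joint worst case, because the parameters interact multiplicatively.

```python
from itertools import product

# Toy demonstration of why one-at-a-time (OAT) variation understates joint
# effects in a multiplicative system. Model and numbers are illustrative
# assumptions, not epidemiological estimates.

def ic_occupancy(admissions_per_day, severity, stay_days):
    """Steady-state intensive-care occupancy for a trivial flow model."""
    return admissions_per_day * severity * stay_days

baseline = {"admissions_per_day": 1000, "severity": 0.05, "stay_days": 10}
worst = {"admissions_per_day": 2000, "severity": 0.10, "stay_days": 20}

base_occ = ic_occupancy(**baseline)

# OAT: push one parameter at a time to its worst value, sum the effects.
oat_extra = 0.0
for name in baseline:
    point = dict(baseline)
    point[name] = worst[name]
    oat_extra += ic_occupancy(**point) - base_occ

# Global: explore every corner of the parameter box jointly.
corners = [dict(zip(baseline, combo))
           for combo in product(*zip(baseline.values(), worst.values()))]
global_worst = max(ic_occupancy(**p) for p in corners)

print(base_occ, base_occ + oat_extra, global_worst)
```

In this toy case the OAT-based estimate of the worst case (baseline plus summed individual effects) is half of the true joint worst case; packages such as SALib implement principled versions of this joint exploration (e.g. Sobol indices) for real models.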

Robust Decision Making (RDM) (Lempert et al. 2006) is a general analytic method for designing policies which are robust across uncertainties—they perform well regardless of which future actually materializes. Policies are designed by iteratively stress-testing them across ensembles of plausible futures representing different assumptions, theories, and input parameter combinations. This represents a departure from established, probabilistic risk management approaches, which are inappropriate for fat-tailed processes such as pandemics (Norman et al. 2020). More recently, RDM has been extended to Dynamic Adaptive Policy Pathways (DAPP) (Kwakkel et al. 2015) by incorporating adaptive policies conditioned on specific triggers or signposts identified in exploratory modelling runs. In the context of the COVID-19 epidemic, DAPP might be used to design policies which can adapt as the situation develops (Hamarat et al. 2012)—possibly representing a transparent and verifiable approach to implementing the “hammer and dance” epidemic suppression method which has been widely discussed in popular media. Thinking in terms of pathways conditional on how the outbreak evolves is also a more realistic way of preparing for the dance: rather than humans setting the timeline, the virus determines it. All we can do is indicate the conditions under which certain types of actions will be taken.
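A minimal sketch of trigger-based adaptation in this spirit (the thresholds, contact rates, and SIR dynamics are illustrative assumptions, not a calibrated policy): restrictions switch on when prevalence crosses an upper signpost and off again below a lower one.

```python
# "Hammer and dance" sketch: an SIR model with a lockdown lever toggled
# by prevalence signposts. All rates and thresholds are illustrative
# assumptions only.

def run(beta_open=0.4, beta_locked=0.05, gamma=0.1,
        on_at=0.02, off_at=0.005, days=365):
    """Return the peak prevalence and the number of policy switches."""
    s, i = 0.999, 0.001
    locked = False
    switches, peak = 0, i
    for _ in range(days):
        # Signpost logic: adapt the policy as the outbreak evolves.
        if not locked and i > on_at:
            locked, switches = True, switches + 1
        elif locked and i < off_at:
            locked, switches = False, switches + 1
        beta = beta_locked if locked else beta_open
        new_inf = beta * s * i
        s, i = s - new_inf, i + new_inf - gamma * i
        peak = max(peak, i)
    return peak, switches

peak, switches = run()
print(f"peak prevalence {peak:.3f} after {switches} policy switches")
```

The point is not the specific numbers but the structure: the policy is specified as conditions on observable signposts, so the timeline is dictated by the outbreak rather than fixed in advance.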

Conclusions: Please Don’t. If You Must, Use Deep Uncertainty Methods.

We have raised two points of importance which are not discussed in a recent article on COVID-19 predictive modelling in JASSS. In particular, we have proposed that the question of whether such models should be created must precede any discussion of how to do so. We have argued that complex outcomes such as epidemics cannot reliably be predicted using simulation models, as there are numerous uncertainties that significantly affect possible future system states. However, models may still be useful in times of crisis, if created and used appropriately. Furthermore, we have noted that there exists an entire field of study focusing on Decision Making under Deep Uncertainty, and that model analysis methods for situations like this already exist. We have briefly highlighted three methods—exploratory modelling, global sensitivity analysis, and robust decision-making—and given examples of how they might be used in the present context.

Stemming from these two points, we issue our own challenges to the ESSA modelling community and the field of systems simulation in general:

  • COVID-19 prediction distancing challenge: Do not attempt to predict the COVID-19 epidemic.
  • COVID-19 deep uncertainty challenge: If you must predict the COVID-19 epidemic, embrace deep uncertainty principles, including transparent treatment of uncertainties, exploratory modeling, global sensitivity analysis, and robust decision-making.

References

Bankes, S. (1993). Exploratory Modeling for Policy Analysis. Operations Research, 41(3), 435–449. doi: 10.1287/opre.41.3.435

Benessia, A., Funtowicz, S., Giampietro, M., Guimarães Pereira, A., Ravetz, J. R., Saltelli, A., Strand, R., & van der Sluijs, J. P. (2016). Science on the Verge. Consortium for Science, Policy & Outcomes Tempe, AZ and Washington, DC.

Cook, M. (2004). Universality in Elementary Cellular Automata. Complex Systems, 15(1), 1–40.

de Matos Fernandes, C. A., & Keijzer, M. A. (2020). No one can predict the future: More than a semantic dispute. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Hamarat, C., Kwakkel, J., & Pruyt, E. (2012). Adaptive Policymaking under Deep Uncertainty: Optimal Preparedness for the Next Pandemic. Proceedings of the 30th International Conference of the System Dynamics Society.

Herman, J., & Usher, W. (2018). SALib: An open-source Python library for Sensitivity Analysis. Journal of Open Source Software.

Jaxa-Rozen, M., & Kwakkel, J. H. (2018). PyNetLogo: Linking NetLogo with Python. Journal of Artificial Societies and Social Simulation, 21(2). <http://jasss.soc.surrey.ac.uk/21/2/4.html> doi:10.18564/jasss.3668

Keynes, J. M. (1937). The General Theory of Employment. The Quarterly Journal of Economics, 51(2), 209. doi:10.2307/1882087

Kwakkel, J. H. (2017). The Exploratory Modeling Workbench: An open source toolkit for exploratory modeling, scenario discovery, and (multi-objective) robust decision making. Environmental Modelling & Software, 96, 239–250. doi:10.1016/j.envsoft.2017.06.054

Kwakkel, J. H., Haasnoot, M., & Walker, W. E. (2015). Developing dynamic adaptive policy pathways: a computer-assisted approach for developing adaptive strategies for a deeply uncertain world. Climatic Change, 132(3), 373–386. doi:10.1007/s10584-014-1210-4

Lamontagne, J. R., Reed, P. M., Link, R., Calvin, K. V., Clarke, L. E., & Edmonds, J. A. (2018). Large Ensemble Analytic Framework for Consequence-Driven Discovery of Climate Change Scenarios. Earth’s Future, 6(3), 488–504. doi:10.1002/2017EF000701

Lempert, R. J., Groves, D. G., Popper, S. W., & Bankes, S. C. (2006). A general, analytic method for generating robust strategies and narrative scenarios. Management Science, 52(4), 514–528. doi:10.1287/mnsc.1050.0472

Lempert, R. J., Popper, S., & Bankes, S. (2003). Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. doi:10.7249/mr1626

Manheim, D. (2018). Building Less Flawed Metrics: Dodging Goodhart and Campbell’s Laws. In MPRA.

Marchau, V. A. W. J., Walker, W. E., Bloemen, P. J. T. M., & Popper, S. W. (Eds.). (2019). Decision Making under Deep Uncertainty. Springer International Publishing. doi:10.1007/978-3-030-05252-2

Norman, J., Bar-Yam, Y., & Taleb, N. N. (2020). Systemic Risk of Pandemic via Novel Pathogens – Coronavirus: A Note. New England Complex Systems Institute. http://arxiv.org/abs/1410.5787

Oreskes, N., Shrader-Frechette, K., & Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263(5147), 641–646. doi:10.1126/science.263.5147.641

Page, S. E. (2018). The model thinker: what you need to know to make data work for you. Hachette UK.

Pilkey-Jarvis, L., & Pilkey, O. H. (2008). Useless Arithmetic: Ten Points to Ponder When Using Mathematical Models in Environmental Decision Making. Public Administration Review, 68(3), 470–479. doi:10.1111/j.1540-6210.2008.00883_2.x

Rivers, C., Chretien, J. P., Riley, S., Pavlin, J. A., Woodward, A., Brett-Major, D., Maljkovic Berry, I., Morton, L., Jarman, R. G., Biggerstaff, M., Johansson, M. A., Reich, N. G., Meyer, D., Snyder, M. R., & Pollett, S. (2019). Using “outbreak science” to strengthen the use of models during epidemics. Nature Communications, 10(1), 9–11. doi:10.1038/s41467-019-11067-2

Saltelli, A. (2004). Global sensitivity analysis: an introduction. Proc. 4th International Conference on Sensitivity Analysis of Model Output (SAMO’04), 27–43.

Saltelli, A., Aleksankina, K., Becker, W., Fennell, P., Ferretti, F., Holst, N., Li, S., & Wu, Q. (2019). Why so many published sensitivity analyses are false: A systematic review of sensitivity analysis practices. Environmental Modelling and Software, 114(March 2018), 29–39. doi:10.1016/j.envsoft.2019.01.012

Shortridge, J. E., & Zaitchik, B. F. (2018). Characterizing climate change risks by linking robust decision frameworks and uncertain probabilistic projections. Climatic Change, 151(3–4), 525–539. doi:10.1007/s10584-018-2324-x

Siegenfeld, A. F., & Bar-Yam, Y. (2020). What models can and cannot tell us about COVID-19 (pp. 1–3). New England Complex Systems Institute.

Smaldino, P. E. (2017). Models are stupid, and we need more of them. Computational Social Psychology, 311–331. doi:10.4324/9781315173726

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F., & Gilbert, N. (2020). Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2), 10. <http://jasss.soc.surrey.ac.uk/23/2/10.html> doi:10.18564/jasss.4298

ten Broeke, G., van Voorn, G. A. K., & Ligtenberg, A. (2016). Which Sensitivity Analysis Method Should I Use for My Agent-Based Model? Journal of Artificial Societies and Social Simulation, 19(1), 1–35. <http://jasss.soc.surrey.ac.uk/19/1/5.html> doi:10.18564/jasss.2857

Thompson, E. L., & Smith, L. A. (2019). Escape from model-land. Economics: The Open-Access, Open-Assessment E-Journal. doi:10.5018/economics-ejournal.ja.2019-40

Wolfram, S. (1983). Statistical mechanics of cellular automata. Reviews of Modern Physics, 55(3), 601–644. doi:10.1103/RevModPhys.55.601


Steinmann, P., Wang, J. R., van Voorn, G. A. K. and Kwakkel, J. H. (2020) Don’t try to predict COVID-19. If you must, use Deep Uncertainty methods. Review of Artificial Societies and Social Simulation, 17th April 2020. https://rofasss.org/2020/04/17/deep-uncertainty/


 

No one can predict the future: More than a semantic dispute

By Carlos A. de Matos Fernandes and Marijn A. Keijzer

(A contribution to the: JASSS-Covid19-Thread)

Models are pivotal to battle the current COVID-19 crisis. In their call to action, Squazzoni et al. (2020) convincingly put forward how social simulation researchers could and should respond in the short run by posing three challenges for the community among which is a COVID-19 prediction challenge. Although Squazzoni et al. (2020) stress the importance of transparent communication of model assumptions and conditions, we question the liberal use of the word ‘prediction’ for the outcomes of the broad arsenal of models used to mitigate the COVID-19 crisis by ours and other modelling communities. Four key arguments are provided that advocate using expectations derived from scenarios when explaining our models to a wider, possibly non-academic audience.

The current COVID-19 crisis necessitates that we implement life-changing policies that, to a large extent, build upon predictions from complex, quickly adapted, and sometimes poorly understood models. The examples of models spurring the news to produce catchphrase headlines are abundant (Imperial College, AceMod-Australian Census-based Epidemic Model, IndiaSIM, IHME, etc.). And even though most of these models will be useful to assess the comparative effectiveness of interventions in our aim to ‘flatten the curve’, the predictions that disseminate to news media are those of total cases or timing of the inflection point.

The current focus on predictive epidemiological and behavioural models brings back an important discussion about prediction in social systems. “[T]here is a lot of pressure for social scientists to predict” (Edmonds, Polhill & Hales, 2019), and we might add ‘especially nowadays’. But forecasting in human systems is often tricky (Hofman, Sharma & Watts, 2017). Approaches that take well-understood theories and simple mechanisms often fail to grasp the complexity of social systems, yet models that rely on complex supervised machine learning-like approaches may offer misleading levels of confidence (as was elegantly shown recently by Salganik et al., 2020). COVID-19 models appear to be no exception as a recent review concluded that “[…] their performance estimates are likely to be optimistic and misleading” (Wynants et al., 2020, p. 9). Squazzoni et al. describe these pitfalls too (2020: paragraph 3.3). In the crisis at hand, it may even be counter-productive to rely on complex models that combine well-understood mechanisms with many uncertain parameters (Elsenbroich & Badham, 2020).

Considering the level of confidence we can have about predictive models in general, we believe there is an issue with the way predictions are communicated by the community. Scientists often use ‘prediction’ to refer to some outcome of a (statistical) model where they ‘predict’ aspects of the data that are already known, but momentarily set aside. Edmonds et al. (2019: paragraph 2.4) state that “[b]y ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model”. Predictive accuracy, in this case, can then be computed later on, by comparing the prediction to the truth. Scientists know that when talking about predictions of their models, they don’t claim to generalize to situations outside of the narrow scope of their study sample or their artificial society. We are not predicting the future, and wouldn’t claim we could. However, this is wildly different from how ‘prediction’ is commonly understood: As an estimation of some unknown thing in the future. Now that our models quickly disseminate to the general public, we need to be careful with the way we talk about their outcomes.

Predictions in the COVID-19 crisis will remain imperfect. In the current virus outbreak, society cannot afford to rely on the falsification of models for interventions against empirical data. As the virus continues to spread rapidly, our only option is to rely on models as a basis for policy, ceteris paribus. And it is precisely here – at ‘ceteris paribus’ – that the terminology of ‘prediction’ misses the mark. All things will not be equal tomorrow, the next day, or the day after that (Van Bavel et al. [2020] note numerous topics that affect managing the COVID-19 pandemic and its impact on society). Policies around the globe are constantly being tweaked, and people’s behaviour changes dramatically as a consequence (Google, 2020). Relying too much on predictions may give a false sense of security.

We propose to avoid using the word ‘prediction’ too much and talk about scenarios or expectations instead where possible. We identify four reasons why you should avoid talking about prediction right now:

  1. Not everyone is acquainted with noise and emergence. Computational Social Scientists generally understand the effects of noise in social systems (Squazzoni et al., 2020: paragraph 1.8). Small behavioural irregularities can be reinforced in complex systems fostering unexpected outcomes. Yet, scientists not acquainted with studying complex social systems may be unfamiliar with the principles we have internalized by now, and put over-confidence in the median outputs of volatile models that enter the scientific sphere as predictions.
  2. Predictions do not convey uncertainty. The general public is usually unacquainted with esoteric academic concepts. For instance, a flatten-the-curve scenario generally builds upon a mean or median approximation, oftentimes neglecting the variability across different scenarios. Still, there are numerous other outcomes, building on different parameter values. We fear that by stating a prediction to a non-expert public, they will expect such a thing to occur for certain. If we forecast a sunny day but there’s rain, people are upset. Talking about scenarios, expectations, and mechanisms may prevent confusion and opposition when the forecast does not occur.
  3. It’s a model, not reality. The previous argument feeds into the third notion: be honest about what you model. A model is a model. Even the most richly calibrated model is a model. That is not to say that such models are not informative (we reiterate: models are not a shot in the dark). Still, richly calibrated models based on poor data may be more misleading than less calibrated models (Elsenbroich & Badham, 2020). Empirically calibrated models may provide more confidence at face value, but it lies in the nature of complex systems that small measurement errors in the input data may lead to big deviations in outputs. Models present a scenario for our theoretical reasoning with a given set of parameter values. We can update a model with empirical data to increase reliability, but it remains a scenario about a future state given an (often expansive) set of assumptions (recently beautifully visualized by Koerth, Bronner, & Mithani, 2020).
  4. Stop predicting, start communicating. Communication is pivotal during a crisis. An abundance of research shows that communicating clearly and honestly is a best practice during a crisis, generally comforting the general public (e.g., Seeger, 2006). Squazzoni et al. (2020) call for transparent communication by stating that “[t]he limitations of models and the policy recommendations derived from them have to be openly communicated and transparently addressed”. We are united in our aim to avert the COVID-19 crisis but should be careful that overconfidence doesn’t erode society’s trust in science. Stating unequivocally that we hope – based on expectations – to avert a crisis by implementing some policy does not preclude altering our course of action when an updated scenario about the future requires us to do so. Modellers should communicate clearly to policy-makers and the general public that this is the role of computational models that are being updated daily.
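The third point, that small measurement errors in inputs can produce big deviations in outputs, can be quantified with a one-line projection (the growth rates and horizon are illustrative assumptions): under exponential growth, a small error in an estimated rate compounds into a large error in the projection.

```python
import math

# Back-of-the-envelope sketch: input error amplification under exponential
# growth. All numbers are illustrative assumptions, not COVID-19 estimates.

def projected_cases(cases_now, growth_rate, days):
    """Project case counts under constant exponential growth."""
    return cases_now * math.exp(growth_rate * days)

# Two estimates of the daily growth rate differing by only 0.02...
low = projected_cases(1000, 0.20, days=30)
high = projected_cases(1000, 0.22, days=30)

# ...diverge by a large factor after a month.
ratio = high / low
print(f"30-day projections differ by a factor of {ratio:.2f}")
```

A difference of 0.02 in the estimated daily growth rate, well within plausible measurement error, changes the 30-day projection by a factor of exp(0.6), roughly 1.8.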

Squazzoni et al. (2020) set out the agenda for our community in the coming months and it is an important one. Let’s hope that the expectations from the scenarios in our well-informed models will not fall on deaf ears.

References

Edmonds, B., Polhill, G., & Hales, D. (2019). Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H., & Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html> doi: 10.18564/jasss.3993

Elsenbroich, C., & Badham, J. (2020). Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/

Google. (2020). COVID-19 Mobility Reports. https://www.google.com/covid19/mobility/ (Accessed 15th April 2020)

Hofman, J. M., Sharma, A., & Watts, D. J. (2017). Prediction and Explanation in Social Systems. Science, 355, 486–488. doi: 10.1126/science.aal3856

Koerth, M., Bronner, L., & Mithani, J. (2020, March 31). Why It’s So Freaking Hard To Make A Good COVID-19 Model. FiveThirtyEight. https://fivethirtyeight.com/

Salganik, M. J. et al. (2020). Measuring the Predictability of Life Outcomes with a Scientific Mass Collaboration. PNAS. 201915006. doi: 10.1073/pnas.1915006117

Seeger, M. W. (2006). Best Practices in Crisis Communication: An Expert Panel Process, Journal of Applied Communication Research, 34(3), 232-244.  doi: 10.1080/00909880600769944

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Van Bavel, J. J. et al. (2020). Using social and behavioural science to support COVID-19 pandemic response. PsyArXiv. https://doi.org/10.31234/osf.io/y38m9

Wynants. L., et al. (2020). Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal. BMJ, 369, m1328. doi: 10.1136/bmj.m1328


de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/


 

Good Modelling Takes a Lot of Time and Many Eyes

By Bruce Edmonds

(A contribution to the: JASSS-Covid19-Thread)

It is natural to want to help in a crisis (Squazzoni et al. 2020), but it is important to do something that is actually useful rather than just ‘adding to the noise’. Usefully modelling disease spread within complex societies is not easy to do – which essentially means there are two options:

  1. Model it in a fairly abstract manner to explore ideas and mechanisms, but without the empirical grounding and validation needed to reliably support policy making.
  2. Model it in an empirically testable manner with a view to answering some specific questions and possibly inform policy in a useful manner.

Which route one takes depends on the modelling purpose one has in mind (Edmonds et al. 2019). Both routes are legitimate as long as one is clear as to what each can and cannot do. The dangers come when there is confusion – taking the first route whilst giving policy actors the impression one is doing the second risks deceiving people and giving false confidence (Edmonds & Adoha 2019, Elsenbroich & Badham 2020). Here I am only discussing the second, empirically ambitious route.

Some of the questions that policy-makers might want to ask include: what might happen if we close the urban parks, allow children of a specific range of ages to go to school one day a week, cancel 75% of the intercity trains, allow people to go to beauty spots, visit sick relatives in hospital, or test people as they recover and give them a certificate that allows them to go back to work?

To understand what might happen in these scenarios would require an agent-based model in which agents make the kind of mundane, every-day decisions of where to go and whom to meet, such that the patterns and outputs of the model are consistent with known data (possibly following the ‘Pattern-Oriented Modelling’ of Grimm & Railsback 2012). Such a model is currently lacking, and building it would require:

  1. A long-term, iterative development (Bithell 2018), with many cycles of model development followed by empirical comparison and data collection. This means that this kind of model might be more useful for the next epidemic rather than the current one.
  2. A collective approach rather than one based on individual modellers. In any very complex model it is impossible to understand it all – there are bound to be small errors, and programmed mechanisms will subtly interact with each other. As Siebers and Venkatesan (2020) pointed out, this means collaborating with people from other disciplines (which always takes time to make work), but it also means an open approach where lots of modellers routinely inspect, replicate, pull apart, critique and play with other modellers’ work – without anyone getting upset or feeling criticised. This involves an institutional and normative embedding of good modelling practice (as discussed in Squazzoni et al. 2020), but it also requires a change in attitude – from individual to collective achievement.

Both are necessary if we are to build the modelling infrastructure that may allow us to model policy options for the next epidemic. We will need to start now if we are to be ready because it will not be easy.

References

Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb

Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidson, P. & Verhargen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H., & Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html> doi: 10.18564/jasss.3993

Elsenbroich, C. and Badham, J. (2020) Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/

Grimm, V., & Railsback, S. F. (2012). Pattern-oriented modelling: a ‘multi-scope’for predictive systems ecology. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1586), 298-310. doi:10.1098/rstb.2011.0180

Siebers, P-O. and Venkatesan, S. (2020) Get out of your silos and work together. Review of Artificial Societies and Social Simulation, 8th April 2020. https://rofasss.org/2020/0408/get-out-of-your-silos

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/


 

Focussing on our Strengths

By Corinna Elsenbroich and Jennifer Badham

(A contribution to the: JASSS-Covid19-Thread)

Understanding a situation is the precondition for making good decisions. In the extraordinary current situation of a global pandemic, the lack of consensus about a good decision path is evident in the variety of government measures in different countries, in the analyses of the decisions made, and in the debates about how the future will look. What is also clear is how little we understand the situation and the impact of policy choices. We are faced with the complexity of social systems, our ability to only ever partially understand them, and the political pressure to make decisions on partial information.

The JASSS call to arms (Squazzoni et al. 2020) points out the necessity for the ABM modelling community to produce relevant models for this kind of emergency situation. Whilst we wholly agree with the sentiment that ABM modelling can contribute to the debate and to decision making, we would also like to point out some of the potential pitfalls inherent in a false application and interpretation of ABM.

  1. Small change, big difference: Given the complexity of the real world, there will be aspects that are better understood and aspects that are less well understood. Trying to produce a very large model encompassing several different aspects might be counter-productive, as we will mix well-understood aspects with highly hypothetical knowledge. It might be better to have different, smaller models – of the epidemic, the economy, human behaviour etc. – each of which can be taken with its own level of validation and veracity and be developed by modellers with subject matter understanding, theoretical knowledge and familiarity with relevant data.
  2. Carving up complex systems: If separate models are developed, then we are necessarily making decisions about the boundaries of our models. For a complex system any carving up can separate interactions that are important, for example the way in which fear of the epidemic can drive protective behaviour thereby reducing contacts and limiting the spread. While it is tempting to think that a “bigger model”, a more encompassing one, is necessarily a better carving up of the system because it eliminates these boundaries, in fact it simply moves them inside the model and hides them.
  3. Policy decisions are moral decisions: The decision of what is the right course to take is a decision for the policy maker with all the competing interests and interdependencies of different aspects of the situation in mind. Scientists are there to provide the best information for the understanding of a situation, and models can be used to understand consequences of different courses of action and the uncertainties associated with that action. Models can be used to inform policy decisions but they must not obfuscate that it is a moral choice that has to be made.
  4. Delaying a decision is making a decision to do nothing: Like any other policy option, a decision to maintain the status quo while gathering further information has its own consequences. The Call to Action (paragraph 1.6) refers to public pressure for immediate responses, but this underplays the pressure arising from other sources. It is important to recognise the logical fallacy: “We must do something. This is something. Therefore we must do this.” However, if there are options available that are clearly better than doing nothing, then it is equally illogical to do nothing.

Instead of trying to compete with existing epidemiological models, ABM could focus on the things it is really good at:

  1. Understanding uncertainty in complex systems resulting from heterogeneity, social influence, and feedback. For the case at hand, this means not building another model of the epidemic spread – there are excellent SEIR models doing that – but exploring how heterogeneity in the infected population (such as in contact patterns or personal behaviour in response to infection) can influence the spread. Other possibilities include social effects such as how fear might spread and influence behaviours like panic buying or compliance with the lockdown.
  2. Build models for the pieces that are missing and couple these to the pieces that exist, thereby enriching the debate about the consequences of policy options by making those connections clear.
  3. Visualise and communicate difficult-to-understand and counterintuitive developments. Right now people are struggling to understand exponential growth, the dynamics of social distancing, the consequences of an overwhelmed health system, and the delays between actions and their consequences. It is well established that such fundamentals of systems thinking are difficult (Booth Sweeney and Sterman 2000). Models such as the simple ones in the Washington Post (Stevens 2020) or less abstract ones like the routine day activity model of Vermeulen et al (2020) do a wonderful job at this, allowing people to understand how their individual behaviour will contribute to the spread or containment of a pandemic.
  4. Highlight missing data and inform future collection. This unfolding pandemic is constantly being assessed using highly compromised data; e.g. reported infection rates in countries are largely determined by how much testing is done. The most comparable measure might be death rates, but even there we have reporting delays and omissions. Trying to build models is one way to identify what needs to be known to properly evaluate the consequences of policy options.
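The first of these strengths – how heterogeneity alone can change the course of a spread – can be shown with a toy simulation. This is a purely illustrative sketch, not any of the models discussed here; all agent counts, contact rates, and probabilities are arbitrary assumptions:

```python
import random

def simulate(contact_rates, p_transmit=0.05, days=60, seed=1):
    """Toy agent-based SIR run: returns how many agents were ever infected."""
    rng = random.Random(seed)
    n = len(contact_rates)
    state = ["S"] * n                 # (S)usceptible, (I)nfected, (R)ecovered
    state[0] = "I"                    # one initial case
    days_infected = [0] * n
    for _ in range(days):
        for i in [k for k in range(n) if state[k] == "I"]:
            # each infected agent meets a number of others set by its own rate
            for _ in range(contact_rates[i]):
                j = rng.randrange(n)
                if state[j] == "S" and rng.random() < p_transmit:
                    state[j] = "I"
            days_infected[i] += 1
            if days_infected[i] >= 7:  # recover after a week
                state[i] = "R"
    return sum(1 for s in state if s != "S")

n = 500
homogeneous = [10] * n                    # everyone: 10 contacts per day
heterogeneous = [46] * 100 + [1] * 400    # same mean, but a few social 'hubs'

print("homogeneous:", simulate(homogeneous))
print("heterogeneous:", simulate(heterogeneous))
```

Both populations have the same average number of contacts; only the distribution differs, which is exactly the kind of question a compartmental SEIR model with a single contact-rate parameter cannot ask.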

The problem we are faced with in this pandemic is one of complexity, not one of ABM, and we must ensure we are honouring the complexity rather than just paying lip service to it. We agree that model transparency, open data collection and interdisciplinary research are important, and want to ensure that all scientific knowledge is used in the best possible way to ensure a positive outcome of this global crisis.

But it is also important to consider the comparative advantage of agent-based modellers. Yes, we have considerable commitment to, and expertise in, open code and data. But so do many other disciplines. Health information is routinely collected in national surveys and administrative datasets, and governments have a great deal of established expertise in health data management. Of course, our individual skills in coding models, data visualisation, and relevant theoretical knowledge can be offered to individual projects as required. But we believe our institutional response should focus on activities where other disciplines are less well equipped, applying systems thinking to understand and communicate the consequences of uncertainty and complexity.

References

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2), 10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Booth Sweeney, L., & Sterman, J. D. (2000). Bathtub dynamics: initial results of a systems thinking inventory. System Dynamics Review: The Journal of the System Dynamics Society, 16(4), 249-286.

Stevens, H. (2020) Why outbreaks like coronavirus spread exponentially, and how to “flatten the curve”. Washington Post, 14th of March 2020. (accessed 11th April 2020) https://www.washingtonpost.com/graphics/2020/world/corona-simulator/

Vermeulen, B.,  Pyka, A. and Müller, M. (2020) An agent-based policy laboratory for COVID-19 containment strategies, (accessed 11th April 2020) https://inno.uni-hohenheim.de/corona-modell


Elsenbroich, C. and Badham, J. (2020) Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/


 

Get out of your silos and work together!

By Peer-Olaf Siebers and Sudhir Venkatesan

(A contribution to the: JASSS-Covid19-Thread)

The JASSS position paper ‘Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action’ (Squazzoni et al 2020) calls on the scientific community to improve the transparency, access, and rigour of their models. A topic that we think is equally important, and should be part of this list, is the quest for more “interdisciplinarity”: scientific communities working together to tackle the difficult job of understanding the complex situation we are currently in and being able to give advice.

The modelling/simulation community in the UK (and more broadly) tends to work in silos. The two big communities that we have been exposed to are the epidemiological modelling community and the social simulation community. They do not usually collaborate with each other despite working on very similar problems and using similar methods (e.g. agent-based modelling). They publish in different journals, use different software, attend different conferences, and sometimes even use different terminology to refer to the same concepts.

The UK pandemic response strategy (Gov.UK 2020) is guided by advice from the Scientific Advisory Group for Emergencies (SAGE), which in turn comprises three independent expert groups: SPI-M (epidemic modellers), SPI-B (experts in behaviour change from psychology, anthropology and history), and NERVTAG (clinicians, epidemiologists, virologists and other experts). Of these, modelling from member SPI-M institutions has played an important role in informing the UK government’s response to the ongoing pandemic (e.g. Ferguson et al 2020). Current members of SPI-M belong to what could be considered the ‘epidemic modelling community’. Their models tend to be heavily data-dependent, which is justifiable given that most of their modelling focuses on viral transmission parameters. However, this emphasis on empirical data can sometimes lead them either not to model behaviour change at all or to model it in a highly stylised fashion, although more examples of epidemic-behaviour models have appeared in the recent epidemiological literature (e.g. Verelst et al 2016; Durham et al 2012; van Boven et al 2008; Venkatesan et al 2019). Yet, among the modelling work informing the current response to the pandemic, computational models of behaviour change are prominently missing. This, from what we have seen, is where the ‘social simulation’ community can contribute its expertise and modelling methodologies in a very valuable way. A good resource for epidemiologists wanting to find out more about the wide spectrum of modelling ideas is the Social Simulation Conference proceedings and programmes (e.g. SSC2019 2019). Unfortunately, however, the public health community, including policymakers, is either unaware of these modelling ideas or unsure of how they are relevant.

As pointed out in a recent article, one important concern with how behaviour change has possibly been modelled in the SPI-M COVID-19 models is the assumption that changes in contact rates resulting from a lockdown in the UK and the USA will mimic those obtained from surveys performed in China, which is unlikely to be valid given the large political and cultural differences between these societies (Adam 2020). For the immediate COVID-19 response models, requiring cross-disciplinary validation of all models that feed into policy may be a valuable step towards more credible models.

Effective collaboration between academic communities relies on there being a degree of familiarity with, and trust in, each other’s work, and much of this will need to be built up during inter-pandemic periods (i.e. “peace time”). In the long term, publishing and presenting in each other’s journals and conferences (i.e. giving other academic communities the opportunity to peer-review a piece of modelling work) could help foster a more collaborative environment, ensuring that we are in a much better position to leverage all available expertise during a future emergency. We should aim to take the best from across modelling communities and work together to come up with hybrid modelling solutions that provide insight by delivering statistics as well as narratives (Moss 2020). Working in silos is both unhelpful and inefficient.

References

Adam D (2020) Special report: The simulations driving the world’s response to COVID-19. How epidemiologists rushed to model the coronavirus pandemic. Nature – News Feature. https://www.nature.com/articles/d41586-020-01003-6 [last accessed 07/04/2020]

Durham DP, Casman EA (2012) Incorporating individual health-protective decisions into disease transmission models: A mathematical framework. Journal of The Royal Society Interface. 9(68), 562-570

Ferguson N, Laydon D, Nedjati Gilani G, Imai N, Ainslie K, Baguelin M, Bhatia S, Boonyasiri A, Cucunuba Perez Zu, Cuomo-Dannenburg G, Dighe A (2020) Report 9: Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf [last accessed 07/04/2020]

Gov.UK (2020) Scientific Advisory Group for Emergencies (SAGE): Coronavirus response. https://www.gov.uk/government/groups/scientific-advisory-group-for-emergencies-sage-coronavirus-covid-19-response [last accessed 07/04/2020]

Moss S (2020) “SIMSOC Discussion: How can disease models be made useful? “, Posted by Scott Moss, 22 March 2020 10:26 [last accessed 07/04/2020]

Squazzoni F, Polhill JG, Edmonds B, Ahrweiler P, Antosz P, Scholz G, Borit M, Verhagen H, Giardini F, Gilbert N (2020) Computational models that matter during a global pandemic outbreak: A call to action, Journal of Artificial Societies and Social Simulation, 23 (2) 10

SSC2019 (2019) Social simulation conference programme 2019. https://ssc2019.uni-mainz.de/files/2019/09/ssc19_final.pdf [last accessed 07/04/2020]

van Boven M, Klinkenberg D, Pen I, Weissing FJ, Heesterbeek H (2008) Self-interest versus group-interest in antiviral control. PLoS One. 3(2)

Venkatesan S, Nguyen-Van-Tam JS, Siebers PO (2019) A novel framework for evaluating the impact of individual decision-making on public health outcomes and its potential application to study antiviral treatment collection during an influenza pandemic. PLoS One. 14(10)

Verelst F, Willem L, Beutels P (2016) Behavioural change models for infectious disease transmission: A systematic review (2010–2015). Journal of The Royal Society Interface. 13(125)


Siebers, P-O. and Venkatesan, S. (2020) Get out of your silos and work together. Review of Artificial Societies and Social Simulation, 8th April 2020. https://rofasss.org/2020/0408/get-out-of-your-silos


 

Go for DATA

By Gérard Weisbuch

(A contribution to the: JASSS-Covid19-Thread)

I totally share the view on the importance of DATA. What we need are data-driven models, and the reference to weather forecasting and data assimilation is very appropriate. This probably implies the establishment of a center for epidemics forecasting, similar to the ECMWF in Reading in the UK or Météo-France in Toulouse. The persistence of such an institution in “normal times” would be hard to justify, but its operation could be organised like a military reserve.

Let me stress three points.

  1. Models are needed not only by national policy makers but by a wide range of decision makers such as hospitals and even households. These meso-scale units face hard supply problems: hospitals have to manage supplies of materials, consumables and personnel to meet hard-to-predict demand from patients. The same holds true for households: e.g. how to schedule errands in view of the dynamics of the epidemic? The same supply chain issues also exist for firms, including the chain of deliveries of consumables to hospitals. Hence the importance of available data provided by a center for epidemics forecasting.
  2. The JASSS call (Squazzoni et al. 2020) stresses the importance of DATA, but does not provide many clues about how to get it. One can hope that some institutions would provide it, but my limited experience is that you have to dig for it. Do It Yourself is a leitmotiv of the Big Data industry. I am thinking of processing patient records to build models of the disease, or private diaries and tweets to model individual behaviour. One then needs collaboration from the NLP (Natural Language Processing) community.
  3. The public and even the media have a very low understanding of dynamical systems and of exponential growth. We have known since D. Kahneman’s book “Thinking, Fast and Slow” (2011) that we have a hard time reasoning about probabilities, for instance, but this also applies to dynamics and exponential growth. We face situations that mandate different actions at different stages of the epidemic, such as doing errands or, for town dwellers, moving to the countryside. The issue is even more difficult for firms, who have to manage employment. Simple models and experimental cognitive science results should be brought to journalists and the general public concerning these issues, in the style of Kahneman if possible.
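The intuition gap around exponential growth in the third point can be made concrete in a few lines. This is purely illustrative; the starting count, linear increment, and doubling time are arbitrary assumptions:

```python
# Linear intuition ('ten more cases a day') vs. doubling every three days,
# both starting from 10 cases.
linear, exponential = 10, 10
for day in range(1, 31):
    linear += 10                 # constant daily increase
    if day % 3 == 0:
        exponential *= 2         # doubling every three days
    if day % 10 == 0:
        print(f"day {day:2d}: linear {linear:5d}, exponential {exponential:7d}")
```

After a month the two intuitions differ by a factor of over thirty, which is exactly the kind of divergence that is hard to convey without a model.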

References

Kahneman, D. (2011). Thinking, Fast and Slow. Allen Lane.

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Weisbuch, G. (2020) Go for DATA. Review of Artificial Societies and Social Simulation, 7th April 2020. https://rofasss.org/2020/04/07/go-for-data/


 

Predicting Social Systems – a Challenge

By Bruce Edmonds, Gary Polhill and David Hales

(Part of the Prediction-Thread)

There is a lot of pressure on social scientists to predict. Not only is an ability to predict implicit in all requests to assess or optimise policy options before they are tried, but prediction is also the “gold standard” of science. However, there is a debate among modellers of complex social systems about whether this is possible to any meaningful extent. In this context, the aim of this paper is to issue the following challenge:

Are there any documented examples of models that predict useful aspects of complex social systems?

To do this the paper will:

  1. define prediction in a way that corresponds to what a wider audience might expect of it
  2. give some illustrative examples of prediction and non-prediction
  3. request examples where the successful prediction of social systems is claimed
  4. and outline the aspects on which these examples will be analysed

About Prediction

We start by defining prediction, taken from (Edmonds et al. 2019). This is a pragmatic definition designed to encapsulate common sense usage – what a wider public (e.g. policy makers or grant givers) might reasonably expect from “a prediction”.

By ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model.

Let us clarify the language in this.

  • It has to be reliable. That is, one can rely upon a prediction at the time it is made – a model that predicts erratically and is only occasionally right is no help, since one does not know whether to believe any particular prediction. This usually means that (a) it has made successful predictions in several independent cases and (b) the conditions under which it works are (roughly) known.
  • What is predicted has to be unknown at the time of prediction. That is, the prediction has to be made before it is verified. Predicting known data (as when a model is checked on out-of-sample data) is not sufficient [1]. Nor is the practice of looking for phenomena that are consistent with the results of a model after they have been generated (since this ignores all the phenomena that are not consistent with the model).
  • What is being predicted is well defined. That is, how to use the model to make a prediction about observed data is clear. An abstract model that is merely suggestive – one that appears to predict phenomena but only in a vague and undefined manner, where one has to invent the mapping between model and data to make it work – may be useful as a way of thinking about phenomena, but this is different from empirical prediction.
  • Which aspects of the data are being predicted is left open. As Watts (2014) points out, prediction is not restricted to point numerical predictions of some measurable value but could concern a wider pattern. Examples include: a probabilistic prediction, a range of values, a negative prediction (this will not happen), or a second-order characteristic (such as the shape of a distribution or a correlation between variables). What is important is that (a) this is a useful characteristic to predict and (b) it can be checked by an independent actor. Thus, for example, when predicting a value, the accuracy required of that prediction depends on its use.
  • The prediction has to use the model in an essential manner. Claiming to predict something obviously inevitable which does not use the model is insufficient – the model has to distinguish which of the possible outcomes is being predicted at the time.

Thus, prediction is different from other kinds of scientific/empirical uses, such as description and explanation (Edmonds et al. 2019). Some modellers use “prediction” to mean any output from a model, regardless of its relationship to any observation of what is being modelled [2]. Others use “prediction” for any empirical fitting of data, regardless of whether that data was known beforehand. However, here we wish to be clearer and avoid any “post-truth” softening of the meaning of the word, for two reasons: (a) distinguishing different kinds of model use is crucial in matters of model checking or validation, and (b) these “softer” kinds of empirical purpose will simply confuse the wider public when we talk to them about “prediction”. One suspects that modellers have accepted these other meanings because doing so allows them to claim that they can predict (Edmonds 2017).

Some Examples

Nate Silver and his team aim to predict future social phenomena, such as the results of elections and the outcomes of sports competitions. He correctly predicted the outcomes of all 50 states in the 2012 US presidential election before it happened. This is a data-hungry approach, which involves the long-term development of simulations that carefully establish what can be inferred from the available data, with repeated trial and error. The forecasts are probabilistic and repeated many times. As well as making predictions, his unit tries to establish the level of uncertainty in those predictions – being honest about the probability of those predictions coming about given the likely levels of error and bias in the data. These models are not agent-based in nature but tend to be mostly statistical, so it is debatable whether the phenomena are being treated as complex systems – the approach certainly does not use any theory from complexity science. His book (Silver 2012) describes his approach. Post hoc analysis of predictions – explaining why they worked or not – is kept distinct from the predictive models themselves; this analysis may inform changes to the predictive model but is not then incorporated into the model. The analysis is thus kept independent of the predictive model so it can be an effective check.

Many models in economics and ecology claim to “predict” when, on inspection, this only means there is a fit to some empirical data. For example, Meese and Rogoff (1983) looked at 40 econometric models that claimed to predict some time-series. However, 37 of the 40 models failed completely when tested on newly available data from the same time series. Clearly, although presented as predictive models, they could not predict unknown data. Although we do not know for sure, presumably these models had been (explicitly or implicitly) fitted to the out-of-sample data, because the out-of-sample data was already known to the modeller. That is, if a model failed to fit the out-of-sample data when tested, it was adjusted until it did, or alternatively, only those models that fitted the out-of-sample data were published.
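The mechanism behind such failures can be sketched in a few lines: a model flexible enough to fit the known data perfectly can still be useless on genuinely new data. This is a toy illustration of overfitting, not a reconstruction of any of the econometric models in question; the line-plus-noise process and polynomial degree are arbitrary assumptions:

```python
import random

def lagrange(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

rng = random.Random(42)
# the underlying process is just a straight line plus noise
xs = list(range(10))
ys = [0.5 * x + rng.gauss(0.0, 0.3) for x in xs]

# a degree-9 polynomial 'fits' the ten known points perfectly...
in_sample_err = max(abs(lagrange(xs, ys, x) - y) for x, y in zip(xs, ys))

# ...but fails badly on a genuinely new point from the same process
x_new = 12.0
out_of_sample_err = abs(lagrange(xs, ys, x_new) - 0.5 * x_new)

print("in-sample error:", in_sample_err)
print("out-of-sample error:", out_of_sample_err)
```

The in-sample error is essentially zero, while the extrapolation error is enormous, which is why fit to already-known data says so little about predictive ability.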

The Challenge

The challenge is envisioned as happening like this.

  1. We publicise this paper, requesting that people send us examples of prediction or near-prediction of complex social systems, with pointers to the appropriate documentation.
  2. We collect these and analyse them according to the characteristics and questions described below.
  3. We will post some interim results in January 2020 [3], in order to prompt more examples and to stimulate discussion. The final deadline for examples is the end of March 2020.
  4. We will publish the list of all the examples sent to us on the web, and present our summary and conclusions at Social Simulation 2020 in Milan and have a discussion there about the nature and prospects for the prediction of complex social systems. Anyone who contributed an example will be invited to be a co-author if they wish to be so-named.

How suggestions will be judged

For each suggestion, a number of answers will be sought – namely to the following questions:

  • What are the papers or documents that describe the model?
  • Is there an explicit claim that the model can predict (as opposed to might in the future)?
  • What kind of characteristics are being predicted (number, probabilistic, range…)?
  • Is there evidence of a prediction being made before the prediction was verified?
  • Is there evidence of the model being used for a series of independent predictions?
  • Were any of the predictions verified by a team that is independent of the one that made the prediction?
  • Is there evidence of the same team or similar models making failed predictions?
  • To what extent did the model need extensive calibration/adjustment before the prediction?
  • What role does theory play (if any) in the model?
  • Are the conditions under which predictive ability is claimed described?

Of course, negative answers to any of the above questions about a particular model do not mean that the model cannot predict. What we are assessing is the evidence that a model can predict something meaningful about complex social systems. Silver (2012) describes the method by which his team attempts prediction, but this method might be different from that described in most theory-based academic papers.

Possible Outcomes

This exercise might shed some light on some interesting questions, such as:

  • What kind of prediction of complex social systems has been attempted?
  • Are there any examples where the reliable prediction of complex social systems has been achieved?
  • Are there certain kinds of social phenomena that seem to be more amenable to prediction than others?
  • Does aiming to predict with a model entail any difference in method compared with projects that have other aims?
  • Are there any commonalities among the projects that achieve reliable prediction?
  • Is there anything we could (collectively) do that would encourage or document good prediction?

It might well be that whether prediction is achievable depends on exactly what is meant by the word.

Acknowledgements

This paper resulted from a “lively discussion” after Gary’s (Polhill et al. 2019) talk about prediction at the Social Simulation conference in Mainz. Many thanks to all those who joined in. Of course, prior to this we have had many discussions about prediction, including Gary’s previous attempt at a prediction competition (Polhill 2018) and Scott Moss’s arguments about prediction in economics (which have many parallels with the debate here).

Notes

[1] This is sufficient for other empirical purposes, such as explanation (Edmonds et al. 2019)

[2] Confusingly, they sometimes use the word “forecasting” for what we mean by “predict” here.

[3] Assuming we have any submitted examples to talk about

References

Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidsson, P. & Verhagen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1 (see also http://cfpm.org/discussionpapers/236)

Edmonds, B. (2017) The post-truth drift in social simulation. Social Simulation Conference (SSC2017), Dublin, Ireland. (http://cfpm.org/discussionpapers/195)

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. & Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Grimm V, Revilla E, Berger U, Jeltsch F, Mooij WM, Railsback SF, Thulke H-H, Weiner J, Wiegand T, DeAngelis DL (2005) Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science 310: 987-991.

Meese, R.A. & Rogoff, K. (1983) Empirical Exchange Rate models of the Seventies – do they fit out of sample? Journal of International Economics, 14:3-24.

Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/

Polhill, G., Hare, H., Anzola, D., Bauermann, T., French, T., Post, H. and Salt, D. (2019) Using ABMs for prediction: Two thought experiments and a workshop. Social Simulation 2019, Mainz.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Thorngate, W. & Edmonds, B. (2013) Measuring simulation-observation fit: An introduction to ordinal pattern analysis. Journal of Artificial Societies and Social Simulation, 16(2):14. http://jasss.soc.surrey.ac.uk/16/2/4.html

Watts, D. J. (2014). Common Sense and Sociological Explanations. American Journal of Sociology, 120(2), 313-351.


Edmonds, B., Polhill, G. and Hales, D. (2019) Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge



What are the best journals for publishing re-validations of existing models?

By Annamaria Berea

A few years ago I worked on an ABM that I eventually published in a book. Recently, I conducted new experiments with the same model, re-analysed the data, and used a different dataset to validate the model. Where can I publish this new work on an older model? I submitted it to a special issue of a journal, but it was rejected because “the model was not original”. While the model is not original, the new data analysis and validation are, and I think such work is even more important given the current discussion about the replication crisis in science.


Berea, A. (2019) What are the best journals or publishers for reports of re-validations of existing models? Review of Artificial Societies and Social Simulation, 31st October 2019. https://rofasss.org/2019/10/30/best-journal/