Tag Archives: comment

Designing social simulation to (seriously) support decision-making: COMOKIT, an agent-based modelling toolkit to analyse and compare the impacts of public health interventions against COVID-19

By Alexis Drogoul1, Patrick Taillandier2, Benoit Gaudou1,3, Marc Choisy4,9, Kevin Chapuis1,5, Quang Nghi Huynh1,6, Ngoc Doanh Nguyen1,7, Damien Philippon10, Arthur Brugière1, and Pierre Larmande8

1 UMI 209, UMMISCO, IRD, Sorbonne Université, Bondy, France. 2 UR 875, MIAT, INRAE, Toulouse University, Castanet Tolosan, France. 3 UMR 5505, IRIT, Université Toulouse 1 Capitole, Toulouse, France. 4 UMR 5290, MIVEGEC, IRD/CNRS/Univ. Montpellier, Montpellier, France. 5 UMR 228, ESPACE-DEV, IRD, Montpellier, France. 6 CICT, Can Tho University, Can Tho, Vietnam. 7 MSLab / WARM, Thuyloi University, Hanoi, Vietnam. 8 UMR 232, DIADE, IRD, Univ. Montpellier, Montpellier, France. 9 OUCRU, Centre for Tropical Medicine, Ho Chi Minh City, Viet Nam. 10 WHO Collaborating Centre for Infectious Disease Epidemiology and Control, School of Public Health, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong Special Administrative Region, China.

(A contribution to the: JASSS-Covid19-Thread)

Less than four months after its emergence in China, the COVID-19 pandemic has spread worldwide. In response to this health crisis, unprecedented in modern history, researchers have mobilized to produce knowledge and models to inform and support public decision-making, sometimes in real time (Adam, 2020). However, the social modelling community faces two challenges in this endeavour: the first is its capacity to provide robust scientific knowledge and to translate it into evidence on concrete cases (and not only general principles) within a short time frame; the second is to do so knowing (and anticipating) that this evidence may have concrete social, economic or clinical impacts in the "real" world.

These two challenges require the design of realistic models that provide what B. Edmonds, in response to (Squazzoni et al. 2020), calls the "empirical grounding and validation needed to reliably support policy making" (Edmonds, 2020); in other words, spatially explicit, demographically realistic, data-driven models that can be fed with both quantitative and qualitative (behavioural) data, and that can easily be run across huge numbers of scenarios so as to provide statistically sound results and evidence.

It is difficult to deny these requirements, but it is easier said than done. What we have witnessed instead, over these last four months, is an explosion of agent-based toy models representing, ad nauseam, the spread of the virus or similar dynamics within artificial populations without space, without behaviours, without friend or family relations, without social networks, without even remotely realistic activities or mobility schemes; in short, populations of artificial agents devoid of everything that makes a human population slightly different from a mixture of homogeneous particles. How we, as a community, can claim to inform policy makers, in such a critical context, with such abstract and simplistic constructions is difficult to justify. Are public health decision makers really that interested, these days, in models that help them understand the general principles, inner mechanisms or hidden dynamics of this crisis? Or would they feel better supported if we could answer their questions about which interventions, at which place, at which spatial and temporal scale and on which populations, would have the best impact on the pandemic?

We tend to forget, however, that agent-based modelling (ABM), among other benefits, does not force a choice between these two objectives when building a model. And from the outset of the crisis, many of us were quick to advocate a modelling approach that would:

  • Be as close as possible to public decision-making by being able to answer concrete, practical questions;
  • Be based on a detailed and realistic representation of space, as the spread of the epidemic is spatial and public health policies are also predominantly spatial (containment, social distancing, reduction of mobility, etc.);
  • Rely on spatial and social data that can be collected easily and, above all, quickly, and not be too dependent on the availability of large datasets (which may not be open or shared depending on the country of intervention);
  • Make it possible to represent as faithfully as possible the complexity of the social and ecological environments in which the pandemic is spreading;
  • Be generic, flexible and applicable to any case study, but also trustable as it relies on inner mechanisms that can be isolated and validated separately;
  • Be open and modular enough to support the cooperation of researchers across different disciplines while relying on rigorous scientific and computational principles;
  • Offer easy access to large-scale experimentation and statistical validation by facilitating the exploration of its parameters.

This approach is currently being implemented by an interdisciplinary group of modellers, all signatories of this response, who have started to design and implement on the GAMA platform a generic model called COMOKIT, around which they now wish to gather the maximum number of modellers and researchers in epidemiology and social sciences. Being generic here means that COMOKIT is portable to almost any case study imaginable, from small towns to provinces or even countries, the only real limit to its application being the available RAM and computing power[1].

COMOKIT is an integrated model that, in its simplest incarnation, dynamically combines five sub-models:

  1. a sub-model of the individual clinical dynamics and epidemiological status of agents,
  2. a sub-model of agent-to-agent direct transmission of the infection,
  3. a sub-model of environmental transmission through the built environment,
  4. a sub-model of policy design and implementation,
  5. an agenda-based model of people's activities at a one-hour time step.
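To make the interplay concrete, here is a minimal sketch of how five such sub-models might be composed in a single hourly loop. It is written in Python rather than GAMA's GAML, and every name, parameter value and rule in it (the 9-to-17 work agenda, the per-hour transmission probability, the 14-day recovery) is an illustrative assumption, not COMOKIT's actual implementation:

```python
import random

random.seed(42)

# Illustrative composition of COMOKIT-style sub-models: agendas drive
# co-location, co-location drives transmission, infection drives clinical
# state, and a policy acts only by constraining activities.

class Agent:
    def __init__(self, home, workplace):
        self.home, self.workplace = home, workplace
        self.state = "S"          # S(usceptible), I(nfected), or R(ecovered)
        self.hours_infected = 0

def agenda(agent, hour, policy_active):
    # Activity sub-model: go to work 9-17 unless a lockdown policy forbids it.
    if 9 <= hour < 17 and not policy_active:
        return agent.workplace
    return agent.home

def step_hour(agents, hour, policy_active, p_transmit=0.05):
    # Group agents by the building their agenda sends them to.
    buildings = {}
    for a in agents:
        buildings.setdefault(agenda(a, hour, policy_active), []).append(a)
    # Direct-transmission sub-model: exposure within a shared building.
    for occupants in buildings.values():
        if any(a.state == "I" for a in occupants):
            for a in occupants:
                if a.state == "S" and random.random() < p_transmit:
                    a.state = "I"
    # Clinical sub-model: recover after 14 simulated days.
    for a in agents:
        if a.state == "I":
            a.hours_infected += 1
            if a.hours_infected >= 14 * 24:
                a.state = "R"

def run(n_agents=200, days=60, lockdown=False):
    # Households of 4, workplaces of 20 (ids chosen not to collide).
    agents = [Agent(home=i // 4, workplace=100 + i % 10) for i in range(n_agents)]
    agents[0].state = "I"  # index case
    for t in range(days * 24):
        step_hour(agents, hour=t % 24, policy_active=lockdown)
    return sum(a.state != "S" for a in agents)  # total ever infected

# A full lockdown confines spread to the index case's household.
assert run(lockdown=True) <= run(lockdown=False)
```

The point of the sketch is the coupling, not the numbers: the activity sub-model decides co-location, co-location drives transmission, and the policy sub-model intervenes only by constraining activities.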

It can, of course, represent heterogeneity in individual characteristics (sex, age, household), agendas (depending on social structures, available services or age categories), social relationships and behaviours (e.g. compliance with regulations).

COMOKIT has been designed to be modular enough to allow modellers and users to represent different strategies and study their impacts in multiple scenarios. Using the experimental features provided by the underlying GAMA platform (Taillandier et al. 2019), such as advanced visualization, multi-simulation, batch experiments, and easy large-scale exploration of parameter spaces on HPC infrastructures, it is particularly easy and effective to compare the outcomes of these strategies. Modularity is also key to facilitating its adoption by other modellers and users: COMOKIT is a basis that can very easily be extended (to new policies, people's activities, actors, spatial features, etc.). For instance, more detailed socio-psychological models, like the ones described in ASSOCC (Ghorbani et al. 2020), could be interesting to test within realistic models. In that respect, COMOKIT is both a framework (for deriving new concrete models) and a model (that can be instantiated by itself on arbitrary datasets).

Finally, COMOKIT has been designed to be incrementally expandable: because of the urgency usually associated with its use, it can be instantiated on new case studies in a matter of minutes, by generating the built environment of an area and its synthetic population from a simple geolocalised boundary and reasonable defaults (which can of course be parametrized or even, in the case of population generation, driven by a plugin called Gen* (Chapuis et al. 2018)). When more detailed data becomes available (about the population, people's occupations, economic activities, public health policies, …), the same model can be fed with it in order to refine its initial outcomes.
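As a rough illustration of the "reasonable defaults" idea (in Python, with an entirely made-up default age pyramid and mean household size; COMOKIT and Gen* use far richer generation procedures), a synthetic population can be bootstrapped from nothing more than a head count and refined later when census data arrives:

```python
import random

random.seed(0)

# Hypothetical default age pyramid: bracket -> share of the population.
DEFAULT_PYRAMID = {(0, 14): 0.25, (15, 64): 0.65, (65, 99): 0.10}

def synthetic_population(n_people, pyramid=DEFAULT_PYRAMID, mean_household=4):
    """Draw ages from the pyramid and group people into households."""
    brackets = list(pyramid.keys())
    weights = list(pyramid.values())
    people = []
    household = 0
    for _ in range(n_people):
        lo, hi = random.choices(brackets, weights=weights)[0]
        people.append({"age": random.randint(lo, hi), "household": household})
        # Geometric household sizes with the requested mean.
        if random.random() < 1 / mean_household:
            household += 1
    return people

pop = synthetic_population(1000)
assert len(pop) == 1000
```

Swapping `DEFAULT_PYRAMID` for real census shares is exactly the kind of later refinement the text describes.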


Figure 1. A screenshot of the experiments' UI in COMOKIT: six scenarios of partial confinement are compared with respect to the number of cases during and after a three-month-long period. Son Loi case study, 9,988 inhabitants from the 2019 Vietnamese census.

Up to now, COMOKIT has been implemented and evaluated on two cases of city confinement in Vietnam (Son Loi (Thanh et al. 2020) and Thua Duc). These cases have served as testbeds to verify the correctness of the individual sub-models and their interactions, and we have used them to compare the impacts of a number of social-distancing strategies (e.g. varying the ratio of the population allowed to move outside, for various durations, over various geographical extents, by activity, and so on) and of other non-pharmaceutical interventions, such as advising the population to wear masks or closing schools and public places. These studies have shown in particular that the process of ending an intervention is as impactful as the process of starting it, notably with respect to avoiding a second epidemic wave.

We need you: social scientists, epidemiologists, modellers, computer scientists, web designers…

As the epidemic moves to countries with more limited health infrastructure and economic resources, it becomes critical to devise, test and compare original public interventions adapted to these constraints, for instance interventions that are more geographically and socially targeted than a lockdown of the entire population. COMOKIT, which has been used since the beginning of April 2020 within the Rapid Response Team of the Steering Committee against COVID-19 of the Ministry of Health in Vietnam, can become an invaluable help in this endeavour. However, it must become even more realistic, reliable and robust than it is at present, so that decision-makers can build a relationship of trust with this new tool, and hopefully with agent-based modelling in general.

All the documentation (with a complete ODD description and UML diagrams), the commented source code (of the models and utilities), and five example datasets are made available on the project's webpage and GitHub repository to be shared, reused and adapted to other case studies. We strongly encourage anyone interested to try COMOKIT, apply it to their own case studies, improve it by adding new policies, activities, agents or scenarios, and share their studies, proposals and results. Any help will be appreciated to show that we can collectively contribute, as a community, to the fight against this pandemic (and maybe the next ones): analysing the sub-models, documenting them, providing access to data, fixing bugs, adding new sub-models, testing their integration, offering HPC infrastructures to run large-scale experiments; everything can be helpful!


[1] To give a very rough idea, it takes approximately 15 minutes and 800 MB of RAM on one core of a laptop to simulate 6 months of a town of 10,000 inhabitants at a 1-hour time step, while displaying a 3D view and charts.


Adam, D. (2020). Special report: The simulations driving the world’s response to COVID-19. Nature. doi:10.1038/d41586-020-01003-6

Chapuis, K., Taillandier, P., Renaud, M., & Drogoul, A. (2018). Gen*: a generic toolkit to generate spatially explicit synthetic populations. International Journal of Geographical Information Science, 32(6), 1194-1210. doi:10.1080/13658816.2018.1440563

Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/

Ghorbani, A., Lorig, F., de Bruin, B., Davidsson, P., Dignum, F., Dignum, V., van der Hurk, M., Jensen, M., Kammler, C., Kreulen, K., Ludescher, L. G., Melchior, A., Mellema, R., Păstrăv, C., Vanhée, L. and Verhagen, H. (2020) The ASSOCC Simulation Model: A Response to the Community Call for the COVID-19 Pandemic. Review of Artificial Societies and Social Simulation, 25th April 2020. https://rofasss.org/2020/04/25/the-assocc-simulation-model/

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Taillandier, P., Gaudou, B., Grignard, A., Huynh, Q.N., Marilleau, N., Caillou, P., Philippon, D., Drogoul, A. (2019) Building, Composing and Experimenting Complex Spatial Models with the GAMA Platform. GeoInformatica 23, 299-322. doi:10.1007/s10707-018-00339-6

Thanh, H. N., Van, T. N., Thu, H. N. T., Van, B. N., Thanh, B. D., Thu, H. P. T., … & Nguyen, T. A. (2020). Outbreak investigation for COVID-19 in northern Vietnam. The Lancet Infectious Diseases. doi:10.1016/S1473-3099(20)30159-6

Drogoul, A., Taillandier, P., Gaudou, B., Choisy, M., Chapuis, K., Huynh, N. Q., Nguyen, N. D., Philippon, D., Brugière, A., and Larmande, P. (2020) Designing social simulation to (seriously) support decision-making: COMOKIT, an agent-based modelling toolkit to analyze and compare the impacts of public health interventions against COVID-19. Review of Artificial Societies and Social Simulation, 27th April 2020. https://rofasss.org/2020/04/27/comokit/


The ASSOCC Simulation Model: A Response to the Community Call for the COVID-19 Pandemic

Amineh Ghorbani1, Fabian Lorig2, Bart de Bruin1, Paul Davidsson2, Frank Dignum3, Virginia Dignum3, Mijke van der Hurk4, Maarten Jensen3, Christian Kammler3, Kurt Kreulen1, Luis Gustavo Ludescher3, Alexander Melchior4, René Mellema3, Cezara Păstrăv3, Loïs Vanhée5, and Harko Verhagen6

1 TU Delft, Netherlands, 2 Malmö University, Sweden, 3 Umeå University, Sweden, 4 Utrecht University, Netherlands, 5 University of Caen, France, 6 Stockholm University, Sweden

(A contribution to the: JASSS-Covid19-Thread)

Abstract: This article is a response to the call for action to the social simulation community to contribute to research on the COVID-19 pandemic crisis. We introduce the ASSOCC model (Agent-based Social Simulation for the COVID-19 Crisis), a model that has specifically been designed and implemented to address the societal challenges of this pandemic. We reflect on how the model addresses many of the challenges raised in the call for action. We conclude by pointing out that the social simulation community should focus its efforts less on data- and prediction-based simulations, and more on the explanation of mechanisms and the exploration of social dependencies and the impact of interventions.


The COVID-19 crisis is a pandemic that is currently spreading all over the world. It has already taken a dramatic toll on humanity, affecting the daily life of billions of people and causing a global economic crisis with deficits and unemployment rates never experienced before. Decision makers as well as the general public are in dire need of support to understand the mechanisms and connections in the ongoing crisis, and support for potentially life-threatening and far-reaching decisions that must be made with unknown consequences. Many countries and regions are struggling to deal with the impacts of the COVID-19 crisis on healthcare, the economy and the social well-being of communities, resulting in many different interventions. Examples are the complete lockdown of cities and countries, appeals to the individual responsibility of citizens, and suggestions to use digital technology for tracking and tracing the spread of the disease. All these strategies require considerable behavioural changes by all individuals.

In such an unprecedented situation, agent-based social simulation seems to be a very suitable technique for achieving a better understanding of the situation and for providing decision-making support. Most available simulations of pandemics focus either on specific aspects of the crisis, such as epidemiology (Chang et al., 2020), or on simplified, aggregated mechanics (e.g., IndiaSIM). Many models repurposing existing models originally developed for other pandemics, such as influenza, are mostly illustrative and intended for theory exposition (Squazzoni et al., 2020). Although current simulations are based on advanced statistical modelling that enables sound predictions of specific aspects of the disease, they use very limited models of human motives and cultural differences. Yet, understanding the possible consequences of drastic policy measures requires more than statistical analysis of quantities such as R0 (the basic reproduction number, i.e. the expected number of cases directly generated by one case in a population) or economic variables. Measures impact people, and thus need to consider individuals' needs (e.g., affiliation, control, or self-fulfilment), social networks (norms, relationships), and how these attributes and conditions can quickly change during difficult situations (e.g., need for job and food security, overloaded hospitals, loss of relatives).
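For readers unfamiliar with the notation, R0 ties together the transmission rate and the infectious period: in the simplest SIR-type formulation, R0 = beta / gamma, so a target R0 and an assumed infectious period pin down the transmission parameter. A tiny Python illustration (the numbers are arbitrary, not estimates for COVID-19):

```python
def beta_for_r0(r0_target, infectious_days):
    """In a simple SIR model, R0 = beta / gamma, with gamma the recovery
    rate (1 / infectious period). Invert that to get beta."""
    gamma = 1.0 / infectious_days
    return r0_target * gamma

# An illustrative R0 of 2.5 with a 10-day infectious period
# implies a transmission rate of 0.25 per day.
beta = beta_for_r0(r0_target=2.5, infectious_days=10)
assert abs(beta - 0.25) < 1e-12
```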

In this context, we have developed ASSOCC (Agent-based Social Simulation for the COVID-19 Crisis; see Figure 1) as a many-faceted observatory of scenarios. In ASSOCC, we connect the many aspects involved in a cohesive simulation, helping stakeholders raise their general awareness of all critical aspects of the problem, and especially of the dependencies between them. Of course, one can hardly cover a large variety of aspects and have very complete models of each of them; thus, we strike a balance between the broadness of the model and its accuracy on each aspect. This simulation delivers a perspective complementary to state-of-the-art disciplinary models. Where most other simulations offer sharp yet isolated pieces of the image, our approach is valuable for combining the pieces of the puzzle, since a specific modelling focus can limit the space for debate (ní Aodha & Edmonds, 2017).

The ASSOCC approach puts human behaviour at the centre, as a linking pin between many disciplines and aspects: psychology (needs, values, beliefs, plans), social sensitivity (norms, social networks, work relationships), infrastructure (transportation, supplies), epidemiology (spreading), economy (transactions, bankruptcy), cultural influences and public measures (closing activities, lockdown, social distancing, testing). The already complex model is extended on a daily basis. This is done in a largely modular fashion, such that specific aspects can be switched on and off during the runs. This leads to some limitations and also requires re-calibration of variables, but overall it seems worth the effort when looking at the first results of the scenarios we have simulated.

In this article, we aim to share our approach to simulating the COVID-19 pandemic, outline how the building and use of ASSOCC take up a number of the challenges posed in (Squazzoni et al., 2020), and emphasize the potential of agent-based simulation as a method for mastering pandemics.

Figure 1: A screenshot of the Graphical User Interface of the ASSOCC simulation


Introducing the ASSOCC Model

The goal of the ASSOCC simulation model is to integrate different parts of our daily life that are affected by the pandemic, in order to support decision makers when trading off different policies against each other. It facilitates the identification of potential interdependencies that might exist and need to be addressed. This is important as different countries, cultures and populations affect the suitability and consequences of measures, thus requiring local conditions to be taken into account. The model allows stakeholders to study individual and social reactions to different policies, to explore different scenarios, and to analyse their potential effects.

Figure 2: A screenshot of the base simulation model.


How it works

The ASSOCC simulation model is based on a synthetic population that consists of a set of artificial individuals (see Figure 1), each with given needs, demographic characteristics, and attitudes towards regulations and risks. By having all these agents decide over time what they should be doing, we can analyse their reactions to many different policies, such as a total lockdown or voluntary isolation. Agents can move, perceive other agents, and decide on their actions based on their individual characteristics and their perception of the environment. The environment constrains the physical actions of the agents but can also impose norms and regulations on their behaviour. Through interaction, agents can take over characteristics from other agents, such as becoming infected with COVID-19 or receiving information.


In the ASSOCC model, there are four types of agents: children, students, workers, and retirees. These types represent different age groups with different socio-demographic attributes, common activities, infection risks and behaviours. Each agent has a health status that represents whether they are infected, symptomatically or asymptomatically contagious, or in a critical state. Moreover, agents have needs and capabilities as well as personal characteristics such as risk aversion and the propensity to follow the law. An agent's needs include health, wealth and belonging; they are modelled using the water tank model introduced by Dörner et al. (2006). Agent capabilities capture, for instance, their jobs or family situations. Agents need a minimum wealth value to survive, which they obtain by working or through subsidies (or by living with a working agent). In shops and workplaces, agents trade wealth for products and services. Agents pay tax to a central government, which then uses this money for subsidies and for the maintenance of public services such as hospitals and schools.
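The water tank metaphor can be sketched in a few lines of Python (rather than NetLogo, in which ASSOCC is written): each need is a tank that drains over time and is refilled by a matching activity, and the agent attends to whichever tank is emptiest. The need names below follow the text, but the decay and refill constants and the need-to-activity mapping are invented for illustration:

```python
# Hypothetical drain rate per time step for each need ("tank").
DECAY = {"health": 0.01, "wealth": 0.03, "belonging": 0.05}
# Hypothetical mapping from activities to the need they replenish.
REFILL = {"rest": "health", "work": "wealth", "socialise": "belonging"}

def tick(tanks):
    # All tanks drain every step, bounded below by zero.
    return {need: max(0.0, level - DECAY[need]) for need, level in tanks.items()}

def choose_activity(tanks):
    # The most depleted tank determines the next activity.
    neediest = min(tanks, key=tanks.get)
    return next(act for act, need in REFILL.items() if need == neediest)

def do(tanks, activity, amount=0.2):
    # Performing an activity partially refills the matching tank, capped at 1.
    need = REFILL[activity]
    return {**tanks, need: min(1.0, tanks[need] + amount)}

tanks = {"health": 1.0, "wealth": 1.0, "belonging": 1.0}
for _ in range(50):
    tanks = tick(tanks)
    tanks = do(tanks, choose_activity(tanks))
assert all(0.0 <= v <= 1.0 for v in tanks.values())
```

Because "belonging" drains fastest here, the agent ends up socialising most often, which is the qualitative behaviour the tank model is meant to produce.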


During the simulation, agents can move between different places according to their needs and obligations. Places represent homes, shops, hospitals, workplaces, schools, airports and stations. By assigning agents to homes, different households can be represented: single adults, families, retirement homes, and multi-generational households with children, adults and elderly people. The configuration of households is assumed to have an impact on the spreading of COVID-19, and great differences in household configurations exist between countries. Thus, the distribution of these households can be set in the simulation to analyse the situation in different cities or countries.


Policies describe interventions that can be taken by decision makers, such as social distancing, infection and immunity testing, or the closing of schools and workplaces. Policies have complex effects on the health, wealth and well-being of all agents, and can be extended in many different ways to provide an experimentation environment for decision makers. It is not only the decision of whether or not to implement a certain policy, but also the point in time at which it is implemented, that influences its success.
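The last point, that timing matters as much as content, suggests representing a policy as a time interval plus a set of constraints. A minimal Python sketch (the class and field names are hypothetical, not taken from the ASSOCC code):

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """A hypothetical intervention: active over [start_day, end_day),
    it forbids access to certain kinds of places."""
    name: str
    start_day: int
    end_day: int
    closed_places: set = field(default_factory=set)

    def allows(self, day, place_kind):
        active = self.start_day <= day < self.end_day
        return not (active and place_kind in self.closed_places)

school_closure = Policy("close schools", start_day=10, end_day=40,
                        closed_places={"school"})

assert school_closure.allows(5, "school")       # before it starts
assert not school_closure.allows(20, "school")  # while active
assert school_closure.allows(20, "shop")        # shops unaffected
assert school_closure.allows(45, "school")      # after it ends
```

Shifting `start_day` while keeping everything else fixed is then the natural experiment for studying how the timing of a policy influences its success.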

Conceptual Design

The ASSOCC model has been conceptualized based on many theories from various scientific disciplines, including psychology (basic motives and needs (McClelland, 1987; Jerome, 2013)), sociology (the Schwartz value system (Schwartz, 2012)), culture (Hofstede's cultural dimensions (Hofstede et al., 2010)), economy (the circular flow of income (Murphy, 1993)), and epidemiology (the SEIR model (Cope et al., 2018)). For the disease model, we looked at the following sources: a case study of the time course of COVID-19 (Xu et al., 2020), a cohort study showing the general time course of the disease with and without fatality (Zhou et al., 2020), and the incubation period determined from confirmed cases (Lauer et al., 2020). This theory-driven model determines the reaction of agents to policies and to their physical and social context.
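For reference, the SEIR model cited above moves individuals through Susceptible, Exposed, Infectious and Recovered compartments. A minimal discrete-time sketch in Python (the parameter values are illustrative defaults, not the ones used in ASSOCC):

```python
def seir(days, beta=0.3, sigma=1/5, gamma=1/10, n=10_000, e0=10):
    """Discrete-time SEIR: beta is the transmission rate, sigma the rate of
    becoming infectious (1/incubation period), gamma the recovery rate."""
    s, e, i, r = n - e0, e0, 0, 0
    history = []
    for _ in range(days):
        new_exposed = beta * s * i / n      # S -> E
        new_infectious = sigma * e          # E -> I
        new_recovered = gamma * i           # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((s, e, i, r))
    return history

h = seir(200)
# The four compartments always sum to the population size.
assert all(abs(sum(state) - 10_000) < 1e-6 for state in h)
```

What an agent-based model like ASSOCC adds on top of such compartmental dynamics is precisely the heterogeneity the text argues for: who meets whom, where, and why.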

A short description of the conceptual architecture of ASSOCC as well as an overview of the agent architecture are available at the project website.


The simulation is built in NetLogo (see Figure 2) with a visual interface in Unity (see Figure 1). The NetLogo model can be used as a standalone simulation model. For the scenarios, we use the Unity interface for better visualisation of the simulation. The complete source code is available on GitHub under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Note that, at the time of publication of this article, this is still a beta version of the model, which we are continuously developing. The complete description of the agent-based model using the ODD protocol can also be found on the ASSOCC website.

Addressing Key Challenges

Having explained the ASSOCC framework, in this section we explain how our modelling effort addresses the challenges raised by Squazzoni et al. (2020).

Like any model, the ASSOCC model cannot be a complete representation of reality and has its own limitations. Yet, we believe that the dimensions of social complexity we have included provide a promising ground for drawing useful insights. As rightly highlighted by Squazzoni et al. (2020), the quality of a model depends on its purpose, its theoretical assumptions, the level of abstraction, and the quality of data.

The purpose of the ASSOCC model is to illustrate and investigate mechanisms. Through the simulation of scenarios, ASSOCC shows dependencies between human behaviour and the spread of the virus, the economic incentives and the psychological needs of people.

In the next sections we aim to explain how the ASSOCC model addresses the main issues raised in (Squazzoni et al., 2020).

Social Complexity

In order to incorporate the pre-existing behavioural attitudes, network effects, social norms and culture that influence people's responses to policy measures, we have built a cross-disciplinary extended team of researchers. We have spent extra time and effort constructing a complex model in which social complexity is extensively taken into account. As an example, Maslow's theory of individual needs takes the pre-existing behavioural attitudes of individuals into account (Jerome, 2013). By connecting this theory to the Schwartz value dimensions (Schwartz, 2012), and these dimensions to the cultural dimensions of Hofstede (Hofstede et al., 2010), we incorporate a whole spectrum from individual biological and social needs all the way to cultural diversity among nations.

Yet, the limitations of ASSOCC lie in the richness of each of the societal dimensions. We use some rather simple models, for example for the economic, cultural, social network and transport aspects. We document the choices that have been made, indicating which complexities were left out, why they were left out, and why we think this does not affect the validity of our results. For example, in the transport dimension we do not distinguish between cars and bikes. We do not need to, as there are no large distances in the model and both cars and bikes can be used as solo means of transport. We are aware that there are differences, in economic terms and also in values, in choosing between the two means of transport, but these aspects are not very relevant for the spread of the virus.


Although there is pressure on the community to respond to this crisis and to provide expert judgement, we have not sacrificed the complexity of our model, nor its transparency, to provide rapid answers. In fact, we have aimed to make our modelling process as transparent as possible. Starting from the low-level programming code, ASSOCC uses a GitHub repository to make the code publicly available. Besides code documentation, our large-scale model makes use of the ODD protocol to make the model transparent at the conceptual level. Additionally, by building the Unity interface layer on top of the NetLogo model, we aim to connect policy scenarios to the parameter setup of the model, so that policy makers themselves can see how changes to scenarios lead to various outcomes.

By emphasizing that ASSOCC creates simulations of policy scenarios, we step away from recommending a single "best" policy. Rather, we highlight the fundamental questions and priorities that have to be dealt with in choosing among various policies. This is done by showing the consequences of implementing various scenarios and comparing them. This comparison can, for example, show how different groups of people are affected economically and health-wise by a policy. The most appropriate policy thus depends on which outcomes are deemed more desirable.


Given the short time since the outbreak, accurate data on the COVID-19 outbreak suitable for complex agent-based models is not yet available. It is not clear how various cases are defined and how the data is collected. However, in our view, this should not limit our modelling abilities for this much-needed rapid response.

In our view, detailed data is not required to build a useful model. In fact, our model is a 'SimCity' for studying various policy scenarios rather than an actual data-driven representation of cities. While we have made sure, for overall validity, that our model can show patterns similar to those observed in reality, fine-grained data is not included. The data used for the simulation comes from particular epidemiological models, from economic models, and from calibration of the model against known, normal situations.

As illustrated by the models described in (Squazzoni et al., 2020), even models that are calibrated with real-world data fail to capture important aspects such as network effects, as these changes are still based on stochastic, randomized processes. Therefore, being aware that the current data is neither fully available nor reliable, we have built our model on a strong theoretical basis in order to avoid oversimplification of factors that play important roles in this crisis.

Interface between modelling and policy

As highlighted by Squazzoni et al. (2020), "good pandemic models are not always good policy advice models". We fully agree with this point, which is central to our modelling efforts. A user interface has been developed specifically in Unity (see Figure 1) to support comprehension of the model by policy makers and to facilitate experimentation. In the Unity interface, one can explore the different parameters of a scenario, see the results of the simulations in graph form, and follow several aspects live through the elements available in the spatial representation of the town. This spatial interface is meant purely to support a better understanding of the model. We believe that having clarity regarding our modelling goal increases policy makers' trust in our insights.

In addition, we have been in close contact with policy makers around the world to, on the one hand, understand their needs and immediate and long-term concerns, and on the other hand, communicate our model’s capabilities in the most concise manner to support their decisions. To date, we have engaged with policy makers in the Netherlands, Italy and Sweden.

Predictive Power

In our interactions with policy makers and other users, we make clear that the ASSOCC platform is not meant to give detailed predictions, but to support the generation of insights. Such a broad model is best used to indicate dependencies and trends between different aspects of society. Due to the computing power needed for each agent's complex reasoning, it is difficult to scale this type of model to more than a few thousand agents, at least in NetLogo.

The model can be validated through the causal chains that can be followed throughout it: certain outcomes can be linked, through agent states, to certain causes in the environment or in the actions of other agents. If these causal chains can be interpreted as plausible stories that are confirmed by the theories of the respective aspects, a certain type of high-level validation is achieved. This is not validation against data, but validation based on expert opinion.

A second type of validation that can be done on this type of ABM is a detailed comparison with established epidemiological models. For instance, we are comparing our simulation with the one used in (Ferretti et al., 2020) in a particular scenario investigating the effect of tracking-and-tracing apps. By translating the assumptions and parameters very carefully into ASSOCC parameters and comparing the resulting simulations, we can validate the underlying models against more traditional ones, and also surface deviations that highlight advantages or lacunae of the ASSOCC model. The results of this comparison will be published jointly by the two groups. Finally, we are calibrating ASSOCC parameters against statistical data, such as R0, the number of deaths, and demographic data, as a means to improve validity.
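To make the calibration step concrete, here is a minimal sketch, in Python rather than NetLogo and entirely invented (it is not the ASSOCC code): a toy SIR model’s transmission rate is recovered by grid search against a series of observed cumulative deaths. All function names and parameter values are illustrative assumptions.

```python
# Toy illustration of calibrating a model parameter against observed statistics.
# This is NOT the ASSOCC code: the model, data and parameter names are invented.

def sir_deaths(beta, gamma=0.1, ifr=0.01, n=10_000, i0=10, days=60):
    """Discrete-time SIR; returns cumulative deaths per day."""
    s, i, r = n - i0, i0, 0.0
    deaths = []
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        deaths.append(r * ifr)  # fixed infection-fatality ratio
    return deaths

def calibrate(observed, betas):
    """Grid search: the beta minimising squared error against observations."""
    def sse(beta):
        return sum((a - b) ** 2 for a, b in zip(sir_deaths(beta), observed))
    return min(betas, key=sse)

# Synthetic "observed" deaths generated with beta = 0.3, then recovered.
observed = sir_deaths(0.3)
grid = [b / 100 for b in range(10, 60)]
best = calibrate(observed, grid)
print(best)  # prints 0.3, the generating value recovered exactly
```

In practice one would calibrate several parameters jointly against real surveillance data; the sketch only shows the shape of the procedure.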


In this article, we presented the ASSOCC model as a comprehensive modelling endeavour that aims to contribute to the efforts to manage the COVID-19 crisis. By modelling multiple aspects of society and interrelating them, we provide insights into the underlying societal mechanisms that are influenced both by the outbreak and by the policy measures that aim to control it.

Being aware of the challenges, we have aimed to include as much social complexity as possible in the model to avoid biases and oversimplification. At the same time, by being in close contact with policy makers around the world, we have taken the actual needs and considerations into account, while providing a traceable, usable and comprehensible user interface that brings the modelling insights within the reach of policy makers. In our modelling efforts, we have paid extra attention to transparency, providing well-documented and open-source code that can be used by the rest of the simulation community.

All the assumptions, underlying theories and source code of ASSOCC are available on the project website and on GitHub. We invite people to use the model and give feedback; based on this feedback, we will continuously improve the model and its parameters. New scenarios will also be added as the pandemic and the surrounding discussion develop.

We hope that the ASSOCC model can contribute to handling this crisis in a way that shows the capabilities and usefulness of agent-based modelling.


Chang, S. L., Harding, N., Zachreson, C., Cliff, O. M. & Prokopenko, M. (2020). Modelling transmission and control of the COVID-19 pandemic in Australia. arXiv preprint arXiv:2003.10218 <https://arxiv.org/abs/2003.10218>

Cope, R. C., Ross, J. V., Chilver, M., Stocks, N. P., & Mitchell, L. (2018). Characterising seasonal influenza epidemiology using primary care surveillance data. PLoS computational biology, 14(8), e1006377. doi:10.1371/journal.pcbi.1006377

Dörner, D., Gerdes, J., Mayer, M., & Misra, S. (2006, April). A simulation of cognitive and emotional effects of overcrowding. In Proceedings of the Seventh International Conference on Cognitive Modeling (pp. 92-98). Triest, Italy: Edizioni Goliardiche.

Ferretti, L., Wymant, C., Kendall, M., Zhao, L., Nurtay, A., Abeler-Dörner, L., Parker, M., Bonsall, D. & Fraser, C. (2020). Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science, 31 Mar 2020: eabb6936. doi:10.1126/science.abb6936

Hofstede, G., Hofstede, G. J. & Minkov, M. (2010). Cultures and organizations: Software of the mind. revised and expanded 3rd edition. N.-Y.: McGraw-Hill.

Jerome, N. (2013). Application of the Maslow’s hierarchy of need theory; impacts and implications on organizational culture, human resource and employee’s performance. International Journal of Business and Management Invention, 2(3), 39–45.

Lauer, S. A., Grantz, K. H., Bi, Q., Jones, F. K., Zheng, Q., Meredith, H. R., Azman, A. S., Reich, N. G. & Lessler, J. (2020). The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Annals of Internal Medicine.

McClelland, D. (1987). Human Motivation. Cambridge University Press.

Murphy, A. E. (1993). John Law and Richard Cantillon on the circular flow of income. The European Journal of the History of Economic Thought, 1(1), 47–62.

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822.

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Xu, Z., Shi, L., Wang, Y., Zhang, J., Huang, L., Zhang, C., Liu, S., Zhao, P., Liu, H., Zhu, L. et al. (2020). Pathological findings of COVID-19 associated with acute respiratory distress syndrome. The Lancet Respiratory Medicine, 8(4), 420–422.

Zhou, F., Yu, T., Du, R., Fan, G., Liu, Y., Liu, Z., Xiang, J., Wang, Y., Song, B., Gu, X. et al. (2020). Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. The Lancet, 395(10229), 1054-1062. doi:10.1016/S0140-6736(20)30566-3

Ghorbani, A., Lorig, F., de Bruin, B., Davidsson, P., Dignum, F., Dignum, V., van der Hurk, M., Jensen, M., Kammler, C., Kreulen, K., Ludescher, L. G., Melchior, A., Mellema, R., Păstrăv, C., Vanhée, L. and Verhagen, H. (2020) The ASSOCC Simulation Model: A Response to the Community Call for the COVID-19 Pandemic. Review of Artificial Societies and Social Simulation, 25th April 2020. https://rofasss.org/2020/04/25/the-assocc-simulation-model/


Sound behavioural theories, not data, is what makes computational models useful

By Umberto Gostoli and Eric Silverman

(A contribution to the: JASSS-Covid19-Thread)

The paper “Computational Models that Matter During a Global Pandemic Outbreak: A Call to Action” by Squazzoni et al. (2020) is a valuable contribution to the ongoing self-reflection in the social simulation community regarding the role of ABM in the broader social-scientific enterprise. In this paper the authors try to assess the potential capacity of ABM to provide policy makers with a tool allowing them to predict the evolution of the pandemic and the effects of alternative policy responses. Their conclusions suggest a role for computational modelling during the pandemic, but also have implications regarding the position of ABM within the scientific and policy arenas, and its added value relative to other methodologies of scientific inquiry.

We agree with the authors that ABM has an important (and urgent) role to play in helping policy makers take more informed decisions, provided that the models are based on reliable and robust theories of human behaviour and social interaction. However, following in the footsteps of Joshua Epstein (2008), we claim that the importance and relevance of ABM go beyond the capacity of the models to make point predictions (i.e. of the form ‘There will be X infections/deaths in Y days’ time’). We propose that the ability of ABM to develop, inform, and test relevant theory is of particular relevance during this global crisis.

This does not mean that additional data allowing for the models’ calibration and validation are not important, as they can certainly help reduce the uncertainty associated with the models’ outputs, but in our view they are not essential to what agent-based models have to offer. With that in mind, the lack of these data should not prevent the ABM community from participating in the mass mobilization of the scientific community, which is working at unprecedented speed to develop models to inform the vital policy decisions being taken during this pandemic.

As we argue in a recent position paper (Silverman et al. 2020), it is precisely when we have limited data, or no data at all, that simulations provide greater value than traditional methodologies like statistical inference; indeed, the less data we have, the more important the role that agent-based (and other computational) simulations have to play. Computational models provide a way to say something about the evolution of complex systems by delimiting the set of possible outcomes through the constraints imposed by the theoretical framework encoded in the model. When we find ourselves in new situations such as the Covid-19 pandemic, where the data (i.e., our past experience) cannot give us any clue about the future evolution of the system, theories become the only tool we have for making educated guesses about what could (and could not) possibly happen. Models of complex systems typically have hundreds, if not thousands, of parameters, many of which have unknown values, and some of which have values we cannot know. If we waited for the data needed to make point predictions, we would never have a say in the policy arena; and if such data were available, other methodologies would probably serve the purpose better than computational models. Delimiting and quantifying the uncertainty associated with future scenarios in the face of limited data is where computational models can make a vital contribution, as they can give policy makers useful information for risk management.
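This use of theory to delimit outcomes can be sketched as follows, using a hypothetical toy model (not any of the models discussed here): the unknown parameters are sampled from theory-given ranges, the model is run for each draw, and the result is a range of outcomes rather than a point prediction.

```python
import random

# Hypothetical toy epidemic model: final attack rate of a discrete-time SIR.
def final_attack_rate(beta, gamma, n=10_000, i0=10, days=365):
    s, i, r = n - i0, i0, 0.0
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return (n - s) / n  # fraction of the population ever infected

# Theory constrains the parameters to ranges, not point values.
random.seed(1)
outcomes = []
for _ in range(500):
    beta = random.uniform(0.15, 0.40)   # plausible transmission rates (assumed)
    gamma = random.uniform(0.08, 0.12)  # plausible recovery rates (assumed)
    outcomes.append(final_attack_rate(beta, gamma))

lo, hi = min(outcomes), max(outcomes)
print(f"final attack rate lies between {lo:.2f} and {hi:.2f}")
```

The reported interval is the kind of bounded, theory-constrained statement a policy maker can use for risk management even when no calibration data exist.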

By no means are we saying that the development and effective deployment of computational models is without challenges. But we claim that the main challenge lies in the identification and inclusion of sound behavioural theories, as the outputs we get will depend upon the reliability of our models’ theoretical input. Identifying such theories is a significant challenge, requiring theoretical contributions from a number of different fields, ranging from epidemiology and urban studies to sociology and economics.

Further, putting scholars from those disciplines into the same room will not be sufficient; we must create a multidisciplinary community of people sharing the same conceptual framework, an endeavour that takes a lot of dedication, perseverance and, crucially, time. The lack of such multidisciplinary research groups strongly limits the ABM community’s capacity to develop an effective computational model of the pandemic, and we hope that at least this crisis will prove that developing such a community is necessary to improve our capacity for a timely response to the next one.

In relation to this challenge, we are aiming to develop and support a global community of agent-based modellers focused on population health concerns, via the PHASE Network project funded by the UK Prevention Research Partnership. We urge readers to join the network via our website at https://phasenetwork.org/, and help us build a multidisciplinary health modelling community that can contribute to global efforts in improving health both during and after the Covid-19 pandemic.

We must also remember that the current crisis is very unlikely to be over quickly, and its longer-term effects on society will be substantial. At the time of writing more than 80 separate groups and institutions are embarking on efforts to build a vaccine for the coronavirus, but even with such concerted efforts there are no guarantees that a vaccine will be found. As Kissler et al. have shown, even if the virus appears to abate, further waves of infections could arise years afterwards (Kissler et al. 2020). Because of the resources and time it takes to develop theoretically sound computational models, in our view this methodology is better suited to address these longer-term questions of how society can reorganize itself to increase resilience against future pandemics – and here the ability of computational models to implement and test behavioural theories is of paramount importance. The questions that must be asked in the years to come are numerous and profound: How can the world of work change to be more robust to future crises and global shut-downs? Can welfare policies like universal basic income help prevent widespread economic devastation in future crises? How must our health and care systems evolve to better protect the most vulnerable in society?

We propose that computational models can make a particularly valuable contribution in this area. At the present time there is ample evidence of the disastrous effects of delayed or insufficient policy responses to a pandemic. Economic projections already suggest we are due to enter a post-pandemic collapse to rival the Great Depression. We can, and should, begin to develop theories and models about how we may adjust society for the post-Covid world. Models could be valuable tools for testing and developing ambitious socio-economic policy ideas in silico, in order to prepare for this new reality.

To conclude, in principle we share with the authors of the paper the belief that computational models have an important role to play in informing policy makers during crises (such as pandemics). However, we wish to place the emphasis on the need for sound and robust theoretical frameworks ready to be included in these models, rather than on the existence and availability of data. In practice, it is the lack of such frameworks, more than the lack of data, that most threatens the computational modelling community’s ability to make a useful contribution during this pandemic.


Epstein, J. M. (2008) Why model? Journal of Artificial Societies and Social Simulation, 11(4):12. <http://jasss.soc.surrey.ac.uk/11/4/12.html>.

Kissler, S. M., Tedijanto, C., Goldstein, E., Grad, Y. H. and Lipsitch, M. (2020) Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period. Science. doi:10.1126/science.abb5793. <https://science.sciencemag.org/content/early/2020/04/14/science.abb5793>

Silverman, E., Gostoli, U., Picascia, S., Almagor, J., McCann, M., Shaw, R., & Angione, C. (2020). Situating Agent-Based Modelling in Population Health Research. arXiv preprint arXiv:2002.02345. <https://arxiv.org/abs/2002.02345>

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Gostoli, U. and Silverman, E. (2020) Sound behavioural theories, not data, is what makes computational models useful. Review of Artificial Societies and Social Simulation, 22nd April 2020. https://rofasss.org/2020/04/22/sound-behavioural-theories/


Get out of your silos and work together!

By Peer-Olaf Siebers and Sudhir Venkatesan

(A contribution to the: JASSS-Covid19-Thread)

The JASSS position paper ‘Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action’ (Squazzoni et al 2020) calls on the scientific community to improve the transparency, access, and rigour of their models. A topic that we think is equally important, and should be part of this list, is the quest for more “interdisciplinarity”: scientific communities working together to tackle the difficult job of understanding the complex situation we are currently in, so as to be able to give advice.

The modelling/simulation community in the UK (and more broadly) tends to work in silos. The two big communities that we have been exposed to are the epidemiological modelling community and the social simulation community. They do not usually collaborate with each other, despite working on very similar problems and using similar methods (e.g. agent-based modelling). They publish in different journals, use different software, attend different conferences, and sometimes even use different terminology to refer to the same concepts.

The UK pandemic response strategy (Gov.UK 2020) is guided by advice from the Scientific Advisory Group for Emergencies (SAGE), which in turn comprises three independent expert groups: SPI-M (epidemic modellers), SPI-B (experts in behaviour change from psychology, anthropology and history), and NERVTAG (clinicians, epidemiologists, virologists and other experts). Of these, modelling from SPI-M member institutions has played an important role in informing the UK government’s response to the ongoing pandemic (e.g. Ferguson et al 2020). Current members of SPI-M belong to what could be considered the ‘epidemic modelling community’. Their models tend to be heavily data-dependent, which is justifiable given that most of their modelling focuses on viral transmission parameters. However, this emphasis on empirical data can sometimes lead them to leave behaviour change unmodelled, or to model it in a highly stylised fashion, although more examples of epidemic-behaviour models have appeared in the recent epidemiological literature (e.g. Verelst et al 2016; Durham et al 2012; van Boven et al 2008; Venkatesan et al 2019). Yet, among the modelling work informing the current response to the ongoing pandemic, computational models of behaviour change are conspicuously missing. This, from what we have seen, is where the ‘social simulation’ community can contribute its expertise and modelling methodologies in a very valuable way. A good resource for epidemiologists wanting to discover the wide spectrum of modelling ideas is the Social Simulation Conference proceedings (e.g. SSC2019 2019). Unfortunately, however, the public health community, including policymakers, is either unaware of these modelling ideas or unsure how they are relevant.

As pointed out in a recent article, one important concern with how behaviour change has been modelled in the SPI-M COVID-19 models is the assumption that changes in contact rates resulting from a lockdown in the UK and the USA will mimic those obtained from surveys performed in China, which is unlikely to be valid given the large political and cultural differences between these societies (Adam 2020). For the immediate COVID-19 response models, perhaps requiring cross-disciplinary validation of all models that feed into policy would be a valuable step towards more credible models.

Effective collaboration between academic communities relies on there being a degree of familiarity with, and trust in, each other’s work, and much of this will need to be built up during inter-pandemic periods (i.e. “peacetime”). In the long term, publishing and presenting in each other’s journals and conferences (i.e. giving other academic communities the opportunity to peer-review a piece of modelling work) could help foster a more collaborative environment, ensuring that we are in a much better position to leverage all available expertise during a future emergency. We should aim to take the best from across modelling communities and work together on hybrid modelling solutions that provide insight by delivering statistics as well as narratives (Moss 2020). Working in silos is both unhelpful and inefficient.


Adam D (2020) Special report: The simulations driving the world’s response to COVID-19. How epidemiologists rushed to model the coronavirus pandemic. Nature – News Feature. https://www.nature.com/articles/d41586-020-01003-6 [last accessed 07/04/2020]

Durham DP, Casman EA (2012) Incorporating individual health-protective decisions into disease transmission models: A mathematical framework. Journal of The Royal Society Interface. 9(68), 562-570

Ferguson N, Laydon D, Nedjati Gilani G, Imai N, Ainslie K, Baguelin M, Bhatia S, Boonyasiri A, Cucunuba Perez Zu, Cuomo-Dannenburg G, Dighe A (2020) Report 9: Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf [last accessed 07/04/2020]

Gov.UK (2020) Scientific Advisory Group for Emergencies (SAGE): Coronavirus response. https://www.gov.uk/government/groups/scientific-advisory-group-for-emergencies-sage-coronavirus-covid-19-response [last accessed 07/04/2020]

Moss S (2020) “SIMSOC Discussion: How can disease models be made useful?”, posted by Scott Moss, 22 March 2020, 10:26 [last accessed 07/04/2020]

Squazzoni F, Polhill JG, Edmonds B, Ahrweiler P, Antosz P, Scholz G, Borit M, Verhagen H, Giardini F, Gilbert N (2020) Computational models that matter during a global pandemic outbreak: A call to action, Journal of Artificial Societies and Social Simulation, 23 (2) 10

SSC2019 (2019) Social simulation conference programme 2019. https://ssc2019.uni-mainz.de/files/2019/09/ssc19_final.pdf [last accessed 07/04/2020]

van Boven M, Klinkenberg D, Pen I, Weissing FJ, Heesterbeek H (2008) Self-interest versus group-interest in antiviral control. PLoS One. 3(2)

Venkatesan S, Nguyen-Van-Tam JS, Siebers PO (2019) A novel framework for evaluating the impact of individual decision-making on public health outcomes and its potential application to study antiviral treatment collection during an influenza pandemic. PLoS One. 14(10)

Verelst F, Willem L, Beutels P (2016) Behavioural change models for infectious disease transmission: A systematic review (2010–2015). Journal of The Royal Society Interface. 13(125)

Siebers, P-O. and Venkatesan, S. (2020) Get out of your silos and work together. Review of Artificial Societies and Social Simulation, 8th April 2020. https://rofasss.org/2020/04/08/get-out-of-your-silos


Predicting Social Systems – a Challenge

By Bruce Edmonds, Gary Polhill and David Hales

(Part of the Prediction-Thread)

There is a lot of pressure on social scientists to predict. Not only is an ability to predict implicit in all requests to assess or optimise policy options before they are tried, but prediction is also the “gold standard” of science. However, there is a debate among modellers of complex social systems about whether this is possible to any meaningful extent. In this context, the aim of this paper is to issue the following challenge:

Are there any documented examples of models that predict useful aspects of complex social systems?

To do this the paper will:

  1. define prediction in a way that corresponds to what a wider audience might expect of it
  2. give some illustrative examples of prediction and non-prediction
  3. request examples where the successful prediction of social systems is claimed
  4. and outline the aspects on which these examples will be analysed

About Prediction

We start by defining prediction, taken from (Edmonds et al. 2019). This is a pragmatic definition designed to encapsulate common sense usage – what a wider public (e.g. policy makers or grant givers) might reasonably expect from “a prediction”.

By ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model.

Let us clarify the language in this.

  • It has to be reliable. That is, one can rely upon the prediction at the time one makes it – a model that predicts erratically and only occasionally succeeds is no help, since one does not know whether to believe any particular prediction. This usually means that (a) it has made successful predictions for several independent cases and (b) the conditions under which it works are (roughly) known.
  • What is predicted has to be unknown at the time of prediction. That is, the prediction has to be made before it is verified. Predicting known data (as when a model is checked on out-of-sample data) is not sufficient [1]. Nor is the practice of looking for phenomena that are consistent with the results of a model after they have been generated (since this ignores all the phenomena that are not consistent with the model).
  • What is being predicted is well defined. That is, how to use the model to make a prediction about observed data is clear. An abstract model that is merely suggestive – one that appears to predict phenomena, but in a vague and undefined manner, where one has to invent the mapping between model and data to make it work – may be useful as a way of thinking about phenomena, but this is different from empirical prediction.
  • Which aspects of the data are being predicted is left open. As Watts (2014) points out, prediction is not restricted to point numerical predictions of some measurable value but could concern a wider pattern. Examples include: a probabilistic prediction, a range of values, a negative prediction (this will not happen), or a second-order characteristic (such as the shape of a distribution or a correlation between variables). What is important is (a) that this is a useful characteristic to predict and (b) that it can be checked by an independent actor. Thus, for example, when predicting a value, the accuracy required of that prediction depends on its use.
  • The prediction has to use the model in an essential manner. Claiming to predict something obviously inevitable which does not use the model is insufficient – the model has to distinguish which of the possible outcomes is being predicted at the time.
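As an illustration of how an independent actor might check one of these wider kinds of prediction, the following invented sketch (not tied to any model discussed here) tests a claimed 90% prediction interval by counting how often realised values actually fall inside it.

```python
import random

def coverage(intervals, outcomes):
    """Fraction of realised outcomes falling inside their predicted interval."""
    hits = sum(lo <= x <= hi for (lo, hi), x in zip(intervals, outcomes))
    return hits / len(outcomes)

# Synthetic check: a forecaster claims 90% intervals for 1000 future values.
# Here the outcomes really are standard normal, and (-1.645, 1.645) is the
# true 90% band for N(0, 1), so the claim should hold up.
random.seed(0)
outcomes = [random.gauss(0, 1) for _ in range(1000)]
intervals = [(-1.645, 1.645)] * len(outcomes)

print(round(coverage(intervals, outcomes), 2))  # close to 0.9
```

A badly calibrated forecaster would show coverage well below (overconfident) or above (vacuously wide) the claimed level, which is exactly the kind of check an independent party can run.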

Thus, prediction is different from other kinds of scientific/empirical uses, such as description and explanation (Edmonds et al. 2019). Some modellers use “prediction” to mean any output from a model, regardless of its relationship to any observation of what is being modelled [2]. Others use “prediction” for any empirical fitting of data, regardless of whether that data is known beforehand. However, here we wish to be clearer and avoid any “post-truth” softening of the meaning of the word, for two reasons: (a) distinguishing different kinds of model use is crucial in matters of model checking and validation, and (b) these “softer” kinds of empirical purpose will simply confuse the wider public when we talk to them about “prediction”. One suspects that modellers have accepted these other meanings because it then allows them to claim that they can predict (Edmonds 2017).

Some Examples

Nate Silver and his team aim to predict future social phenomena, such as the results of elections and the outcomes of sports competitions. He correctly predicted the winner in all 50 states in the 2012 US presidential election before it happened. This is a data-hungry approach, which involves the long-term development of simulations that carefully establish what can be inferred from the available data, with repeated trial and error. The forecasts are probabilistic and repeated many times. As well as making predictions, his unit tries to establish the level of uncertainty in those predictions – being honest about the probability of those predictions coming about, given the likely levels of error and bias in the data. These models are not agent-based but mostly statistical in nature, so it is debatable whether they treat elections as complex systems – they certainly do not use any theory from complexity science. His book (Silver 2012) describes his approach. Post hoc analysis of predictions – explaining why they worked or not – is kept distinct from the predictive models themselves; this analysis may inform changes to the predictive model but is not then incorporated into it. The analysis is thus kept independent of the predictive model, so it can serve as an effective check.

Many models in economics and ecology claim to “predict” when, on inspection, this only means there is a fit to some empirical data. For example, Meese and Rogoff (1983) looked at 40 econometric models whose authors claimed to be predicting a time series. However, 37 of the 40 models failed completely when tested on newly available data from the same time series they claimed to predict. Clearly, although presented as predictive models, they could not predict unknown data. Although we do not know for sure, presumably these models had been (explicitly or implicitly) fitted to the out-of-sample data, because the out-of-sample data was already known to the modeller: if a model failed to fit the out-of-sample data when tested, it was adjusted until it did, or alternatively, only those models that fitted the out-of-sample data were published.
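The suspected mechanism can be reproduced in a few lines (an invented illustration, not the models Meese and Rogoff examined): if one keeps only whichever model happens to fit the already-known “out-of-sample” data best, that fit is an artefact of selection and vanishes on genuinely unknown data.

```python
import random

random.seed(42)
known = [random.gauss(0, 1) for _ in range(20)]  # "out-of-sample", but already known

def err(pred, data):
    """Mean squared error between a model's output and a data series."""
    return sum((p - d) ** 2 for p, d in zip(pred, data)) / len(data)

# 1000 candidate "models": pure noise, with no predictive structure at all.
models = [[random.gauss(0, 1) for _ in range(20)] for _ in range(1000)]

# Keeping only the model that happens to fit the known data best...
best = min(models, key=lambda m: err(m, known))
fit_known = err(best, known)

# ...says little about genuinely unknown data, where the error reverts to the
# noise level (MSE around 2 for two independent standard normal series).
draws = [err(best, [random.gauss(0, 1) for _ in range(20)]) for _ in range(200)]
fit_unknown = sum(draws) / len(draws)

print(f"error on known 'out-of-sample' data: {fit_known:.2f}")
print(f"error on genuinely unknown data:    {fit_unknown:.2f}")
```

The selected noise model looks far better on the known series than on fresh data, which is exactly why out-of-sample data that the modeller has already seen cannot substantiate a claim to predict.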

The Challenge

The challenge is envisioned as happening like this.

  1. We publicise this paper, requesting that people send us examples of prediction or near-prediction of complex social systems, with pointers to the appropriate documentation.
  2. We collect these and analyse them according to the characteristics and questions described below.
  3. We will post some interim results in January 2020 [3], in order to prompt more examples and to stimulate discussion. The final deadline for examples is the end of March 2020.
  4. We will publish the list of all the examples sent to us on the web, and present our summary and conclusions at Social Simulation 2020 in Milan and have a discussion there about the nature and prospects for the prediction of complex social systems. Anyone who contributed an example will be invited to be a co-author if they wish to be so-named.

How suggestions will be judged

For each suggestion, a number of answers will be sought – namely to the following questions:

  • What are the papers or documents that describe the model?
  • Is there an explicit claim that the model can predict (as opposed to a claim that it might do so in the future)?
  • What kind of characteristics are being predicted (number, probabilistic, range…)?
  • Is there evidence of a prediction being made before the prediction was verified?
  • Is there evidence of the model being used for a series of independent predictions?
  • Were any of the predictions verified by a team that is independent of the one that made the prediction?
  • Is there evidence of the same team or similar models making failed predictions?
  • To what extent did the model need extensive calibration/adjustment before the prediction?
  • What role does theory play (if any) in the model?
  • Are the conditions under which predictive ability is claimed described?

Of course, a negative answer to any of the above questions about a particular model does not mean that the model cannot predict. What we are assessing is the evidence that a model can predict something meaningful about complex social systems. (Silver 2012) describes the method by which his team attempts prediction, but this method might differ from that described in most theory-based academic papers.

Possible Outcomes

This exercise might shed some light of some interesting questions, such as:

  • What kind of prediction of complex social systems has been attempted?
  • Are there any examples where the reliable prediction of complex social systems has been achieved?
  • Are there certain kinds of social phenomena which seem to be more amenable to prediction than others?
  • Does aiming to predict with a model entail any differences in method compared to projects with other aims?
  • Are there any commonalities among the projects that achieve reliable prediction?
  • Is there anything we could (collectively) do that would encourage or document good prediction?

It might well be that whether prediction is achievable might depend on exactly what is meant by the word.


This paper resulted from a “lively discussion” after Gary’s (Polhill et al. 2019) talk about prediction at the Social Simulation conference in Mainz. Many thanks to all those who joined in this. Of course, prior to this we have had many discussions about prediction. These have included Gary’s previous attempt at a prediction competition (Polhill 2018) and Scott Moss’s arguments about prediction in economics (which has many parallels with the debate here).


[1] This is sufficient for other empirical purposes, such as explanation (Edmonds et al. 2019)

[2] Confusingly, they sometimes use the word “forecasting” for what we mean by “predict” here.

[3] Assuming we have any submitted examples to talk about


Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidson, P. & Verhargen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1 (see also http://cfpm.org/discussionpapers/236)

Edmonds, B. (2017) The post-truth drift in social simulation. Social Simulation Conference (SSC2017), Dublin, Ireland. (http://cfpm.org/discussionpapers/195)

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. & Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Grimm V, Revilla E, Berger U, Jeltsch F, Mooij WM, Railsback SF, Thulke H-H, Weiner J, Wiegand T, DeAngelis DL (2005) Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science 310: 987-991.

Meese, R.A. & Rogoff, K. (1983) Empirical Exchange Rate models of the Seventies – do they fit out of sample? Journal of International Economics, 14:3-24.

Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/

Polhill, G., Hare, H., Anzola, D., Bauermann, T., French, T., Post, H. and Salt, D. (2019) Using ABMs for prediction: Two thought experiments and a workshop. Social Simulation 2019, Mainz.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Thorngate, W. & Edmonds, B. (2013) Measuring simulation-observation fit: An introduction to ordinal pattern analysis. Journal of Artificial Societies and Social Simulation, 16(2):14. http://jasss.soc.surrey.ac.uk/16/2/4.html

Watts, D. J. (2014). Common Sense and Sociological Explanations. American Journal of Sociology, 120(2), 313-351.

Edmonds, B., Polhill, G and Hales, D. (2019) Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge


Some Philosophical Viewpoints on Social Simulation

By Bruce Edmonds

How one thinks about knowledge can have a significant impact on how one develops models as well as how one might judge a good model.

  • Pragmatism. Under this view a simulation is a tool for a particular purpose. Different purposes will imply different tests for a good model. What is useful for one purpose might well not be good for another – different kinds of models and modelling processes might be good for each purpose. A simulation whose purpose is to explore the theoretical implications of some assumptions might well be very different from one aiming to explain some observed data. An example of this approach is (Edmonds & al. 2019).
  • Social Constructivism. Here, knowledge about social phenomena (including simulation models) is collectively constructed. There is no other kind of knowledge than this. Each simulation is a way of thinking about social reality and plays a part in constructing it. What is a suitable construction may vary over time and between cultures etc. What a group of people construct is not necessarily limited to simulations that are related to empirical data. (Ahrweiler & Gilbert 2005) seem to take this view, but it is more explicit in some of the participatory modelling work, where the aim is to construct a simulation that is acceptable to a group of people, e.g. (Etienne 2014).
  • Relativism. There are no bad models, only different ways of mediating between your thought and reality (Morgan 1999). If you work hard on developing your model, you do not get a better model, only a different one. This might be a consequence of holding to an Epistemological Constructivist position.
  • Descriptive Realism. A simulation is a picture of some aspect of reality (albeit at a much lower ‘resolution’ and imperfectly). If one obtains a faithful representation of some aspect of reality as a model, one can use it for many different purposes. This could imply very complicated models (depending on what one observes and decides is relevant), which might themselves be difficult to understand. I suspect that many people have this in mind as they develop models, but few explicitly take this approach. Maybe an example is (Fieldhouse et al. 2016).
  • Classic Positivism. Here, the empirical fit and the analytic understanding of the simulation are all that matter, nothing else. Models should be tested against data and discarded if inadequate (or they compete and one is currently ahead empirically). They should also be simple enough that they can be thoroughly understood. There is no obligation to be descriptively realistic. Many physics approaches to social phenomena follow this path (e.g. Helbing 2010, Galam 2012).

Of course, few authors make their philosophical position explicit – usually one has to infer it from their text and modelling style.


Ahrweiler, P. and Gilbert, N. (2005). Caffè Nero: the Evaluation of Social Simulation. Journal of Artificial Societies and Social Simulation 8(4):14. http://jasss.soc.surrey.ac.uk/8/4/14.html

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. and Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Etienne, M. (ed.) (2014) Companion Modelling: A Participatory Approach to Support Sustainable Development. Springer

Fieldhouse, E., Lessard-Phillips, L. and Edmonds, B. (2016) Cascade or echo chamber? A complex agent-based simulation of voter turnout. Party Politics. 22(2):241-256. DOI:10.1177/1354068815605671

Galam, S. (2012) Sociophysics: A Physicist’s modeling of psycho-political phenomena. Springer.

Helbing, D. (2010). Quantitative sociodynamics: stochastic methods and models of social interaction processes. Springer.

Morgan, M. S., Morrison, M., & Skinner, Q. (Eds.). (1999). Models as mediators: Perspectives on natural and social science (Vol. 52). Cambridge University Press.

Edmonds, B. (2019) Some Philosophical Viewpoints on Social Simulation. Review of Artificial Societies and Social Simulation, 2nd July 2019. https://rofasss.org/2019/07/02/phil-view/


Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM

By Sebastian Achter, Melania Borit, Edmund Chattoe-Brown, Christiane Palaretti and Peer-Olaf Siebers

The initiative presented below arose from a Lorentz Center workshop on Integrating Qualitative and Quantitative Evidence using Social Simulation (8-12 April 2019, Leiden, the Netherlands). At the beginning of this workshop, the attendees divided themselves into teams aiming to work on specific challenges within the broad domain of the workshop topic. Our team took up the challenge of looking at “Rigour, Transparency, and Reuse”. The aim that emerged from our initial discussions was to create a framework for augmenting rigour and transparency (RAT) of data use in ABM when designing, analysing, and publishing such models.

One element of the framework that the group worked on was a roadmap of the modelling process in ABM, with particular reference to the use of different kinds of data. This roadmap was used to generate the second element of the framework: a protocol consisting of a set of questions which, if answered by the modeller, would ensure that the published model was as rigorous and transparent, in terms of data use, as it needs to be for the reader to understand and reproduce it.

The group (which had diverse modelling approaches and spanned a number of disciplines) recognised the challenges of this approach and much of the week was spent examining cases and defining terms so that the approach did not assume one particular kind of theory, one particular aim of modelling, and so on. To this end, we intend that the framework should be thoroughly tested against real research to ensure its general applicability and ease of use.

The team was also very keen not to “reinvent the wheel”, but to try to develop the RAT approach (in connection with data use) to augment and “join up” existing protocols or documentation standards for specific parts of the modelling process. For example, the ODD protocol (Grimm et al. 2010) and its variants are generally accepted as the established way of documenting ABM but do not request rigorous documentation/justification of the data used for the modelling process.

The plan to move forward with the development of the framework is organised around three journal articles and associated dissemination activities:

  • A literature review of best (data use) documentation and practice in other disciplines and research methods (e.g. PRISMA – Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
  • A literature review of available documentation tools in ABM (e.g. ODD and its variants, DOE, the “Info” pane of NetLogo, EABSS)
  • An initial statement of the goals of RAT, the roadmap, the protocol and the process of testing these resources for usability and effectiveness
  • A presentation, poster, and round table at SSC 2019 (Mainz)

We would appreciate suggestions for items that should be included in the literature reviews, “beta testers” and critical readers for the roadmap and protocol (from as many disciplines and modelling approaches as possible), reactions (whether positive or negative) to the initiative itself (including joining it!) and participation in the various activities we plan at Mainz. If you are interested in any of these roles, please email Melania Borit (melania.borit@uit.no).


Grimm, V., Berger, U., DeAngelis, D. L., Polhill, J. G., Giske, J. and Railsback, S. F. (2010) ‘The ODD Protocol: A Review and First Update’, Ecological Modelling, 221(23):2760–2768. doi:10.1016/j.ecolmodel.2010.08.019

Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. and Siebers, P.-O.(2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/


Vision for a more rigorous “replication first” modelling journal

By David Hales

A proposal for yet another journal? My first reaction to any such suggestion is to argue that we already have far too many journals. However, hear me out.

My vision is for a modelling journal that is far more rigorous than what we currently have. It would be aimed at work in which a significant aspect of the result is derived, in an empirical way, from the output of a complex-systems-type computer model.

I propose that the journal would incorporate, as part of the reviewing process, at least one replication of the model by an independent reviewer. Hence models would be verified as independently replicated before being published.

In addition the published article would include an appendix detailing the issues raised during the replication process.

Carrying out such an exercise would almost certainly lead to clarifications of the original article such that it would be easier to replicate by others and give more confidence in the results. Both readers and authors would gain significantly from this.

I would be much more willing to take modelling articles seriously if I knew they had already been independently replicated.

Here is a question that immediately springs to mind: replicating a model is a time consuming and costly business requiring significant expertise. Why would a reviewer do this?

One possible solution would be to provide an incentive in the following form. Final articles published in the journal would include the replicators as co-authors of the paper – specifically credited with the independent replication work that they write up in the appendix.

This would mean that good, clear and interesting initial articles would be desirable to replicate since the reviewer / replicator would obtain citations.

This could be a good task for an able graduate student allowing them to gain experience, contacts and citations.

Why would people submit good work to such a journal? This is not as easy to answer. It would almost certainly mean more work from their perspective and a time delay (since replication would almost certainly take more time than traditional review). However there is the benefit of actually getting a replication of their model and producing a final article that others would be able to engage with more easily.

Also I think it would be necessary, given the above aspects, to put quite a high bar on what is accepted for review / replication in the first place. Articles reviewed would have to present significant and new results in areas of fairly wide interest. Hence incremental or highly specific models would be ruled out. Also, articles that did not contain enough detail to even attempt a replication would be rejected on that basis. Hence one can envisage a two-stage review process where the editors decide whether the submitted paper is “right” for a full replication review before soliciting replications.

My vision is of a low output, high quality, high initial rejection journal. Perhaps publishing 3 articles every 6 months. Ideally this would support a reputation for high quality over time.

Hales, D. (2018) Vision for a more rigorous “replication first” modelling journal. Review of Artificial Societies and Social Simulation, 5th November 2018. https://rofasss.org/2018/11/05/dh/


Escaping the modelling crisis

By Emile Chappin

Let me explain something I call the ‘modelling crisis’. It is something that many modellers in one way or another encounter. By being aware of it we may resolve such a crisis, avoid frustration, and, hopefully, save the world from some bad modelling.

Views on modelling

I first present two views on modelling. Bear with me!

[View 1: Model = world] The first view is that models capture things in the real world pretty well and some models are pretty much representative. And of course this is true. You can add many things to the model, and you may well have done so. But if you think along this line, you start seeing the model as if it is the world. At some point you may become rather optimistic about modelling. Well, I really mean to say, you become naive: the model is fabulous. The model can help anyone with any problem only somewhat related to the original idea behind this model. You don’t waste time worrying about the details and sell the model to everyone listening, and you’re quite convinced in the way you do this. You may come to the belief that the model is the truth.

[View 2: Model ≠ world] The second view is that the model can never represent the world adequately enough to really predict what is going on. And of course this is true. But if you think along this line, you can get pretty frustrated: the model is never good enough, because factor A is not in there, mechanism B is biased, etc. At one point you may become quite pessimistic about ‘the model’: will it help anyone anytime soon? You may come to the belief that the model is nonsense (and that modelling itself is nonsense).

As a modeller, you may encounter these views in your modelling journey: in how your model is perceived, in how your model is compared to other models and in the questions you’re asked about your model. And it may be the case that you get stuck in either one of these views yourself. You may not even be aware of it, but still behave accordingly.

Possible consequences

Let’s conceive the idea of having a business doing modelling: we are ambitious and successful! What might happen over time with our business and with our clients?

  • Your clients love your business – Clients can ask us any question and they will get a very precise answer back! Anytime we give a good result, a result that comes true in some sense, we are praised, and our reputation grows. Anytime we give a bad result, something that turns out quite different from what we’d expected, we can blame the particular circumstances which could not have been foreseen or argue that this result is basically out of the original scope. Our modesty makes our reputation grow! And it makes us proud!
  • Assets need protection – Over time, our model/business reputation becomes more and more important. You should ask us for any modelling job because we’ve modelled (this) for decades. Any question goes into our fabulous model that can Answer Any Question In A Minute (AAQIAM). Our models became patchworks because of questions that were not so easy to fit in. But obviously, as a whole, the model is great. More than great: it is the best! The models are our key assets: they need to be protected. In a board meeting we decide that we should not show the insides of our models anymore. We should keep them secret.
  • Modelling schools – Habits emerge of how our models are used, what kind of analysis we do, and which we don’t. Core assumptions that we always make with our model are accepted and forgotten. We get used to those assumptions, we won’t change them anyway and probably we can’t. There is no real need to think about the consequences of those assumptions anyway. We stick to the basics, represent the results in a way that the client can use them, and mention in footnotes how much detail is underneath, and that some caution is warranted in interpretation of the results. Other modelling schools may also emerge, but they really can’t deliver the precision/breadth of what we have been doing for decades, so they are not relevant, not really, anyway.
  • Distrusting all models – Another kind of people, typically not your clients, start distrusting the modelling business completely. They get upset in discussions: why worry about discussing the model details when there is always something missing anyway? And it is impossible to quantify anything, really. They decide that it is better to ignore model geeks completely and just follow their own reasoning. It doesn’t matter that this reasoning can’t be backed up with facts (such as a modelled reality). They don’t believe that it could be done anyway. So the problem is not their reasoning, it is the inability of quantitative science.

Here is the crisis

At this point, people stop debating the crucial elements in our models and the ambition for model innovation goes out of the window. I would say, we end up in a modelling crisis. At some point, decisions have to be made in the real world, and they can either be inspired by good modelling, by bad modelling, or not by modelling at all.

The way out of the modelling crisis

How can such a modelling crisis be resolved? First, we need to accept that the model ≠ world, so we don’t necessarily need to predict. We also need to accept that modelling can certainly be useful, for example when it helps to find clear and explicit reasoning/underpinning of an argument.

  • We should focus more on the problem that we really want to address, and for that problem, argue how modelling can actually contribute to a solution for that problem. This should result in better modelling questions, because modelling is a means, not an end. We should stop trying to outsource the thinking to a model.
  • Following from this point, we should be very explicit about the modelling purpose: in what way does the modelling contribute to solving the problem identified earlier? We have to be aware that different kinds of purposes lead to different styles of reasoning, and, consequently, to different strengths and weaknesses in the modelling that we do. Consider the differences between prediction, explanation, theoretical exposition, description and illustration as types of modelling purpose, see (Edmonds 2017), (more types are possible).
  • Following this point, we should accept the importance of creativity and process in modelling. Science is about reasoned, reproducible work. But, paradoxically, good science does not come from a linear, step-by-step approach. Accepting this, modelling can help both in the creative process, by exploring possible ideas and explicating an intuition, and in the justification and underpinning of a very particular reasoning. Next, it is important to avoid mixing these perspectives up. The modelling process is as relevant as the model outcome. In the end, the reasoning should stand alone and be strong (also without the model). But you may have needed the model to find it.
  • We should adhere to better modelling practices and develop the tooling to accommodate them. For ABM, many successful developments are ongoing: we should be explicit and transparent about assumptions we are making (e.g. the ODD protocol, Polhill et al. 2008). We should develop requirements and procedures for modelling studies, with respect to how the analysis is performed, also if clients don’t ask for it (validity, robustness of findings, sensitivity of outcomes, analysis of uncertainties). For some sectors, such requirements have been developed. The discussion around practices and validation is prominent in ABMs, where some ‘issues’ may be considered obvious (see for instance Heath, Hill, and Ciarallo 2009, the effort through CoMSES), but they should be asked for any type of model. In fact, we should share, debate on, and work with all types of models that are already out there (again, such as the great efforts through CoMSES), and consider forms of multi-modelling to save time and effort and benefit from strengths of different model formalisms.
  • We should start looking for good examples: get inspired and share them. Personally I like Basic Traffic from the NetLogo library: it does not predict where traffic jams will be, but it clearly shows the worth of slowing down earlier. Another may be the Limits to Growth model, irrespective of its predictive power.
  • We should start doing it better ourselves, so that we show others that it can be done!


Heath, B., Hill, R. and Ciarallo, F. (2009). A Survey of Agent-Based Modeling Practices (January 1998 to July 2008). Journal of Artificial Societies and Social Simulation 12(4):9. http://jasss.soc.surrey.ac.uk/12/4/9.html

Polhill, J.G., Parker, D., Brown, D. and Grimm, V. (2008). Using the ODD Protocol for Describing Three Agent-Based Social Simulation Models of Land-Use Change. Journal of Artificial Societies and Social Simulation 11(2):3.

Edmonds, B. (2017) Five modelling purposes, Centre for Policy Modelling Discussion Paper CPM-17-238, http://cfpm.org/discussionpapers/192/

Chappin, E.J.L. (2018) Escaping the modelling crisis. Review of Artificial Societies and Social Simulation, 12th October 2018. https://rofasss.org/2018/10/12/ec/


A bad assumption: a simpler model is more general

By Bruce Edmonds

If one adds some extra detail to a general model it can become more specific – that is, it then only applies to those cases where that particular detail holds. However, the reverse is not true: simplifying a model will not make it more general – it is just that you can imagine it being more general.

To see why this is, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but will now be true at only one point (and only approximately right in a small region around that point) – it is much less general than the original, because it is true for far fewer cases.

This is not very surprising – a claim that a model has general validity is a very strong claim – it is unlikely to be achieved by arm-chair reflection or by merely leaving out most of the observed processes.
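The linear-equation argument above can be sketched numerically. In this toy illustration (the specific numbers are my own assumptions, not from the text), the full linear model and its simplified constant version are scored against the same data using the same criterion for being “approximately right”:

```python
import numpy as np

# The accurate relationship, sampled over a range of cases.
x = np.linspace(0, 10, 101)
truth = 2 * x + 1

# General model: keeps the variable, so it fits everywhere.
linear_model = 2 * x + 1
# "Simplified" model: variable eliminated, only a constant remains
# (chosen to be exactly right at x = 0).
constant_model = np.full_like(x, 1.0)

# Same approximation criterion applied to both models.
tolerance = 0.5
linear_ok = np.abs(linear_model - truth) < tolerance
constant_ok = np.abs(constant_model - truth) < tolerance

print(int(linear_ok.sum()))    # 101: true at every sampled point
print(int(constant_ok.sum()))  # 3: true only in a small region near x = 0
```

Judged by the same standard, the simplified model holds at far fewer points: it is less general, not more.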

Only under some special conditions does simplification result in greater generality:

  • When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when there is some averaging process over a lot of random deviations)
  • When what is simplified away happens to be constant for all the situations considered (e.g. gravity is always 9.8m/s^2 downwards)
  • When you loosen your criteria for being approximately right hugely as you simplify (e.g. move from a requirement that results match some concrete data to using the model as a vague analogy for what is happening)
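The first special condition in the list above (detail that is essentially irrelevant because it averages out) can be illustrated with a small sketch; the population size, mean, and noise level are illustrative assumptions:

```python
import random

random.seed(42)  # for reproducibility

# A population whose individual values deviate randomly around a mean.
mu = 5.0
agents = [mu + random.gauss(0, 1) for _ in range(10_000)]

# Detailed model: keeps every individual deviation.
detailed_mean = sum(agents) / len(agents)

# Simplified model: the deviations are averaged away; only the mean remains.
simplified_mean = mu

# At the aggregate level the simplification is harmless...
print(abs(detailed_mean - simplified_mean) < 0.05)  # True

# ...but the simplified model says little about any single agent:
close_individuals = sum(abs(a - mu) < 0.1 for a in agents) / len(agents)
print(close_individuals < 0.2)  # True: only a small fraction are that close
```

The simplification survives here only because the outcome of interest is the aggregate; for questions about individuals, the same simplification fails.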

In other cases, where you compare like with like (i.e. you don’t move the goalposts, as in the third case above), simplification only works if you happen to know what can be safely simplified away.

Why people think that simplification might lead to generality is somewhat of a mystery. Maybe they assume that the universe ultimately has to obey simple laws, so that simplification is the right direction (but of course, even if this were true, we would not know which way to safely simplify). Maybe they are really thinking about the other direction, slowly becoming more accurate by making the model mirror the target more closely. Maybe this is just a justification for laziness, an excuse for avoiding messy complicated models. Maybe they just associate simple models with physics. Maybe they just hope their simple model is more general.


Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822.

Edmonds, B. (2007) Simplicity is Not Truth-Indicative. In Gershenson, C.et al. (2007) Philosophy and Complexity. World Scientific, 65-80.

Edmonds, B. (2017) Different Modelling Purposes. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 39-58.

Edmonds, B. and Moss, S. (2005) From KISS to KIDS – an ‘anti-simplistic’ modelling approach. In P. Davidsson et al. (Eds.): Multi Agent Based Simulation 2004. Springer, Lecture Notes in Artificial Intelligence, 3415:130–144.

Edmonds, B. (2018) A bad assumption: a simpler model is more general. Review of Artificial Societies and Social Simulation, 28th August 2018. https://rofasss.org/2018/08/28/be-2/