
Where Now For Experiments In Agent-Based Modelling? Report of a Round Table at SSC2021, held on 22 September 2021


By Dino Carpentras1, Edmund Chattoe-Brown2*, Bruce Edmonds3, Cesar García-Diaz4, Christian Kammler5, Anna Pagani6 and Nanda Wijermans7

*Corresponding author, 1Centre for Social Issues Research, University of Limerick, 2School of Media, Communication and Sociology, University of Leicester, 3Centre for Policy Modelling, Manchester Metropolitan University, 4Department of Business Administration, Pontificia Universidad Javeriana, 5Department of Computing Science, Umeå University, 6Laboratory on Human-Environment Relations in Urban Systems (HERUS), École Polytechnique Fédérale de Lausanne (EPFL), 7Stockholm Resilience Centre, Stockholm University.

Introduction

This round table was convened to advance and improve the use of experimental methods in Agent-Based Modelling, in the hope that both existing and potential users of the method would be able to identify steps towards this aim[i]. The session began with a presentation by Bruce Edmonds (http://cfpm.org/slides/experiments%20and%20ABM.pptx) whose main argument was that the traditional idea of experimentation (controlling extensively for the environment and manipulating variables) was too simplistic to add much to the understanding of the sort of complex systems modelled by ABMs and that we should therefore aim to enhance experiments (for example using richer experimental settings, richer measures of those settings and richer data – like discussions between participants as well as their behaviour). What follows is a summary of the main ideas discussed organised into themed sections.

What Experiments Are

Defining the field of experiments proved to be challenging on two counts. The first was that there are a number of labels for potentially relevant approaches (experiments themselves – for example, Boero et al. 2010, gaming – for example, Tykhonov et al. 2008, serious games – for example, Taillandier et al. 2019, companion/participatory modelling – for example, Ramanath and Gilbert 2004 and web-based gaming – for example, Basole et al. 2013) whose actual content overlap is unclear. Is it the case that a gaming approach is generally more in line with the argument proposed by Edmonds? How can we systematically distinguish the experimental content of a serious game approach from a gaming approach? This seems to be a problem in immature fields where the labels are invented first (often on the basis of a few rather divergent instances) and the methodology has to grow into them. It would be ludicrous if we couldn’t be sure whether a piece of research was survey based or interview based (and this would radically devalue the associated labels if it were so.)

The second challenge, which is also found more generally in Agent-Based Modelling, is that the same labels are used differently by different researchers. It is not productive to argue about which uses are correct, but it is important that the concepts behind the different uses are clear so that a common scheme of labelling might ultimately be agreed. So, for example, experiment can be used (and different round table participants had different perspectives on the uses they expected) to mean laboratory experiments (simplified settings with human subjects – again see, for example, Boero et al. 2010), experiments with ABMs (formal experimentation with a model that doesn’t necessarily have any empirical content – for example, Doran 1998) and natural experiments (choice of cases in the real world to, for example, test a theory – see Dinesen 2013).

One approach that may help with this diversity is to start developing possible dimensions of experimentation. One might be degree of control (all the way from very stripped down behavioural laboratory experiments to natural situations where the only control is to select the cases). Another might be data diversity: From pure analysis of ABMs (which need not involve data at all), through laboratory experiments that record only behaviour to ethnographic collection and analysis of diverse data in rich experiments (like companion modelling exercises.) But it is important for progress that the field develops robust concepts that allow meaningful distinctions and does not get distracted into pointless arguments about labelling. Furthermore, we must consider the possible scientific implications of experimentation carried out at different points in the dimension space: For example, what are the relative strengths and limitations of experiments that are more or less controlled or more or less data diverse? Is there a “sweet spot” where the benefit of experiments is greatest to Agent-Based Modelling? If so, what is it and why?

The Philosophy of Experiment

A further challenge is the different beliefs (often associated with different disciplines) about the philosophical underpinnings of experiment, such as what we might mean by a cause. In an economic experiment, for example, the objective may be to confirm a universal theory of decision making through displayed behaviour only. (It is decisions described by this theory which are presumed to cause the pattern of observed behaviour.) This will probably not allow the researcher to discover that their basic theory is wrong (people are adaptive, not rational, after all) or not universal (agents have diverse strategies), or that some respondents simply didn’t understand the experiment (deviations caused by these phenomena may be labelled noise relative to the theory being tested but in fact they are not.)

By contrast qualitative sociologists believe that subjective accounts (including accounts of participation in the experiment itself) can be made reliable and that they may offer direct accounts of certain kinds of cause: If I say I did something for a certain reason then it is at least possible that I actually did (and that the reason I did it is therefore its cause). It is no more likely that agreement will be reached on these matters in the context of experiments than it has been elsewhere. But Agent-Based Modelling should keep its reputation for open mindedness by seeing what happens when qualitative data is also collected and not just rejecting that approach out of hand as something that is “not done”. There is no need for Agent-Based Modelling blindly to follow the methodology of any one existing discipline in which experiments are conducted (and these disciplines often disagree vigorously on issues like payment and deception with no evidence on either side which should also make us cautious about their self-evident correctness.)

Finally, there is a further complication in understanding experiments using analogies with the physical sciences. In understanding the evolution of a river system, for example, one can control/intervene, one can base theories on testable micro mechanisms (like percolation) and one can observe. But there is no equivalent to asking the river what it intends (whether we can do this effectively in social science or not).[ii] It is not totally clear how different kinds of data collection like these might relate to each other in the social sciences, for example, data from subjective accounts, behavioural experiments (which may show different things from what respondents claim) and, for example, brain scans (which side step the social altogether.) This relationship between different kinds of data currently seems incompletely explored and conceptualised. (There is a tendency just to look at easy cases like surveys versus interviews.)

The Challenge of Experiments as Practical Research

This is an important area where the actual and potential users of experiments participating in the round table diverged. Potential users wanted clear guidance on the resources, skills and practices involved in doing experimental work (and see similar issues in the behavioural strategy literature, for example, Reypens and Levine 2018). At the most basic level, when does a researcher need to do an experiment (rather than a survey, interviews or observation), what are the resource requirements in terms of time, facilities and money (laboratory experiments are unusual in often needing specific funding to pay respondents rather than substituting the researcher working for free), what design decisions need to be made (paying subjects, online or offline, can subjects be deceived?), how should the data be analysed (how should an ABM be validated against experimental data?) and so on.[iii] (There are also pros and cons to specific bits of potentially supporting technology like Amazon Mechanical Turk, Qualtrics and Prolific, which have not yet been documented and systematically compared for the novice with a background in Agent-Based Modelling.) There is much discussion about these matters in the traditional literatures of social sciences that do experiments (see, for example, Kagel and Roth 1995, Levine and Parkinson 1994 and Zelditch 2007) but this has not been summarised and tuned specifically for the needs of Agent-Based Modellers (or published where they are likely to see it).

However, it should not be forgotten that not all research efforts need this integration within the same project, so thinking about the problems that really need it is critical. Nonetheless, triangulation is indeed necessary within research programmes. For instance, in subfields such as strategic management and organisational design, it is uncommon to see an ABM integrated with an experiment as part of the same project (though there are exceptions, such as Vuculescu 2017). Instead, ABMs are typically used to explore “what if” scenarios, build process theories and illuminate potential empirical studies. In this approach, knowledge is accumulated through the triangulation of different methodologies in different projects (see Burton and Obel 2018). Additionally, modelling and experimental efforts are usually led by different specialists – for example, there is a Theoretical Organisational Models Society whose focus is the development of standards for theoretical organisation science.

In a relatively new and small area, all we often have is some examples of good practice (or more contentiously bad practice) of which not everyone is even aware. A preliminary step is thus to see to what extent people know of good practice and are able to agree that it is good (and perhaps why it is good).

Finally, there was a slightly separate discussion about the perspectives of experimental participants themselves. It may be that a general problem with unreal activity is that you know it is unreal (which may lead to problems with ecological validity – Bornstein 1999.) On the other hand, building on the enrichment argument put forward by Edmonds (above), there is at least anecdotal observational evidence that richer and more realistic settings may cause people to get “caught up” and perhaps participate more as they would in reality. Nonetheless, there are practical steps we can take to learn more about these phenomena by augmenting experimental designs. For example, we might conduct interviews (or even group discussions) before and after experiments. This could make the initial biases of participants explicit and allow them to self-evaluate retrospectively the extent to which they got engaged (or perhaps even over-engaged) during the game. The first such instrument could be administered before the experiment, another right after the game (and perhaps even a third a week later). In addition to practical design solutions, there are also relevant existing literatures that experimental researchers should probably draw on in this area, for example that on systemic design and the associated concept of worldviews. It is fair to say that we do not yet fully understand the issues here, but they clearly matter to the value of experimental data for Agent-Based Modelling.[iv]

Design of Experiments

Something that came across strongly in the round table discussion as argued by existing users of experimental methods was the desirability of either designing experiments directly based on a specific ABM structure (rather than trying to use a stripped down – purely behavioural – experiment) or mixing real and simulated participants in richer experimental settings. In line with the enrichment argument put forward by Edmonds, nobody seemed to be using stripped down experiments to specify, calibrate or validate ABM elements piecemeal. In the examples provided by round table participants, experiments corresponding closely to the ABM (and mixing real and simulated participants) seemed particularly valuable in tackling subjects that existing theory had not yet really nailed down or where it was clear that very little of the data needed for a particular ABM was available. But there was no sense that there is a clearly defined set of research designs with associated purposes on which the potential user can draw. (The possible role of experiments in supporting policy was also mentioned but no conclusions were drawn.)

Extracting Rich Data from Experiments

Traditional experiments are time-consuming to do, so they are frequently optimised to obtain the maximum power and discrimination between factors of interest. In such situations they will often limit their data collection to what is strictly necessary for testing their hypotheses. Furthermore, it seems to be a hangover from behaviourist psychology that one does not use self-reporting on the grounds that it might be biased or simply involve false reconstruction (rationalisation). From the point of view of building or assessing ABMs this approach involves a wasted opportunity. Due to the flexible nature of ABMs there is a need for as many empirical constraints upon modelling as possible. These constraints can come from theory, evidence or abstract principles (such as simplicity); they should not hinder the design of an ABM but rather act as a check on its outcomes. Game-like situations can provide rich data about what is happening, simultaneously capturing decisions on action, the position and state of players, global game outcomes/scores and what players say to each other (see, for example, Janssen et al. 2010, Lindahl et al. 2021). Often, in social science one might have a survey with one set of participants, interviews with others and longitudinal data from yet others – even if these, in fact, involve the same people, the data will usually not indicate this through consistent IDs. When collecting data from a game (and especially from online games) there is a possibility for collecting linked data with consistent IDs – including interviews – that allows for a whole new level of ABM development and checking.
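To make the linked-data point concrete, here is a minimal illustrative sketch (the data, field names and function are invented for exposition, not taken from any particular game platform) of how consistent participant IDs let behavioural logs, chat transcripts and interview material be assembled into a single record against which an ABM could be checked:

```python
# Invented example data: three streams from the same game session,
# all keyed by the same participant ID ("pid").
game_log = [
    {"pid": "P01", "round": 1, "action": "cooperate", "score": 3},
    {"pid": "P02", "round": 1, "action": "defect", "score": 5},
]
chat = [
    {"pid": "P01", "round": 1, "text": "let's all chip in"},
]
interviews = [
    {"pid": "P01", "stage": "post", "stated_motive": "fairness"},
]

def linked_record(pid):
    """Assemble every data stream for one participant under a single ID."""
    return {
        "pid": pid,
        "actions": [r for r in game_log if r["pid"] == pid],
        "chat": [c for c in chat if c["pid"] == pid],
        "interviews": [i for i in interviews if i["pid"] == pid],
    }

p01 = linked_record("P01")
```

Because the streams share an ID, a stated motive ("fairness") can be compared directly with the same person's observed behaviour ("cooperate") – exactly the cross-checking that separately sampled surveys and interviews cannot support.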

Standards and Institutional Bootstrapping

This is also a wider problem in newer methods like Agent-Based Modelling. How can we foster agreement about what we are doing (which has to build on clear concepts) and institutionalise those agreements into standards for a field (particularly when there is academic competition and pressure to publish)?[v] If certain journals will not publish experiments (or experiments done in certain ways) what can we do about that? JASSS was started because it was so hard to publish ABMs. It has certainly made that easier but is there a cost through less publication in other journals? See, for example, Squazzoni and Casnici (2013). Would it have been better for the rigour and wider acceptance of Agent-Based Modelling if we had met the standards of other fields rather than setting our own? This strategy, harder in the short term, may also have promoted communication and collaboration better in the long term. If reviewing is arbitrary (reviewers do not seem to have a common view of what makes an experiment legitimate) then can that situation be improved (and in particular how do we best go about that with limited resources?) To some extent, normal individualised academic work may achieve progress here (researchers make proposals, dispute and refine them and their resulting quality ensures at least some individualised adoption by other researchers) but there is often an observable gap in performance: Even though most modellers will endorse the value of data for modelling in principle most models are still non-empirical in practice (Angus and Hassani-Mahmooei 2015, Figure 9). The jury is still out on the best way to improve reviewer consistency, use the power of peer review to impose better standards (and thus resolve a collective action problem under academic competition[vi]) and so on but recognising and trying to address these issues is clearly important to the health of experimental methods in Agent-Based Modelling.
Since running experiments in association with ABMs is already challenging, adding the problem of arbitrary reviewer standards makes the publication process even harder. This discourages scientists from following this path and therefore retards this kind of research generally. Again, here, useful resources (like the Psychological Science Accelerator, which facilitates greater experimental rigour by various means) were suggested in discussion as raw material for our own improvements to experiments in Agent-Based Modelling.

Another issue with newer methods such as Agent-Based Modelling is the path to legitimation before the wider scientific community. The need to integrate ABMs with experiments does not necessarily imply that the legitimation of the former is achieved by the latter. Experimental economists, for instance, may still argue that (in the investigation of behaviour and its implications for policy issues), experiments and data analysis alone suffice. They may rightly ask: What is the additional usefulness of an ABM? If an ABM always needs to be justified by an experiment and then validated by a statistical model of its output, then the method might not be essential at all. Orthodox economists skip the Agent-Based Modelling part: They build behavioural experiments, gather (rich) data, run econometric models and make predictions, without the need (at least as they see it) to build any computational representation. Of course, the usefulness of models lies in the premise that they may tell us something that experiments alone cannot (see Knudsen et al. 2019). But progress needs to be made in understanding (and perhaps reconciling) these divergent positions. The social simulation community therefore needs to be clearer about exactly what ABMs can contribute beyond the limitations of an experiment, especially when addressing audiences of non-modellers (Ballard et al. 2021). Not only is a model valuable when rigorously validated against data, but also whenever it makes sense of the data in ways that traditional methods cannot.

Where Now?

Researchers usually have more enthusiasm than they have time. In order to make things happen in an academic context it is not enough to have good ideas, people need to sign up and run with them. There are many things that stand a reasonable chance of improving the profile and practice of experiments in Agent-Based Modelling (regular sessions at SSC, systematic reviews, practical guidelines and evaluated case studies, discussion groups, books or journal special issues, training and funding applications that build networks and teams) but to a great extent, what happens will be decided by those who make it happen. The organisers of this round table (Nanda Wijermans and Edmund Chattoe-Brown) are very keen to support and coordinate further activity and this summary of discussions is the first step to promote that. We hope to hear from you.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16, <http://jasss.soc.surrey.ac.uk/18/4/16.html>. doi:10.18564/jasss.2952

Ballard, Timothy, Palada, Hector, Griffin, Mark and Neal, Andrew (2021) ‘An Integrated Approach to Testing Dynamic, Multilevel Theory: Using Computational Models to Connect Theory, Model, and Data’, Organizational Research Methods, 24(2), April, pp. 251-284. doi: 10.1177/1094428119881209

Basole, Rahul C., Bodner, Douglas A. and Rouse, William B. (2013) ‘Healthcare Management Through Organizational Simulation’, Decision Support Systems, 55(2), May, pp. 552-563. doi:10.1016/j.dss.2012.10.012

Boero, Riccardo, Bravo, Giangiacomo, Castellani, Marco and Squazzoni, Flaminio (2010) ‘Why Bother with What Others Tell You? An Experimental Data-Driven Agent-Based Model’, Journal of Artificial Societies and Social Simulation, 13(3), June, article 6, <https://www.jasss.org/13/3/6.html>. doi:10.18564/jasss.1620

Bornstein, Brian H. (1999) ‘The Ecological Validity of Jury Simulations: Is the Jury Still Out?’ Law and Human Behavior, 23(1), February, pp. 75-91. doi:10.1023/A:1022326807441

Burton, Richard M. and Obel, Børge (2018) ‘The Science of Organizational Design: Fit Between Structure and Coordination’, Journal of Organization Design, 7(1), December, article 5. doi:10.1186/s41469-018-0029-2

Derbyshire, James (2020) ‘Answers to Questions on Uncertainty in Geography: Old Lessons and New Scenario Tools’, Environment and Planning A: Economy and Space, 52(4), June, pp. 710-727. doi:10.1177/0308518X19877885

Dinesen, Peter Thisted (2013) ‘Where You Come From or Where You Live? Examining the Cultural and Institutional Explanation of Generalized Trust Using Migration as a Natural Experiment’, European Sociological Review, 29(1), February, pp. 114-128. doi:10.1093/esr/jcr044

Doran, Jim (1998) ‘Simulating Collective Misbelief’, Journal of Artificial Societies and Social Simulation, 1(1), January, article 3, <https://www.jasss.org/1/1/3.html>.

Janssen, Marco A., Holahan, Robert, Lee, Allen and Ostrom, Elinor (2010) ‘Lab Experiments for the Study of Social-Ecological Systems’, Science, 328(5978), 30 April, pp. 613-617. doi:10.1126/science.1183532

Kagel, John H. and Roth, Alvin E. (eds.) (1995) The Handbook of Experimental Economics (Princeton, NJ: Princeton University Press).

Knudsen, Thorbjørn, Levinthal, Daniel A. and Puranam, Phanish (2019) ‘Editorial: A Model is a Model’, Strategy Science, 4(1), March, pp. 1-3. doi:10.1287/stsc.2019.0077

Levine, Gustav and Parkinson, Stanley (1994) Experimental Methods in Psychology (Hillsdale, NJ: Lawrence Erlbaum Associates).

Lindahl, Therese, Janssen, Marco A. and Schill, Caroline (2021) ‘Controlled Behavioural Experiments’, in Biggs, Reinette, de Vos, Alta, Preiser, Rika, Clements, Hayley, Maciejewski, Kristine and Schlüter, Maja (eds.) The Routledge Handbook of Research Methods for Social-Ecological Systems (London: Routledge), pp. 295-306. doi:10.4324/9781003021339-25

Ramanath, Ana Maria and Gilbert, Nigel (2004) ‘The Design of Participatory Agent-Based Social Simulations’, Journal of Artificial Societies and Social Simulation, 7(4), October, article 1, <https://www.jasss.org/7/4/1.html>.

Reypens, Charlotte and Levine, Sheen S. (2018) ‘Behavior in Behavioral Strategy: Capturing, Measuring, Analyzing’, in Behavioral Strategy in Perspective, Advances in Strategic Management Volume 39 (Bingley: Emerald Publishing), pp. 221-246. doi:10.1108/S0742-332220180000039016

Squazzoni, Flaminio and Casnici, Niccolò (2013) ‘Is Social Simulation a Social Science Outstation? A Bibliometric Analysis of the Impact of JASSS’, Journal of Artificial Societies and Social Simulation, 16(1), January, article 10, <http://jasss.soc.surrey.ac.uk/16/1/10.html>. doi:10.18564/jasss.2192

Taillandier, Patrick, Grignard, Arnaud, Marilleau, Nicolas, Philippon, Damien, Huynh, Quang-Nghi, Gaudou, Benoit and Drogoul, Alexis (2019) ‘Participatory Modeling and Simulation with the GAMA Platform’, Journal of Artificial Societies and Social Simulation, 22(2), March, article 3, <https://www.jasss.org/22/2/3.html>. doi:10.18564/jasss.3964

Tykhonov, Dmytro, Jonker, Catholijn, Meijer, Sebastiaan and Verwaart, Tim (2008) ‘Agent-Based Simulation of the Trust and Tracing Game for Supply Chains and Networks’, Journal of Artificial Societies and Social Simulation, 11(3), June, article 1, <https://www.jasss.org/11/3/1.html>.

Vuculescu, Oana (2017) ‘Searching Far Away from the Lamp-Post: An Agent-Based Model’, Strategic Organization, 15(2), May, pp. 242-263. doi:10.1177/1476127016669869

Zelditch, Morris Junior (2007) ‘Laboratory Experiments in Sociology’, in Webster, Murray Junior and Sell, Jane (eds.) Laboratory Experiments in the Social Sciences (New York, NY: Elsevier), pp. 183-197.


Notes

[i] This event was organised (and the resulting article was written) as part of “Towards Realistic Computational Models of Social Influence Dynamics” a project funded through ESRC (ES/S015159/1) by ORA Round 5 and involving Bruce Edmonds (PI) and Edmund Chattoe-Brown (CoI). More about SSC2021 (Social Simulation Conference 2021) can be found at https://ssc2021.uek.krakow.pl

[ii] This issue is actually very challenging for social science more generally. When considering interventions in social systems, knowing and acting might be so deeply intertwined (Derbyshire 2020) that interventions may modify the same behaviours that an experiment is aiming to understand.

[iii] In addition, experiments often require institutional ethics approval (but so do interviews, gaming activities and other sorts of empirical research of course), something with which non-empirical Agent-Based Modellers may have little experience.

[iv] Chattoe-Brown had interesting personal experience of this. He took part in a simple team gaming exercise about running a computer firm. The team quickly worked out that the game assumed an infinite return to advertising (so you could have a computer magazine consisting entirely of adverts) independent of the actual quality of the product. They thus simultaneously performed very well in the game from the perspective of an external observer but remained deeply sceptical that this was a good lesson to impart about running an actual firm. But since the coordinators never asked the team members for their subjective view, they may have assumed that the simulation was also a success in its didactic mission.

[v] We should also not assume it is best to set our own standards from scratch. It may be valuable to attempt integration with existing approaches, like qualitative validity (https://conjointly.com/kb/qualitative-validity/) particularly when these are already attempting to be multidisciplinary and/or to bridge the gap between, for example, qualitative and quantitative data.

[vi] Although journals also face such a collective action problem at a different level. If they are too exacting relative to their status and existing practice, researchers will simply publish elsewhere.


Dino Carpentras, Edmund Chattoe-Brown, Bruce Edmonds, Cesar García-Diaz, Christian Kammler, Anna Pagani and Nanda Wijermans (2021) Where Now For Experiments In Agent-Based Modelling? Report of a Round Table as Part of SSC2021. Review of Artificial Societies and Social Simulation, 2nd November 2021. https://rofasss.org/2021/11/02/round-table-ssc2021-experiments/


The ASSOCC Simulation Model: A Response to the Community Call for the COVID-19 Pandemic

Amineh Ghorbani1, Fabian Lorig2, Bart de Bruin1, Paul Davidsson2, Frank Dignum3, Virginia Dignum3, Mijke van der Hurk4, Maarten Jensen3, Christian Kammler3, Kurt Kreulen1, Luis Gustavo Ludescher3, Alexander Melchior4, René Mellema3, Cezara Păstrăv3, Loïs Vanhée5, and Harko Verhagen6

1TU Delft, Netherlands, 2Malmö University, Sweden, 3Umeå University, Sweden, 4Utrecht University, Netherlands, 5University of Caen, France, 6Stockholm University, Sweden

(A contribution to the JASSS-Covid19-Thread)

Abstract: This article is a response to the call for action to the social simulation community to contribute to research on the COVID-19 pandemic crisis. We introduce the ASSOCC model (Agent-based Social Simulation for the COVID-19 Crisis), a model that has specifically been designed and implemented to address the societal challenges of this pandemic. We reflect on how the model addresses many of the challenges raised in the call for action. We conclude by pointing out that the focus of the efforts of the social simulation community should be less on data- and prediction-based simulations and more on the explanation of mechanisms and the exploration of social dependencies and the impact of interventions.

Introduction

The COVID-19 crisis is a pandemic that is currently spreading all over the world. It has already taken a dramatic toll on humanity, affecting the daily life of billions of people and causing a global economic crisis resulting in deficits and unemployment rates never experienced before. Decision makers as well as the general public are in dire need of support to understand the mechanisms and connections in the ongoing crisis, as well as support for potentially life-threatening and far-reaching decisions that are to be made with unknown consequences. Many countries and regions are struggling to deal with the impacts of the COVID-19 crisis on healthcare, the economy and the social well-being of communities, resulting in many different interventions. Examples are the complete lock-down of cities and countries, appeals to the individual responsibility of citizens, and suggestions to use digital technology for tracking and tracing of the disease spread. All these strategies require considerable behavioural changes by all individuals.

In such an unprecedented situation, agent-based social simulation seems to be a very suitable technique for achieving a better understanding of the situation and for providing decision-making support. Most of the available simulations for pandemics focus either on specific aspects of the crisis, such as epidemiology (Chang et al., 2020), or on simplified general agglomerated mechanics (e.g., IndiaSIM). Many models, repurposing existing models that were originally developed for other pandemics such as influenza, are mostly illustrative and intend to provide theory exposition (Squazzoni et al., 2020). Although current simulations are based on advanced statistical modelling that enables sound predictions of specific aspects of the disease, they use very limited models of human motives and cultural differences. Yet, understanding the possible consequences of drastic policy measures requires more than statistical analysis such as the R0 factor (the basic reproduction number, which denotes the expected number of cases directly generated by one case in a population) or economic variables. Measures impact people and thus need to consider individuals’ needs (e.g., affiliation, control, or self-fulfilment), social networks (norms, relationships), and how these attributes and conditions can quickly change during difficult situations (e.g., need for job and food security, overloaded hospitals, loss of relatives).
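The parenthetical definition of R0 above can be made concrete with a deliberately crude branching-process sketch (our own illustration for exposition, not part of ASSOCC or any epidemiological model cited here): if each case generates R0 new cases on average, expected case counts per generation grow or die out geometrically, with no room for needs, norms or behavioural change.

```python
def expected_cases(r0, generations, seed_cases=1):
    """Expected new cases in each generation if every case infects r0 others on average."""
    return [seed_cases * r0 ** g for g in range(generations + 1)]

# R0 above 1 means growth, below 1 means the outbreak fades:
growing = expected_cases(2.0, 3)  # each generation doubles
fading = expected_cases(0.5, 3)   # each generation halves
```

This is exactly the kind of aggregate summary the text argues is insufficient on its own: the single parameter r0 hides all of the individual and social mechanisms that determine whether people actually comply with measures.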

In this context we have developed ASSOCC (Agent-based Social Simulation for the COVID-19 Crisis; see Figure 1) as a many-faceted observatory of scenarios. In ASSOCC, we connect the many involved aspects in a cohesive simulation, helping stakeholders to raise their general awareness of all critical aspects of the problem and especially the dependencies between them. Of course, one can hardly aim to cover a large variety of aspects and have very complete models of each of them. Thus, we strike a balance between the broadness of the model and its accuracy on all aspects. This simulation delivers a complementary perspective to state-of-the-art disciplinary models. Where most other simulations offer sharp yet isolated pieces of the image, our approach is valuable for combining the pieces of the puzzle, since a specific modelling focus can limit space for debate (ní Aodha & Edmonds, 2017).

The ASSOCC approach places human behaviour at the centre, as a linking pin between many disciplines and aspects: psychology (needs, values, beliefs, plans), social sensitivity (norms, social networks, work relationships), infrastructures (transportation, supplies), epidemiology (spreading), economy (transactions, bankruptcy), cultural influences and public measures (closing activities, lock-down, social distancing, testing). The already complex model is extended on a daily basis. This is done in a largely modular fashion such that specific aspects can be switched on and off during the runs. This leads to some limitations and also requires re-calibration of variables, but overall it seems worth the effort when looking at the first results of the scenarios we have simulated.

In this article, we aim to share our approach to simulating the COVID-19 pandemic, outline how the building and use of ASSOCC takes up a number of the challenges posed by Squazzoni et al. (2020), and emphasize the potential of agent-based simulation as a method for mastering pandemics.

Figure 1: A screenshot of the Graphical User Interface of the ASSOCC simulation

Introducing the ASSOCC Model

The goal of the ASSOCC simulation model is to integrate the different parts of our daily life that are affected by the pandemic in order to support decision makers when trading off different policies against each other. It facilitates the identification of potential interdependencies that might exist and need to be addressed. This is important because different countries, cultures and populations affect the suitability and consequences of measures, thus requiring local conditions to be taken into account. The model allows stakeholders to study individual and social reactions to different policies, to explore different scenarios, and to analyse their potential effects.

Figure 2: A screenshot of the base simulation model.

How it works

The ASSOCC simulation model is based on a synthetic population that consists of a set of artificial individuals (see Figure 1), each with given needs, demographic characteristics and attitude towards regulations and risks. By having all these agents decide over time what they should be doing, we can analyse their reactions to many different policies, such as total lock-down or voluntary isolation. Agents can move, perceive other agents, and decide on their actions based on their individual characteristics and their perception of the environment. The environment constrains the physical actions of the agents but can also impose norms and regulations on their behaviour. Through interaction, agents can take over characteristics from the other agents, such as becoming infected with COVID-19, or receiving information.
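The decision cycle described above can be sketched in a few lines (a toy illustration, not the actual ASSOCC decision model; the needs, activities and the compliance parameter are all invented for the example): an agent picks the activity that serves its most deprived need, but obeys a policy ban with some probability of compliance.

```python
import random

# Hypothetical mapping from needs to the activity that satisfies them.
NEED_TO_ACTIVITY = {
    "health": "stay_home",
    "wealth": "go_to_work",
    "belonging": "visit_friends",
}

def choose_activity(needs, forbidden, compliance, rng=random):
    """needs: dict need -> satisfaction level in [0, 1] (low = deprived);
    forbidden: set of activities banned by the current policy;
    compliance: probability that the agent obeys a ban."""
    # Consider needs from most deprived to least deprived.
    for need, _ in sorted(needs.items(), key=lambda kv: kv[1]):
        activity = NEED_TO_ACTIVITY[need]
        if activity in forbidden and rng.random() < compliance:
            continue  # obey the ban, fall back to the next-most-urgent need
        return activity
    return "stay_home"

print(choose_activity({"health": 0.9, "wealth": 0.2, "belonging": 0.6},
                      forbidden={"go_to_work"}, compliance=1.0))
# with full compliance the work ban holds, so the agent serves 'belonging'
```

The same call with `compliance=0.0` would ignore the ban and return `"go_to_work"`, which is how individual characteristics such as the propensity to follow the law can modulate a policy's effect.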

Agents

In the ASSOCC model, there are four types of agents: children, students, workers, and retirees. These types represent different age groups with different socio-demographic attributes, common activities, infection risks and behaviours. Each agent has a health status that represents being infected, symptomatic or asymptomatic contagiousness, and a critical state. Moreover, agents have needs and capabilities as well as personal characteristics such as risk aversion and the propensity to follow the law. Needs of the agent include health, wealth and belonging. They are modelled using the water tank model introduced by Dörner et al. (2006). Agent capabilities capture for instance their jobs or family situations. Agents need a minimum wealth value to survive which they receive by working or through subsidies (or by living together with a working agent). In shops and workplaces, agents trade wealth for products and services. Agents pay tax to a central government that then uses this money for subsidies and the maintenance of public services such as hospitals and schools.
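The water-tank mechanism can be illustrated with a minimal sketch (our own simplification of the idea in Dörner et al. 2006, not the ASSOCC implementation; names and rates are placeholders): each need drains a little every time step, is refilled when the agent performs a satisfying activity, and becomes more urgent the emptier its tank is.

```python
class Need:
    """A 'water tank' need: the level drains over time and is refilled
    when the agent performs a satisfying activity."""
    def __init__(self, name, level=1.0, decay=0.05):
        self.name = name
        self.level = level   # 0.0 (fully deprived) .. 1.0 (fully satisfied)
        self.decay = decay   # drain per time step (placeholder value)

    def step(self):
        # The tank drains a fixed amount each tick, never below empty.
        self.level = max(0.0, self.level - self.decay)

    def satisfy(self, amount):
        # A satisfying activity refills the tank, never above full.
        self.level = min(1.0, self.level + amount)

    def urgency(self):
        # The emptier the tank, the more urgent the need.
        return 1.0 - self.level

belonging = Need("belonging", level=0.8, decay=0.1)
belonging.step()        # level drains by 0.1
belonging.satisfy(0.2)  # e.g. the agent visited friends
```

Urgency values from several such tanks could then feed an activity-selection step like the one sketched in the previous section.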

Places

During the simulation, agents can move between different places according to their needs and obligations. Places represent homes, shops, hospitals, workplaces, schools, airports and stations. By assigning agents to homes, different households can be represented: single adults, families, retirement homes, and multi-generational households with children, adults and elderly people. The configuration of households is assumed to have an impact on the spreading of COVID-19, and great differences in household configurations exist between countries. Thus, the distribution of these households can be set in the simulation to analyse the situation in different cities or countries.
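The configurable household mix might be sketched as follows (a hypothetical illustration; the type names and weights are made up, not ASSOCC's calibrated distributions): the proportions are free parameters, so the same model can be set up for different countries.

```python
import random

# Hypothetical household types; the weights below are illustrative only.
HOUSEHOLD_TYPES = ["single_adult", "family", "retirement_home", "multi_generational"]

def sample_households(n, weights, rng=None):
    """Draw n household types according to the given proportions."""
    rng = rng or random.Random(42)  # fixed seed for a reproducible sketch
    return rng.choices(HOUSEHOLD_TYPES, weights=weights, k=n)

# E.g. a country with many families and few retirement homes:
mix = sample_households(1000, weights=[0.3, 0.5, 0.1, 0.1])
```

Changing the weight vector is then enough to re-run the same scenario for a differently structured population.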

Policies

Policies describe interventions that can be taken by decision makers such as social distancing, infection and immunity testing or closing of schools and workplaces. Policies have complex effects on health, wealth and well-being of all agents. Policies can be extended in many different ways to provide an experimentation environment for decision makers. It is not only the decision of whether or not to implement certain policies but also the point in time when the policy is implemented that influences its success.
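The role of timing can be made concrete with a small sketch (illustrative only; the policy names and tick values are invented): each intervention is active over an interval of simulation ticks, so *when* a measure starts can be varied as an experimental parameter alongside *whether* it is applied at all.

```python
def active_policies(tick, schedule):
    """schedule: list of (start_tick, end_tick, policy_name) tuples.
    Returns the set of policies in force at the given tick."""
    return {name for start, end, name in schedule if start <= tick < end}

# Illustrative schedule: schools close on tick 30, a lockdown is added
# on tick 45 and lifted on tick 90.
schedule = [(30, 120, "close_schools"), (45, 90, "lockdown")]
print(active_policies(50, schedule))  # both measures are in force at tick 50
```

Shifting the start ticks in the schedule and re-running the scenario is then the experiment: the same measure applied earlier or later can produce quite different outcomes.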

Conceptual Design

The ASSOCC model has been conceptualized based on many theories from various scientific disciplines, including psychology (basic motives and needs (McClelland, 1987; Jerome, 2013)), sociology (the Schwartz value system (Schwartz, 2012)), culture (Hofstede’s cultural dimensions (Hofstede et al., 2010)), economics (the circular flow of income (Murphy, 1993)), and epidemiology (the SEIR model (Cope et al., 2018)). For the disease model, we drew on the following sources: a case study of the time course of COVID-19 (Xu et al., 2020), a cohort study showing the general time course of the disease with and without fatality (Zhou et al., 2020), and the incubation period estimated from confirmed cases (Lauer et al., 2020). This theory-driven model determines the reaction of agents to policies and to their physical and social context.
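As a rough illustration of an SEIR-style individual disease course (a simplified sketch with placeholder durations, not the values ASSOCC derives from the sources above; exposure through contact is left out):

```python
# State -> (next state, days spent in state before progressing).
# Durations are placeholders roughly in the range the cited studies report.
TRANSITIONS = {
    "exposed": ("infectious", 5),     # incubation: about 5 days
    "infectious": ("recovered", 10),  # infectious period: about 10 days
}

def step_health(state, days_in_state):
    """Advance one agent's health state by one day."""
    nxt = TRANSITIONS.get(state)  # terminal states (e.g. 'recovered') have no entry
    if nxt and days_in_state + 1 >= nxt[1]:
        return nxt[0], 0
    return state, days_in_state + 1

state, days = "exposed", 0
for _ in range(5):
    state, days = step_health(state, days)
print(state)  # after 5 days the agent becomes infectious
```

In the full model such a progression would also branch into symptomatic versus asymptomatic contagiousness and a critical state, as described in the Agents section.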

A short description of the conceptual architecture of ASSOCC as well as an overview of the agent architecture are available at the project website.

Tools

The simulation is built in NetLogo (see Figure 2), with a visual interface in Unity (see Figure 1). The NetLogo model can be used as a standalone simulation model. For the scenarios, we use the Unity interface for better visualisation of the simulation. The complete source code is available on GitHub under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Note that at the time of publication of this article, this is still a beta version of the model, which we are continuously developing. The complete description of the agent-based model using the ODD protocol can also be found on the ASSOCC website.

Addressing Key Challenges

Having explained the ASSOCC framework, in this section we explain how our modelling effort addresses the challenges raised by Squazzoni et al. (2020).

Like any model, the ASSOCC model cannot be a complete representation of reality and has its own limitations. Yet, we believe that the dimensions of social complexity we have included provide a promising basis for drawing useful insights. As rightly highlighted by Squazzoni et al. (2020), the quality of a model depends on its purpose, its theoretical assumptions, the level of abstraction, and the quality of data.

The purpose of the ASSOCC model is to illustrate and investigate mechanisms. Through the simulation of scenarios, ASSOCC shows the dependencies between human behaviour, the spread of the virus, economic incentives and people’s psychological needs.

In the next sections we explain how the ASSOCC model addresses the main issues raised by Squazzoni et al. (2020).

Social Complexity

In order to incorporate the pre-existing behavioural attitudes, network effects, social norms and culture that influence people’s response to policy measures, we have built an extended cross-disciplinary team of researchers. We have spent extra time and effort to construct a complex model in which social complexity is extensively taken into account. As an example, the Maslow theory of individual needs takes pre-existing behavioural attitudes of individuals into account (Jerome, 2013). By connecting this theory to the Schwartz value dimensions (Schwartz, 2012), and these dimensions in turn to the cultural dimensions of Hofstede (Hofstede et al., 2010), we incorporate a whole spectrum ranging from individual biological and social needs all the way to cultural diversity among nations.

Yet, the limitations of ASSOCC lie in the richness of each of the societal dimensions. We use some rather simple models for, for example, the economic, cultural, social network and transport aspects. We document the choices that have been made, indicating which complexities we left out, why they were left out, and why we think this does not affect the validity of our results. For example, in the transport dimension we do not distinguish between cars and bikes: distances in the model are small, and both can be used as solo means of transport. We are aware that the two differ in economic terms and in the values behind choosing between them, but these aspects are not very relevant for the spread of the virus.

Transparency

Although there is pressure on the community to respond to this crisis and to provide expert judgement, we have sacrificed neither the complexity of our model nor its transparency in order to provide rapid answers. In fact, we have aimed to make our modelling process as transparent as possible. At the level of the code, ASSOCC uses a GitHub repository to make the source publicly available. Besides code documentation, our large-scale model uses the ODD protocol to make the model transparent at the conceptual level. Additionally, by building the Unity interface layer on top of the NetLogo model, we connect policy scenarios to the parameter setup of the model, so that policy makers themselves can see how changes to scenarios lead to various outcomes.

By emphasizing that ASSOCC creates simulations of policy scenarios, we step away from recommending a particular “best” policy. Rather, we highlight the fundamental questions and priorities that have to be dealt with when choosing among various policies. This is done by showing the consequences of implementing various scenarios and comparing them. Such a comparison can, for example, show how different groups of people are affected economically and health-wise by a policy. The most appropriate policy thus depends on which outcomes are deemed more desirable.

Data

Given the short time since the outbreak, accurate COVID-19 data suitable for complex agent-based models is not yet available. It is not clear how various cases are defined and how the data is collected. However, in our view, this should not limit our modelling abilities for this much-needed rapid response.

In our view, detailed data is not required to build a useful model. In fact, our model is a ’SimCity’ for studying various policy scenarios rather than a data-driven representation of actual cities. While, for overall validity, we have made sure that our model can show patterns similar to those observed in reality, fine-grained data is not included. The data used for the simulation comes from particular epidemiological models, from economic models, and from calibrating the model against known, normal situations.

As illustrated by the models described in Squazzoni et al. (2020), even models that are calibrated with real-world data fail to capture important aspects such as network effects, as these changes are still based on stochastic processes. Therefore, aware that the current data is neither available nor reliable enough, we have built our model on a strong theoretical basis in order to avoid oversimplifying factors that play important roles in this crisis.

Interface between modelling and policy

As highlighted by Squazzoni et al. (2020), “good pandemic models are not always good policy advice models”. We fully agree with this point, which is central to our modelling efforts. A user interface has been especially developed in Unity (see Figure 1) to support comprehension of the model by policy makers and to facilitate experimentation. In the Unity interface, one can explore the different parameters of a scenario, see the results of the simulations in graph form, and also follow several aspects live through the elements in the spatial representation of the town. This spatial interface is meant purely to support a better understanding of the model. We believe that being clear about our modelling goal increases policy makers’ trust in our insights.

In addition, we have been in close contact with policy makers around the world to, on the one hand, understand their needs and immediate and long-term concerns, and on the other hand, communicate our model’s capabilities in the most concise manner to support their decisions. To date, we have engaged with policy makers in the Netherlands, Italy and Sweden.

Predictive Power

In our interactions with policy makers and other users, we make clear that the ASSOCC platform is not meant to give detailed predictions, but to support the generation of insights. Such a broad model is best used to indicate dependencies and trends between different aspects of society. Due to the computing power needed for each agent’s complex reasoning, it is difficult to scale this type of model to more than a few thousand agents, at least in NetLogo.
The validation of the model can be done through the causal chains that can be traced through the model: certain outcomes can be linked, via agent states, to certain causes in the environment or in the actions of other agents. If these causal chains can be interpreted as plausible stories that are confirmed by the theories of the respective aspects, a certain type of high-level validation is achieved. This is not validation against data, but validation based on expert opinion.

A second type of validation that can be done on this type of ABM is a detailed comparison with established epidemiological models. For instance, we are comparing our simulation with the one used by Ferretti et al. (2020) in a particular scenario that investigates the effect of tracking-and-tracing apps. By translating the assumptions and parameters very carefully into ASSOCC parameters and comparing the resulting simulations, we can validate the underlying models against more traditional ones and also reveal deviations that highlight advantages or lacunae in the ASSOCC model. The results of this comparison will be published jointly by the two groups. Finally, we are calibrating ASSOCC parameters against statistical data, such as R0, the number of deaths, and demographic data, as a means to improve validity.
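One such calibration check can be sketched as follows (a simplified illustration, not the actual ASSOCC calibration code; the event format is invented): estimate the mean number of secondary cases per infector from simulated transmission events, and compare that with a target R0 reported in the literature.

```python
from collections import Counter

def empirical_r(infections):
    """infections: list of (infector_id, infectee_id) transmission events
    recorded during a run. Returns the mean number of secondary cases
    per infector (a crude reproduction-number estimate)."""
    secondary = Counter(infector for infector, _ in infections)
    if not secondary:
        return 0.0
    return sum(secondary.values()) / len(secondary)

# Toy run: agent 1 infected two others, agent 2 infected one.
events = [(1, 2), (1, 3), (2, 4)]
print(empirical_r(events))  # 1.5
```

Simulation parameters would then be adjusted until such run-level statistics fall within the range reported for the real outbreak.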

Conclusion

In this article, we presented the ASSOCC model as a comprehensive modelling endeavour that aims to contribute to the efforts for managing the COVID-19 crisis. By modelling multiple aspects of the society and interrelating them, we provide insights into the underlying mechanisms in the society that are influenced both by the outbreak as well as policy measures that aim to control it.

Being aware of the challenges, we have aimed to include as much social complexity as possible in the model to avoid biases and oversimplification. At the same time, by being in close contact with policy makers around the world, we have taken the actual needs and considerations into account, while providing a traceable, usable and comprehensible user interface that brings the modelling insights within the reach of policy makers. In our modelling efforts, we have paid extra attention to transparency, providing well-documented and open-source code that can be used by the rest of the simulation community.

All the assumptions, underlying theories and the source code of ASSOCC are available on the project website and on GitHub. We invite people to use it and give feedback; based on this feedback we continuously improve the model and its parameters. As the pandemic and the surrounding discussion develop, new scenarios will be added as well.

We hope that the ASSOCC model can contribute to handling this crisis in a way that shows the capabilities and usefulness of agent-based modelling.

References

Chang, S. L., Harding, N., Zachreson, C., Cliff, O. M. & Prokopenko, M. (2020). Modelling transmission and control of the COVID-19 pandemic in Australia. arXiv preprint arXiv:2003.10218. <https://arxiv.org/abs/2003.10218>

Cope, R. C., Ross, J. V., Chilver, M., Stocks, N. P., & Mitchell, L. (2018). Characterising seasonal influenza epidemiology using primary care surveillance data. PLoS computational biology, 14(8), e1006377. doi:10.1371/journal.pcbi.1006377

Dörner, D., Gerdes, J., Mayer, M., & Misra, S. (2006, April). A simulation of cognitive and emotional effects of overcrowding. In Proceedings of the Seventh International Conference on Cognitive Modeling (pp. 92-98). Triest, Italy: Edizioni Goliardiche.

Ferretti, L., Wymant, C., Kendall, M., Zhao, L., Nurtay, A., Abeler-Dörner, L., Parker, M., Bonsall, D. & Fraser, C. (2020). Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science, 31 Mar 2020: eabb6936. doi:10.1126/science.abb6936

Hofstede, G., Hofstede, G. J. & Minkov, M. (2010). Cultures and organizations: Software of the mind. revised and expanded 3rd edition. N.-Y.: McGraw-Hill.

Jerome, N. (2013). Application of the Maslow’s hierarchy of need theory; impacts and implications on organizational culture, human resource and employee’s performance. International Journal of Business and Management Invention, 2(3), 39–45.

Lauer, S. A., Grantz, K. H., Bi, Q., Jones, F. K., Zheng, Q., Meredith, H. R., Azman, A. S., Reich, N. G. & Lessler, J. (2020). The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Annals of Internal Medicine.

McClelland, D. (1987). Human Motivation. Cambridge University Press.

Murphy, A. E. (1993). John Law and Richard Cantillon on the circular flow of income. The European Journal of the History of Economic Thought, 1(1), 47–62.

ní Aodha, L. & Edmonds, B. (2017). Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.), Simulating Social Complexity – A Handbook, 2nd edition. Springer, 801–822.

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Xu, Z., Shi, L., Wang, Y., Zhang, J., Huang, L., Zhang, C., Liu, S., Zhao, P., Liu, H., Zhu, L. et al. (2020). Pathological findings of COVID-19 associated with acute respiratory distress syndrome. The Lancet Respiratory Medicine, 8(4), 420–422.

Zhou, F., Yu, T., Du, R., Fan, G., Liu, Y., Liu, Z., Xiang, J., Wang, Y., Song, B., Gu, X. et al. (2020). Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. The Lancet, 395(10229), 1054–1062. doi:10.1016/S0140-6736(20)30566-3


Ghorbani, A., Lorig, F., de Bruin, B., Davidsson, P., Dignum, F., Dignum, V., van der Hurk, M., Jensen, M., Kammler, C., Kreulen, K., Ludescher, L. G., Melchior, A., Mellema, R., Păstrăv, C., Vanhée, L. and Verhagen, H. (2020) The ASSOCC Simulation Model: A Response to the Community Call for the COVID-19 Pandemic. Review of Artificial Societies and Social Simulation, 25th April 2020. https://rofasss.org/2020/04/25/the-assocc-simulation-model/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)