
A Tale of Three Pandemic Models: Lessons Learned for Engagement with Policy Makers Before, During, and After a Crisis

By Emil Johansson1,2, Vittorio Nespeca3, Mikhail Sirenko4, Mijke van den Hurk5, Jason Thompson6, Kavin Narasimhan7, Michael Belfrage1, 2, Francesca Giardini8, and Alexander Melchior5,9

  1. Department of Computer Science and Media Technology, Malmö University, Sweden
  2. Internet of Things and People Research Center, Malmö University, Sweden
  3. Computational Science Lab, University of Amsterdam, The Netherlands
  4. Faculty of Technology, Policy and Management, Delft University of Technology, The Netherlands
  5. Department of Information and Computing Sciences, Utrecht University, The Netherlands
  6. Transport, Health and Urban Design Research Lab, The University of Melbourne, Australia
  7. Centre for Research in Social Simulation, University of Surrey, United Kingdom
  8. Department of Sociology & Agricola School for Sustainable Development, University of Groningen, The Netherlands
  9. Ministry of Economic Affairs and Climate Policy and Ministry of Agriculture, Nature and Food Quality, The Netherlands

Motivation

Pervasive and interconnected crises such as the COVID-19 pandemic, global energy shortages, geopolitical conflicts, and climate change have shown how a stronger collaboration between science, policy, and crisis management is essential to foster societal resilience. As modellers and computational social scientists we want to help. Several cases of model-based policy support have shown the potential of using modelling and simulation as tools to prepare for, learn from (Adam and Gaudou, 2017), and respond to crises (Badham et al., 2021). At the same time, engaging with policy-makers to establish effective crisis-management solutions remains a challenge for many modellers due to a lack of forums that promote and help develop sustained science-policy collaborations. Equally challenging is finding ways to provide effective solutions under changing circumstances, as is often the case with crises.

Despite the existing guidance regarding how modellers can engage with policy makers (e.g. Vennix, 1996; Voinov and Bousquet, 2010), this guidance often does not account for the urgency that characterizes crisis response. In this article, we tell the stories of three different models developed during the COVID-19 pandemic in different parts of the world. For each of the models, we draw key lessons for modellers regarding how to engage with policy makers before, during, and after crises. Our goal is to communicate the findings from our experiences to modellers and computational scientists who, like us, want to engage with policy makers to provide model-based policy and crisis management support. We use selected examples from Kurt Vonnegut’s 2004 lecture on ‘shapes of stories’ alongside an analogy with Lewis Carroll’s Alice in Wonderland as inspiration for these stories.

Boy Meets Girl (Too Late)

A Social Simulation On the Corona Crisis’ (ASSOCC) tale

The perfect love story between social modellers and stakeholders would go like this: they meet (pre-crisis), build a trusting foundation, and then, when a crisis hits, they work together as a team, maybe have some fights, but overcome the crisis together and live happily ever after.

In the case of the ASSOCC project, we as modellers met our stakeholders too late (i.e., while we were already in the middle of the COVID-19 crisis). The stakeholders we aimed for had already met their ‘boy’: epidemiological modellers. For them, we were just one of many scientists showing up with new models and insisting that ours should be looked at. Our model showed, for example, that using a track-and-trace app would not help reduce the rate of new COVID-19 infections (as turned out to be the case), but our psychological and social approach was novel to them. The middle of a crisis was not the right time to explain the importance of integrating these kinds of concepts into epidemiological models, and without that basic trust they were reluctant to work with us.

The moral of our story is that we should invest in a (working) relationship during non-crisis times, not only to get the stakeholders on board during a crisis; such an approach would be helpful for us modellers too. For example, we integrated both social and epidemiological models within the ASSOCC project. We wanted to validate our model against the one used by Oxford University, but our modelling choices were not compatible with this type of validation. Had we been working with these types of researchers before the pandemic, we could have built a proper foundation for validation.

So, our biggest lesson learned is the importance of having a good relationship with stakeholders before a crisis hits, when there is time to get into social models and show the advantages of using them. If you invest in building and consolidating this relationship over time, we promise a happily ever after for every social modeller and stakeholder (until the next crisis hits).

Modeller’s Adventures in Wonderland

A Health Emergency Response in Interconnected Systems (HERoS) tale

If you are a modeller, you are likely to be curious and imaginative, like Alice from “Alice’s Adventures in Wonderland.” You like to think about how the world works and make models that can capture its sometimes weird mechanisms. We are the same. When COVID-19 came, we built a model of a city to understand how its citizens would behave.

But there is more. When Alice first saw the White Rabbit, she found him fascinating. A rabbit with a pocket watch who is running late: what could be more interesting? Similarly, our attention was caught by policymakers in waistcoats, always busy but able to bring change. Surely they need the model we made! But why are they running away? Our model is so helpful, just let us explain! Or maybe our model is not good enough?

Yes, we fell deep down a rabbit hole. Our first encounter with a policymaker didn’t result in a happy “yes, let’s try your model out.” But we kept knocking on doors. How many did Alice try? Alright, there is one. It seems too tiny. We met with a group of policymakers but had only 10 minutes to explain our large-scale, data-driven agent-based model. How can we possibly do that? Drink from a “Drink me” bottle, which will make our presentation smaller! Well, that didn’t help. We rushed through all the model’s complexities too fast and got applause, but that was it. OK, the next one will last an hour. Quickly! Eat an “Eat me” cake that will make the presentation longer! Oh, too many unnecessary details this time. On to the next venue!

We are in the garden. The garden of crisis response. And it is full of policymakers: Caterpillar, Duchess, Cheshire Cat and Mad Hatter. They talk in riddles (“We need to consult with the Head of Paperclip Optimization and Supply Management”), want different things (“Can you tell us what the impact of a curfew would be? Hmm, by yesterday?”), and shift responsibility from one to another. Thankfully there is no Queen of Hearts to order our beheading.

If the world of policymaking is complex, then the world of policymaking during a crisis is a wonderland. And we all live in it. We must outgrow our obsession with building better models, learn about the wonderland’s fuzzy inhabitants, and find a way to work together instead. Constant interaction and a better understanding of each other’s needs must be at the centre of modeller-policymaker relations.

“But I don’t want to go among mad people,” Alice remarked.

“Oh, you can’t help that,” said the Cat: “we’re all mad here. I’m mad. You’re mad.”

“How do you know I’m mad?” said Alice.

“You must be,” said the Cat, “or you wouldn’t have come here.”

Lewis Carroll, Alice in Wonderland

Cinderella – A city’s tale

Everyone thought Melbourne was just too ugly to go to the ball… until a little magic happened.

Once upon a time, the bustling Antipodean city of Melbourne, Victoria, found itself in the midst of a dark and disturbing period. While every other territory on the great continent of Australia had rid itself of the dreaded COVID-19 virus, Melbourne was besieged. Illness and death coursed through the land.

Shunned, the city faced scorn and derision. It was dirty. Its sisters called it a “plague state” and the people felt great shame and sadness as their family, friends and colleagues continued to fall to the virus. All they wanted was a chance to rejoin their families and countryfolk at the ball. What could they do?

Though downtrodden, the kind-hearted and resilient residents of Melbourne were determined to regain control over their lives. They longed for a glimmer of sunshine on these long, gloomy days – a touch of magic, perhaps? They turned to their embattled leaders for answers. Where was their Fairy Godmother now?

In this moment of despair, a group of scientists offered a gift in the form of a powerful agent-based model that was running on a supercomputer. This model, the scientists said, might just hold the key to transforming the fate of the city from vanquished to victor (Blakely et al., 2020). What was this strange new science? This magical black box?

Other states and scientists scoffed. “You can never achieve this!”, they said. “What evidence do you have? These models are not to be trusted. Such a feat as to eliminate COVID-19 at this scale has never been done in the history of the world!” But what of it? Why should history matter? Quietly and determinedly, the citizens of Melbourne persisted. They doggedly followed the plan.

Deep down, even the scientists knew it was risky. People’s patience and enchantment with the mystical model would not last forever. Still, this was Melbourne’s only chance. They needed to eliminate the virus so it would no longer have a grip on their lives. The people bravely stuck to the plan and each day – even when schools and businesses began to re-open – the COVID numbers dwindled from what seemed like impossible heights. Each day they edged down…

and down…

and down…until…

Finally! As the clock struck midnight, the people of Melbourne achieved the impossible: they had defeated COVID-19 by eliminating transmission. With the help of the computer model’s magic, illness and death from the virus stopped. Melbourne had triumphed, emerging stronger and more united than ever before (Thompson et al., 2022a).

From that day forth, Melbourne was internationally celebrated as a shining example of resilience, determination, and the transformative power of hope. Tens of thousands of lives were saved – and after enduring great personal and community sacrifice, its people could once again dance at the ball.

But what was the fate of the scientists and the model? Did such an experience change the way agent-based social simulation was used in public health? Not really. The scientists went back to their normal jobs and the magic of the model remained just that – magic. Its influence vanished like fairy dust on a warm summer’s evening.

Even to this day the model and its impact largely remain a mystery (despite over 10,000 words of ODD documentation). Occasionally, policy-makers or researchers going about their ordinary business might be heard to say, “Oh yes, the model. The one that kept us inside and ruined the economy. Or perhaps it was the other way around? I really can’t recall – it was all such a blur. Anyway, back to this new social problem – shall we attack it with some big data and ML techniques?”

The fairy dust has vanished but the concrete remains.

And in fairness, while agent-based social simulation remains mystical and our descriptions opaque, we cannot begrudge others for ever choosing concrete over dust (Thompson et al., 2022b).

Conclusions

So what is the moral of these tales? We consolidate our experiences into these main conclusions:

  • No connection means no impact. If modellers wish for their models to be useful before, during or after a crisis, then it is up to them to start establishing a connection and building trust with policymakers.
  • The window of opportunity for policy modelling during crises can be narrow, perhaps only a matter of days. Seizing it requires both that we can supply a model within the timeframe (impossible as that may appear) and that our relationship with stakeholders is already established.
  • Engagement with stakeholders requires knowledge and skills that might be too much to ask of modellers alone, including project management, communication with individuals without a technical background, and insight into the policymaking process.
  • Being useful does not always mean being excellent. A good model is one that is useful. By investing more in building relationships with policymakers and learning about each other, we stand a better chance of providing the needed insight. Such a shift, however, is radical and requires us to give up our obsession with the models themselves and engage with the fuzziness of the world around us.
  • If we cannot communicate our models effectively, we cannot expect to build trust with end-users over the long term, whether they be policy-makers or researchers. Individual models – and agent-based social simulation in general – need better understanding, which can only come from greater transparency and communication, in whatever form that takes.

As taxing, time-consuming and complex as the process of making policy impact with simulation models might be, it is very much a fight worth fighting; perhaps even more so during crises. Assuming our models would have a positive impact on the world, not striving to make that impact could be considered admitting defeat. Making models useful to policymakers starts with acknowledging the complexity of their environment and a willingness to dedicate time and effort to learn about it and work together. That is how we can pave the way for many more stories with happy endings.

Acknowledgements

This piece is the result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts of such a productive enterprise.

References

Adam, C. and Gaudou, B. (2017) ‘Modelling Human Behaviours in Disasters from Interviews: Application to Melbourne Bushfires’ Journal of Artificial Societies and Social Simulation 20(3), 12. http://jasss.soc.surrey.ac.uk/20/3/12.html. doi: 10.18564/jasss.3395

Badham, J., Barbrook-Johnson, P., Caiado, C. and Castellani, B. (2021) ‘Justified Stories with Agent-Based Modelling for Local COVID-19 Planning’ Journal of Artificial Societies and Social Simulation 24 (1) 8 http://jasss.soc.surrey.ac.uk/24/1/8.html. doi: 10.18564/jasss.4532

Crammond, B. R., & Kishore, V. (2021). The probability of the 6‐week lockdown in Victoria (commencing 9 July 2020) achieving elimination of community transmission of SARS‐CoV‐2. The Medical Journal of Australia, 215(2), 95-95. doi:10.5694/mja2.51146

Thompson, J., McClure, R., Blakely, T., Wilson, N., Baker, M. G., Wijnands, J. S., … & Stevenson, M. (2022). Modelling SARS‐CoV‐2 disease progression in Australia and New Zealand: an account of an agent‐based approach to support public health decision‐making. Australian and New Zealand Journal of Public Health, 46(3), 292-303. doi:10.1111/1753-6405.13221

Thompson, J., McClure, R., Scott, N., Hellard, M., Abeysuriya, R., Vidanaarachchi, R., … & Sundararajan, V. (2022). A framework for considering the utility of models when facing tough decisions in public health: a guideline for policy-makers. Health Research Policy and Systems, 20(1), 1-7. doi:10.1186/s12961-022-00902-6

Voinov, A., & Bousquet, F. (2010). Modelling with stakeholders. Environmental modelling & software, 25(11), 1268-1281. doi:10.1016/j.envsoft.2010.03.007

Vennix, J.A.M. (1996). Group Model Building: Facilitating Team Learning Using System Dynamics. Wiley.

Vonnegut, K. (2004). Lecture to Case College. https://www.youtube.com/watch?v=4_RUgnC1lm8


Johansson, E., Nespeca, V., Sirenko, M., van den Hurk, M., Thompson, J., Narasimhan, K., Belfrage, M., Giardini, F. and Melchior, A. (2023) A Tale of Three Pandemic Models: Lessons Learned for Engagement with Policy Makers Before, During, and After a Crisis. Review of Artificial Societies and Social Simulation, 15 Mar 2023. https://rofasss.org/2023/05/15/threepandemic


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

No one can predict the future: More than a semantic dispute

By Carlos A. de Matos Fernandes and Marijn A. Keijzer

(A contribution to the: JASSS-Covid19-Thread)

Models are pivotal to battling the current COVID-19 crisis. In their call to action, Squazzoni et al. (2020) convincingly put forward how social simulation researchers could and should respond in the short run, posing three challenges for the community, among which is a COVID-19 prediction challenge. Although Squazzoni et al. (2020) stress the importance of transparent communication of model assumptions and conditions, we question the liberal use of the word ‘prediction’ for the outcomes of the broad arsenal of models used by our own and other modelling communities to mitigate the COVID-19 crisis. We provide four key arguments for using expectations derived from scenarios when explaining our models to a wider, possibly non-academic audience.

The current COVID-19 crisis necessitates that we implement life-changing policies that, to a large extent, build upon predictions from complex, quickly adapted, and sometimes poorly understood models. Examples of models spurring the news media to produce catchphrase headlines are abundant (Imperial College, AceMod – the Australian Census-based Epidemic Model, IndiaSIM, IHME, etc.). And even though most of these models will be useful for assessing the comparative effectiveness of interventions in our aim to ‘flatten the curve’, the predictions that disseminate to news media are those of total cases or the timing of the inflection point.

The current focus on predictive epidemiological and behavioural models brings back an important discussion about prediction in social systems. “[T]here is a lot of pressure for social scientists to predict” (Edmonds, Polhill & Hales, 2019), and we might add ‘especially nowadays’. But forecasting in human systems is often tricky (Hofman, Sharma & Watts, 2017). Approaches that take well-understood theories and simple mechanisms often fail to grasp the complexity of social systems, yet models that rely on complex supervised machine-learning approaches may offer misleading levels of confidence (as recently and elegantly shown by Salganik et al., 2020). COVID-19 models appear to be no exception, as a recent review concluded that “[…] their performance estimates are likely to be optimistic and misleading” (Wynants et al., 2020, p. 9). Squazzoni et al. (2020: paragraph 3.3) describe these pitfalls too. In the crisis at hand, it may even be counter-productive to rely on complex models that combine well-understood mechanisms with many uncertain parameters (Elsenbroich & Badham, 2020).

Considering the level of confidence we can have about predictive models in general, we believe there is an issue with the way predictions are communicated by the community. Scientists often use ‘prediction’ to refer to some outcome of a (statistical) model where they ‘predict’ aspects of the data that are already known, but momentarily set aside. Edmonds et al. (2019: paragraph 2.4) state that “[b]y ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model”. Predictive accuracy, in this case, can then be computed later on, by comparing the prediction to the truth. Scientists know that when talking about predictions of their models, they don’t claim to generalize to situations outside of the narrow scope of their study sample or their artificial society. We are not predicting the future, and wouldn’t claim we could. However, this is wildly different from how ‘prediction’ is commonly understood: As an estimation of some unknown thing in the future. Now that our models quickly disseminate to the general public, we need to be careful with the way we talk about their outcomes.
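The distinction can be made tangible with a toy sketch in Python (all case counts are synthetic, made-up numbers for illustration only): fitting a growth trend to early data and then anticipating held-back later days is ‘prediction’ in the sense Edmonds et al. describe, because the truth is already known and the prediction can be scored; extrapolating far beyond the data is a forecast of the unknown future.

```python
import math

# Synthetic daily case counts for days 0-9 (made-up numbers, illustration only)
cases = [12, 16, 22, 29, 40, 53, 71, 95, 128, 170]

# Fit exponential growth on days 0-6 via least squares on log counts.
train = cases[:7]
xs = list(range(len(train)))
ys = [math.log(c) for c in train]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
intercept = (sum(ys) - slope * sum(xs)) / n

# 'Prediction' in the model-validation sense: anticipate the held-back
# days 7-9, then score the result against the already-known truth.
predicted = [math.exp(intercept + slope * d) for d in range(7, 10)]
errors = [abs(p - t) / t for p, t in zip(predicted, cases[7:])]
print("held-out relative errors:", [round(e, 3) for e in errors])

# Extrapolating to day 30 is a *forecast*: it assumes behaviour and policy
# stay fixed, which rarely holds in a crisis.
print("day-30 extrapolation:", round(math.exp(intercept + slope * 30)))
```

The holdout errors can be computed today because the truth is in hand; the day-30 number cannot be scored until the future arrives, by which point the assumptions behind it will likely have changed.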

Predictions in the COVID-19 crisis will remain imperfect. In the current outbreak, society cannot afford to wait for interventions to be falsified against empirical data. As the virus continues to spread rapidly, our only option is to rely on models as a basis for policy, ceteris paribus. And it is precisely here – at ‘ceteris paribus’ – that the terminology of ‘prediction’ misses the mark. All things will not be equal tomorrow, the next day, or the day after that (Van Bavel et al. [2020] note numerous factors that affect managing the COVID-19 pandemic and its impact on society). Policies around the globe are constantly being tweaked, and people’s behaviour changes dramatically as a consequence (Google, 2020). Relying too much on predictions may give a false sense of security.

We propose to avoid using the word ‘prediction’ too much and talk about scenarios or expectations instead where possible. We identify four reasons why you should avoid talking about prediction right now:

  1. Not everyone is acquainted with noise and emergence. Computational social scientists generally understand the effects of noise in social systems (Squazzoni et al., 2020: paragraph 1.8). Small behavioural irregularities can be reinforced in complex systems, fostering unexpected outcomes. Yet scientists not acquainted with studying complex social systems may be unfamiliar with the principles we have internalized by now, and may place overconfidence in the median outputs of volatile models that enter the scientific sphere as predictions.
  2. Predictions do not convey uncertainty. The general public is usually unacquainted with esoteric academic concepts. For instance, a flatten-the-curve figure generally builds upon a mean or median approximation, oftentimes neglecting the variability across different scenarios. Still, there are numerous other outcomes, building on different parameter values. We fear that when a prediction is stated to a non-specialist public, people may expect that outcome to occur with certainty. If we forecast a sunny day but there is rain, people are upset. Talking about scenarios, expectations, and mechanisms may prevent confusion and opposition when the forecast does not come to pass.
  3. It’s a model, not reality. The previous argument feeds into the third notion: be honest about what you model. A model is a model. Even the most richly calibrated model is a model. That is not to say that such models are not informative (we reiterate: models are not a shot in the dark). Still, richly calibrated models based on poor data may be more misleading than less calibrated models (Elsenbroich & Badham, 2020). Empirically calibrated models may provide more confidence at face value, but it lies in the nature of complex systems that small measurement errors in the input data may lead to big deviations in outputs. Models present a scenario for our theoretical reasoning with a given set of parameter values. We can update a model with empirical data to increase reliability, but it remains a scenario about a future state given an (often expansive) set of assumptions (recently beautifully visualized by Koerth, Bronner, & Mithani, 2020).
  4. Stop predicting, start communicating. Communication is pivotal during a crisis. An abundance of research shows that communicating clearly and honestly is a best practice during a crisis, generally comforting the general public (e.g., Seeger, 2006). Squazzoni et al. (2020) call for transparent communication by stating that “[t]he limitations of models and the policy recommendations derived from them have to be openly communicated and transparently addressed”. We are united in our aim to avert the COVID-19 crisis but should be careful that overconfidence doesn’t erode society’s trust in science. Stating unequivocally that we hope – based on expectations – to avert a crisis by implementing some policy does not preclude altering our course of action when an updated scenario about the future may require us to do so. Modellers should communicate clearly to policy-makers and the general public that this is the role of computational models that are being updated daily.
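Point 2 above can be illustrated with a minimal stochastic SIR sketch (plain Python; all parameter values are hypothetical and chosen only for illustration). Running the same model many times yields a band of scenarios rather than a single number, and it is the band, not the median alone, that deserves to be communicated.

```python
import random

def stochastic_sir(beta, gamma, n=1000, i0=5, days=60, seed=0):
    """One stochastic run of a minimal SIR model; returns the infection peak."""
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        # chance that a given susceptible is infected, given current prevalence
        p_inf = 1 - (1 - beta / n) ** i
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# An ensemble of runs gives a scenario band, not a single 'prediction'.
peaks = sorted(stochastic_sir(beta=0.4, gamma=0.1, seed=k) for k in range(200))
low, mid, high = peaks[10], peaks[100], peaks[189]  # ~5th, 50th, 95th percentiles
print(f"peak infections: median {mid}, 90% scenario band [{low}, {high}]")
```

Reporting the band alongside the median makes the run-to-run variability visible to audiences who would otherwise read a single curve as a certainty.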

Squazzoni et al. (2020) set out the agenda for our community in the coming months and it is an important one. Let’s hope that the expectations from the scenarios in our well-informed models will not fall on deaf ears.

References

Edmonds, B., Polhill, G., & Hales, D. (2019). Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H., & Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html> doi: 10.18564/jasss.3993

Elsenbroich, C., & Badham, J. (2020). Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020.
https://rofasss.org/2020/04/12/focussing-on-our-strengths/

Google. (2020). COVID-19 Mobility Reports. https://www.google.com/covid19/mobility/ (Accessed 15th April 2020)

Hofman, J. M., Sharma, A., & Watts, D. J. (2017). Prediction and Explanation in Social Systems. Science, 355, 486–488. doi: 10.1126/science.aal3856

Koerth, M., Bronner, L., & Mithani, J. (2020, March 31). Why It’s So Freaking Hard To Make A Good COVID-19 Model. FiveThirtyEight. https://fivethirtyeight.com/

Salganik, M. J. et al. (2020). Measuring the Predictability of Life Outcomes with a Scientific Mass Collaboration. PNAS. 201915006. doi: 10.1073/pnas.1915006117

Seeger, M. W. (2006). Best Practices in Crisis Communication: An Expert Panel Process, Journal of Applied Communication Research, 34(3), 232-244.  doi: 10.1080/00909880600769944

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Van Bavel, J. J. et al. (2020). Using social and behavioural science to support COVID-19 pandemic response. PsyArXiv. https://doi.org/10.31234/osf.io/y38m9

Wynants. L., et al. (2020). Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal. BMJ, 369, m1328. doi: 10.1136/bmj.m1328


de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Focussing on our Strengths

By Corinna Elsenbroich and Jennifer Badham

(A contribution to the: JASSS-Covid19-Thread)

Understanding a situation is the precondition for making good decisions. In the extraordinary current situation of a global pandemic, the lack of consensus about a good decision path is evident in the variety of government measures in different countries, in analyses of decisions made, and in debates about how the future will look. What is also clear is how little we understand the situation and the impact of policy choices. We are faced with the complexity of social systems, our ability to only ever partially understand them, and the political pressure to make decisions on partial information.

The JASSS call to arms (Squazzoni et al. 2020) points out the necessity for the ABM community to produce relevant models for this kind of emergency. Whilst we wholly agree with the sentiment that ABM can contribute to the debate and to decision making, we would also like to point out some of the potential pitfalls inherent in the misapplication and misinterpretation of ABM.

  1. Small change, big difference: Given the complexity of the real world, there will be aspects that are better and some that are less well understood. Trying to produce a very large model encompassing several different aspects might be counter-productive as we will mix together well understood aspects with highly hypothetical knowledge. It might be better to have different, smaller models – on the epidemic, the economy, human behaviour etc. each of which can be taken with its own level of validation and veracity and be developed by modellers with subject matter understanding, theoretical knowledge and familiarity with relevant data.
  2. Carving up complex systems: If separate models are developed, then we are necessarily making decisions about the boundaries of our models. For a complex system any carving up can separate interactions that are important, for example the way in which fear of the epidemic can drive protective behaviour thereby reducing contacts and limiting the spread. While it is tempting to think that a “bigger model”, a more encompassing one, is necessarily a better carving up of the system because it eliminates these boundaries, in fact it simply moves them inside the model and hides them.
  3. Policy decisions are moral decisions: The decision of what is the right course to take is a decision for the policy maker with all the competing interests and interdependencies of different aspects of the situation in mind. Scientists are there to provide the best information for the understanding of a situation, and models can be used to understand consequences of different courses of action and the uncertainties associated with that action. Models can be used to inform policy decisions but they must not obfuscate that it is a moral choice that has to be made.
  4. Delaying a decision is making a decision to do nothing: Like any other policy option, a decision to maintain the status quo while gathering further information has its own consequences. The Call to Action (paragraph 1.6) refers to public pressure for immediate responses, but this underplays the pressure arising from other sources. It is important to recognise the logical fallacy: “We must do something. This is something. Therefore we must do this.” However, if there are options available that are clearly better than doing nothing, then it is equally illogical to do nothing.

Instead of trying to compete with existing epidemiological models, ABM could focus on the things it is really good at:

  1. Understanding uncertainty in complex systems resulting from heterogeneity, social influence, and feedback. For the case at hand this means not to build another model of the epidemic spread – there are excellent SEIR models doing that – but to explore how the effect of heterogeneity in the infected population (such as in contact patterns or personal behavior in response to infection) can influence the spread. Other possibilities include social effects such as how fear might spread and influence behaviours of panic buying or compliance with the lockdown.
  2. Build models for the pieces that are missing and couple these to the pieces that exist, thereby enriching the debate about the consequences of policy options by making those connections clear.
  3. Visualise and communicate difficult-to-understand and counterintuitive dynamics. Right now people are struggling to understand exponential growth, the dynamics of social distancing, the consequences of an overwhelmed health system, and the delays between actions and their consequences. It is well established that such fundamentals of systems thinking are difficult (Booth Sweeney and Sterman, 2000). Models such as the simple ones in the Washington Post (Stevens, 2020) or less abstract ones like the routine daily-activity model from Vermeulen et al. (2020) do a wonderful job at this, allowing people to understand how their individual behaviour will contribute to the spread or containment of a pandemic.
  4. Highlight missing data and inform future collection. This unfolding pandemic is being constantly assessed using highly compromised data; for instance, infection rates across countries are largely determined by how much testing is done. The most comparable measure might be death rates, but even there we have reporting delays and omissions. Trying to build models is one way to identify what needs to be known to properly evaluate the consequences of policy options.
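As a sketch of the first strength above, a toy agent model (plain Python; all parameters hypothetical) can show how heterogeneity in contact patterns changes outbreak dynamics even when the mean number of daily contacts is held identical:

```python
import random

def final_size(contacts, p_transmit=0.3, i0=5, days=60, seed=0):
    """Final outbreak size in a toy agent model with per-agent daily contact
    counts; infected agents are infectious for one day, then recover."""
    rng = random.Random(seed)
    n = len(contacts)
    infected = set(rng.sample(range(n), i0))
    recovered = set()
    for _ in range(days):
        new = set()
        for a in infected:
            for _ in range(contacts[a]):  # agent a's random encounters today
                b = rng.randrange(n)
                if b not in infected and b not in recovered and rng.random() < p_transmit:
                    new.add(b)
        recovered |= infected
        infected = new
    return len(recovered | infected)

n = 2000
homogeneous = [5] * n                          # everyone has 5 contacts/day
heterogeneous = [1] * (n - 100) + [81] * 100   # same mean (5), a few super-spreaders

sizes_hom = sorted(final_size(homogeneous, seed=k) for k in range(30))
sizes_het = sorted(final_size(heterogeneous, seed=k) for k in range(30))
print("homogeneous   min/median/max:", sizes_hom[0], sizes_hom[15], sizes_hom[-1])
print("heterogeneous min/median/max:", sizes_het[0], sizes_het[15], sizes_het[-1])
```

With identical mean contact rates, the heterogeneous population typically shows a much wider spread of outcomes, with frequent early extinctions alongside occasional large outbreaks driven by super-spreaders; surfacing exactly this kind of uncertainty is where ABM adds value over aggregate compartmental models.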

The problem we are faced with in this pandemic is one of complexity, not one of ABM, and we must ensure we are honouring the complexity rather than just paying lip service to it. We agree that model transparency, open data collection and interdisciplinary research are important, and want to ensure that all scientific knowledge is used in the best possible way to ensure a positive outcome of this global crisis.

But it is also important to consider the comparative advantage of agent-based modellers. Yes, we have considerable commitment to, and expertise in, open code and data. But so do many other disciplines. Health information is routinely collected in national surveys and administrative datasets, and governments have a great deal of established expertise in health data management. Of course, our individual skills in coding models, data visualisation, and relevant theoretical knowledge can be offered to individual projects as required. But we believe our institutional response should focus on activities where other disciplines are less well equipped, applying systems thinking to understand and communicate the consequences of uncertainty and complexity.

References

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation 23(2), 10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Booth Sweeney, L., & Sterman, J. D. (2000). Bathtub dynamics: initial results of a systems thinking inventory. System Dynamics Review: The Journal of the System Dynamics Society, 16(4), 249-286.

Stevens, H. (2020) Why outbreaks like coronavirus spread exponentially, and how to “flatten the curve”. Washington Post, 14th of March 2020. (accessed 11th April 2020) https://www.washingtonpost.com/graphics/2020/world/corona-simulator/

Vermeulen, B.,  Pyka, A. and Müller, M. (2020) An agent-based policy laboratory for COVID-19 containment strategies, (accessed 11th April 2020) https://inno.uni-hohenheim.de/corona-modell


Elsenbroich, C. and Badham, J. (2020) Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)