
An Institute for Crisis Modelling (ICM) – Towards a resilience center for sustained crisis modeling capability

By Fabian Lorig1*, Bart de Bruin2, Melania Borit3, Frank Dignum4, Bruce Edmonds5, Sinéad M. Madden6, Mario Paolucci7, Nicolas Payette8, Loïs Vanhée4

*Corresponding author
1 Internet of Things and People Research Center, Malmö University, Sweden
2 Delft University of Technology, Netherlands
3 CRAFT Lab, Arctic University of Norway, Tromsø, Norway
4 Department of Computing Science, Umeå University, Sweden
5 Centre for Policy Modelling, Manchester Metropolitan University Business School, UK
6 School of Engineering, University of Limerick, Ireland
7 Laboratory of Agent Based Social Simulation, ISTC/CNR, Italy
8 Complex Human-Environmental Systems Simulation Laboratory, University of Oxford, UK

The Need for an ICM

Most crises and disasters occur suddenly and hit society while it is unprepared. This makes it particularly challenging to react quickly to their occurrence, to adapt to the resulting new situation, to minimize the societal impact, and to recover from the disturbance. A recent example is the COVID-19 crisis, which revealed weak points in our crisis preparedness. Governments tried to put restrictions in place to limit the spread of the virus while ensuring the well-being of the population and, at the same time, preserving economic stability. It quickly became clear that interventions which worked well in some countries did not have the intended effect in others, because the success of interventions depends to a great extent on individual human behavior.

Agent-based Social Simulations (ABSS) explicitly model the behavior of individuals and their interactions in a population, allowing us to better understand social phenomena. Thus, ABSS are well suited for investigating how our society might be affected by different crisis scenarios and how policies might affect the societal impact and consequences of these disturbances. During the COVID-19 crisis in particular, a great number of ABSS were developed to inform policy making around the globe (e.g., Dignum et al. 2020, Blakely et al. 2021, Lorig et al. 2021). However, weaknesses in creating useful and explainable simulations in a short time also became apparent, and there is still a lack of the consistency needed to be better prepared for the next crisis (Squazzoni et al. 2020). In particular, current ABSS development approaches are geared towards simulating one particular situation and validating the simulation using data from that situation. To be prepared for a crisis, one instead needs to simulate many different scenarios for which data might not yet be available. Such simulations also typically need a more interactive interface where stakeholders can experiment with different settings, policies, etc.

For ABSS to become an established, reliable, and well-esteemed method for supporting crisis management, we need to organize and consolidate the available competences and resources. It is not sufficient to react once a crisis occurs but instead, we need to proactively make sure that we are prepared for future disturbances and disasters. For this purpose, we also need to systematically address more fundamental problems of ABSS as a method of inquiry and particularly consider the specific requirements for the use of ABSS to support policy making, which may differ from the use of ABSS in academic research. We therefore see the need for establishing an Institute for Crisis Modelling (ICM), a resilience center to ensure sustained crisis modeling capability.

The vision of starting an Institute for Crisis Modelling was the result of the discussions and working groups at the Lorentz Center workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” that took place in Leiden, Netherlands, from 27 February to 3 March 2023.

Vision of the ICM

“To have tools suitable to support policy actors in situations of
high uncertainty and large consequences that depend on human behavior.”

The ICM consists of a taskforce for quickly and efficiently supporting policy actors (e.g., decision makers, policy makers, policy analysts) in situations of high uncertainty and large consequences that depend on human behavior. For this purpose, the taskforce consists of a larger (informal) network of associates that contribute their knowledge, skills, models, tools, and networks. The group of associates is composed of a core group of multidisciplinary modeling experts (ranging from social scientists and formal modelers to programmers) as well as of partners that can contribute to specific focus areas (such as epidemiology, water management, etc.). The vision of the ICM is to consolidate and institutionalize the use of ABSS as a method for crisis management. Although ABSS competences may be physically distributed over a variety of universities, research centers, and other institutions, the ICM serves as a virtual location that coordinates research developments and provides a basic level of funding and a communication channel for ABSS for crisis management. This not only provides policy actors with a single point of contact, making it easier for them to identify whom to reach when simulation expertise is needed and to develop long-term trust relationships; it also enables us to jointly and systematically evolve ABSS into a valuable and established tool for crisis response. The center combines all necessary resources, competences, and tools to quickly develop new models, to adapt existing models, and to efficiently react to new situations.

To achieve this goal and to evolve and establish ABSS as a valuable tool for policy makers in crisis situations, research is needed in different areas. This includes the collection, development, critical analysis, and review of fundamental principles, theories, methods, and tools used in agent-based modeling. This also includes research on data handling (analysis, sharing, access, protection, visualization), data repositories, ontologies, user-interfaces, methodologies, documentation, and ethical principles. Some of these points are concisely described in (Dignum, 2021, Ch. 14 and 15).

The ICM shall be able to provide a wide portfolio of models, methods, techniques, design patterns, and components required to quickly and effectively facilitate the work of policy actors in crisis situations by providing them with adequate simulation models. To be able to provide specialized support, the institute will coordinate the human effort (e.g., the modelers) and maintain specific focus areas for which expertise and models are available, for instance pandemics, natural disasters, or financial crises. For each of these focus areas, the center will develop different use cases, which ensures and facilitates rapid responses thanks to the availability of models, knowledge, and networks.

Objectives of the ICM

To achieve this vision, there are a series of objectives that a resilience center for sustained crisis modeling capability needs to address:

1) Coordinate and promote research

Providing quick and appropriate support for policy actors in crisis situations requires not only profound knowledge of existing models, methods, tools, and theories but also the systematic development of new approaches and methodologies. This will advance and evolve ABSS so that we are better prepared for future crises, and it will serve as a beacon for organizing ABSS research oriented towards practical applications.

2) Enable trusted connections with policy actors

Sustainable collaborations and interactions with decision-makers and policy analysts, as well as other relevant stakeholders, are a great challenge in ABSS. Getting in contact with the right actors, “speaking the same language”, and having realistic expectations are only some of the common problems that need to be addressed. Thus, the ICM should not only connect to policy actors in times of crises, but have continuous interactions, provide sample simulations, develop use cases, and train the policy actors wherever possible.

3) Enable sustainability of the institute itself

Classic funding schemes are ill-suited to crisis response, which requires fast reactions with always-available resources, while the continuous build-up of knowledge, skills, networks, and technology requires long-term commitment. Sustainable funding is needed to enable such continuity, for which the ICM provides a demarcated, unifying frame.

4) Actively maintain the network of associates

Maintaining a network of experts is challenging because it requires different competences and experiences. PhD candidates, for instance, might have great practical experience in using different simulation frameworks; however, after graduation, some might leave academia and others might move on to positions where they have no opportunity to use their simulation expertise. Thus, new experts need to be acquired continuously to form a resilient and balanced network.

5) Inform policy actors

Even the most advanced and profound models cannot do any good in crisis situations if there is no demand from policy actors. Many modelers perceive a certain hesitation from policy actors regarding the use of ABSS, which might be due to unfamiliarity with the potential benefits and use cases of ABSS, a lack of trust in the method itself, or simply a lack of awareness that ABSS exists. Hence, the center needs to educate policy makers and raise awareness, as well as improve trust in ABSS.

6) Train the next generation of experts

Quickly developing suitable ABSS models in critical situations requires a variety of expertise. In addition to objective 4, the acquisition of associates, it is also of great importance to educate and train the next generation of experts. ABSS research is still a niche and is not taught as an inherent part of the spectrum of methods of most disciplines. The center shall promote and strengthen ABSS education to ensure the training of the next generation of experts.

7) Engage the general public

Finally, the success of ABSS depends not only on the trust of policy actors but also on how it is perceived by the general public. When interventions were developed and recommendations given during the COVID-19 crisis, public trust in the method was a crucial success factor. Moreover, developing realistic models requires the active participation of the general public.

Next steps

For ABSS to become a valuable and established tool for supporting policy actors in crisis situations, we are convinced that our efforts need to be institutionalized. This allows us to consolidate available competences, models, and tools as well as to coordinate research endeavors and the development of new approaches required to ensure a sustained crisis modeling capability.

To further pursue this vision, a Special Interest Group (SIG) on Building ResilienCe with Social Simulations (BRICSS) was established at the European Social Simulation Association (ESSA). Moreover, Special Tracks will be organized at the 2023 Social Simulation Conference (SSC) to bring together interested experts.

However, for this vision to become reality, the next steps towards establishing an Institute for Crisis Modelling consist of bringing together ambitious and competent associates and identifying core funding opportunities for the center. Readers who feel motivated to contribute in any way to this topic are encouraged to contact Frank Dignum, Umeå University, Sweden, or any of the authors of this article.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise. The final report of the workshop as well as more information can be found on the webpage of the Lorentz Center: https://www.lorentzcenter.nl/agent-based-simulations-for-societal-resilience-in-crisis-situations.html

References

Blakely, T., Thompson, J., Bablani, L., Andersen, P., Ouakrim, D. A., Carvalho, N., Abraham, P., Boujaoude, M.A., Katar, A., Akpan, E., Wilson, N. & Stevenson, M. (2021). Determining the optimal COVID-19 policy response using agent-based modelling linked to health and cost modelling: Case study for Victoria, Australia. medRxiv, 2021-01.

Dignum, F., Dignum, V., Davidsson, P., Ghorbani, A., van der Hurk, M., Jensen, M., Kammler C., Lorig, F., Ludescher, L.G., Melchior, A., Mellema, R., Pastrav, C., Vanhee, L. & Verhagen, H. (2020). Analysing the combined health, social and economic impacts of the coronavirus pandemic using agent-based social simulation. Minds and Machines, 30, 177-194. doi: 10.1007/s11023-020-09527-6

Dignum, F. (ed.). (2021) Social Simulation for a Crisis; Results and Lessons from Simulating the COVID-19 Crisis. Springer.

Lorig, F., Johansson, E. & Davidsson, P. (2021) ‘Agent-Based Social Simulation of the Covid-19 Pandemic: A Systematic Review’ Journal of Artificial Societies and Social Simulation 24(3), 5. http://jasss.soc.surrey.ac.uk/24/3/5.html. doi: 10.18564/jasss.4601

Squazzoni, F. et al. (2020) ‘Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action’ Journal of Artificial Societies and Social Simulation 23(2), 10. http://jasss.soc.surrey.ac.uk/23/2/10.html. doi: 10.18564/jasss.4298


Lorig, F., de Bruin, B., Borit, M., Dignum, F., Edmonds, B., Madden, S.M., Paolucci, M., Payette, N. and Vanhée, L. (2023) An Institute for Crisis Modelling (ICM) –
Towards a resilience center for sustained crisis modeling capability. Review of Artificial Societies and Social Simulation, 22 May 2023. https://rofasss.org/2023/05/22/icm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

A Tale of Three Pandemic Models: Lessons Learned for Engagement with Policy Makers Before, During, and After a Crisis

By Emil Johansson1,2, Vittorio Nespeca3, Mikhail Sirenko4, Mijke van den Hurk5, Jason Thompson6, Kavin Narasimhan7, Michael Belfrage1, 2, Francesca Giardini8, and Alexander Melchior5,9

  1. Department of Computer Science and Media Technology, Malmö University, Sweden
  2. Internet of Things and People Research Center, Malmö University, Sweden
  3. Computational Science Lab, University of Amsterdam, The Netherlands
  4. Faculty of Technology, Policy and Management, Delft University of Technology, The Netherlands
  5. Department of Information and Computing Sciences, Utrecht University, The Netherlands
  6. Transport, Health and Urban Design Research Lab, The University of Melbourne, Australia
  7. Centre for Research in Social Simulation, University of Surrey, United Kingdom
  8. Department of Sociology & Agricola School for Sustainable Development, University of Groningen, The Netherlands
  9. Ministry of Economic Affairs and Climate Policy and Ministry of Agriculture, Nature and Food Quality, The Netherlands

Motivation

Pervasive and interconnected crises such as the COVID-19 pandemic, global energy shortages, geopolitical conflicts, and climate change have shown how a stronger collaboration between science, policy, and crisis management is essential to foster societal resilience. As modellers and computational social scientists, we want to help. Several cases of model-based policy support have shown the potential of using modelling and simulation as tools to prepare for, learn from (Adam and Gaudou, 2017), and respond to crises (Badham et al., 2021). At the same time, engaging with policy-makers to establish effective crisis-management solutions remains a challenge for many modellers due to a lack of forums that promote and help develop sustained science-policy collaborations. Equally challenging is finding ways to provide effective solutions under changing circumstances, as is often the case with crises.

Despite existing guidance on how modellers can engage with policy makers, e.g. (Vennix, 1996; Voinov and Bousquet, 2010), this guidance often does not account for the urgency that characterizes crisis response. In this article, we tell the stories of three different models developed during the COVID-19 pandemic in different parts of the world. For each of the models, we draw key lessons for modellers regarding how to engage with policy makers before, during, and after crises. Our goal is to communicate the findings from our experiences to modellers and computational scientists who, like us, want to engage with policy makers to provide model-based policy and crisis-management support. We use selected examples from Kurt Vonnegut’s 2004 lecture on ‘shapes of stories’ alongside an analogy with Lewis Carroll’s Alice in Wonderland as inspiration for these stories.

Boy Meets Girl (Too Late)

A Social Simulation On the Corona Crisis’ (ASSOCC) tale

The perfect love story between social modellers and stakeholders would go like this: they meet (pre-crisis), build a trusting foundation, and then, when a crisis hits, they work together as a team, maybe have some fights, but overcome the crisis together and live happily ever after.

In the case of the ASSOCC project, we as modellers met our stakeholders too late (i.e., while we were already in the middle of the COVID-19 crisis). The stakeholders we aimed for had already met their ‘boy’: epidemiological modellers. For them, we were just one of many scientists showing new models and telling them that ours should be looked at. Although our model showed, for example, that using a track-and-trace app would not help reduce the rate of new COVID-19 infections (as turned out to be the case), our psychological and social approach was novel to them. It was not the right time to explain the importance of integrating these kinds of concepts into epidemiological models, so without this basic trust, they were reluctant to work with us.

The moral of our story is that we should invest in a (working) relationship during non-crisis times, not only to get stakeholders on board during a crisis, but also because such an approach would be helpful for us modelers too. For example, we integrated both social and epidemiological models within the ASSOCC project. We wanted to validate our model against the one used by Oxford University. However, our model choices were not compatible with this type of validation. Had we been working with these researchers before the pandemic, we could have built a proper foundation for validation.

So, our biggest lesson learned is the importance of having a good relationship with stakeholders before a crisis hits, when there is time to get into social models and show the advantages of using them. If you invest in building and consolidating this relationship over time, we promise a happily ever after for every social modeler and stakeholder (until the next crisis hits).

Modeller’s Adventures in Wonderland

A Health Emergency Response in Interconnected Systems (HERoS) tale

If you are a modeler, you are likely to be curious and imaginative, like Alice from “Alice’s Adventures in Wonderland.” You like to think about how the world works and make models that can capture these sometimes weird mechanisms. We are the same. When Covid came, we made a model of a city to understand how its citizens would behave.

But there is more. When Alice first saw the White Rabbit, she found him fascinating. A rabbit with a pocket watch who is running late, what could be more interesting? Similarly, our attention got caught by policymakers who wear waistcoats, who are always busy but can bring change. They must need the model that we made! But why are they running away? Our model is so helpful, just let us explain! Or maybe our model is not good enough?

Yes, we fell down deep into a rabbit hole. Our first encounter with a policymaker didn’t result in a happy “yes, let’s try your model out.” However, we kept knocking on doors. How many did Alice try? But alright, there is one. It seems too tiny. We met with a group of policymakers but had only 10 minutes to explain our large-scale data-driven agent-based-like model. How can we possibly do that? Drink from a “Drink me” bottle, which will make our presentation smaller! Well, that didn’t help. We rushed over all the model complexities too fast and got applause, but that’s it. Ok, we have the next one, which will last 1 hour. Quickly! Eat an “Eat me” cake that will make the presentation longer! Oh, too many unnecessary details this time. To the next venue!

We are in the garden. The garden of crisis response. And it is full of policymakers: Caterpillar, Duchess, Cheshire Cat and Mad Hatter. They talk in riddles: “We need to consult with the Head of Paperclip Optimization and Supply Management,” want different things: “Can you tell us what will be the impact of a curfew. Hmm, yesterday?” and shift responsibility from one to another. Thankfully there is no Queen of Hearts who would order our beheading.

If the world of policymaking is complex, then the world of policymaking during a crisis is a wonderland. And we all live in it. We must outgrow our obsession with building better models, learn about the wonderland’s fuzzy inhabitants, and find a way to work together instead. Constant interaction and a better understanding of each other’s needs must be at the centre of modeler-policymaker relations.

“But I don’t want to go among mad people,” Alice remarked.

“Oh, you can’t help that,” said the Cat: “we’re all mad here. I’m mad. You’re mad.”

“How do you know I’m mad?” said Alice.

“You must be,” said the Cat, “or you wouldn’t have come here.”

Lewis Carroll, Alice in Wonderland

Cinderella – A city’s tale

Everyone thought Melbourne was just too ugly to go to the ball…..until a little magic happened.

Once upon a time, the bustling Antipodean city of Melbourne, Victoria, found itself in the midst of a dark and disturbing period. While all other territories on the great continent of Australia had rid themselves of the dreaded COVID-19 virus, Melbourne remained besieged. Illness and death coursed through the land.

Shunned, the city faced scorn and derision. It was dirty. Its sisters called it a “plague state” and the people felt great shame and sadness as their family, friends and colleagues continued to fall to the virus. All they wanted was a chance to rejoin their families and countryfolk at the ball. What could they do?

Though downtrodden, the kind-hearted and resilient residents of Melbourne were determined to regain control over their lives. They longed for a glimmer of sunshine on these long, gloomy days – a touch of magic, perhaps? They turned to their embattled leaders for answers. Where was their Fairy Godmother now?

In this moment of despair, a group of scientists offered a gift in the form of a powerful agent-based model that was running on a supercomputer. This model, the scientists said, might just hold the key to transforming the fate of the city from vanquished to victor (Blakely et al., 2020). What was this strange new science? This magical black box?

Other states and scientists scoffed. “You can never achieve this!”, they said. “What evidence do you have? These models are not to be trusted. Such a feat as to eliminate COVID-19 at this scale has never been done in the history of the world!” But what of it? Why should history matter? Quietly and determinedly, the citizens of Melbourne persisted. They doggedly followed the plan.

Deep down, even the scientists knew it was risky. People’s patience and enchantment with the mystical model would not last forever. Still, this was Melbourne’s only chance. They needed to eliminate the virus so it would no longer have a grip on their lives. The people bravely stuck to the plan and each day – even when schools and businesses began to re-open – the COVID numbers dwindled from what seemed like impossible heights. Each day they edged down…

and down…

and down…until…

Finally! As the clock struck midnight, the people of Melbourne achieved the impossible: they had defeated COVID-19 by eliminating transmission. With the help of the computer model’s magic, illness and death from the virus stopped. Melbourne had triumphed, emerging stronger and more united than ever before (Thompson et al., 2022a).

From that day forth, Melbourne was internationally celebrated as a shining example of resilience, determination, and the transformative power of hope. Tens of thousands of lives were saved – and after enduring great personal and community sacrifice, its people could once again dance at the ball.

But what was the fate of the scientists and the model? Did such an experience change the way agent-based social simulation was used in public health? Not really. The scientists went back to their normal jobs and the magic of the model remained just that – magic. Its influence vanished like fairy dust on a warm Summer’s evening.

Even to this day the model and its impact largely remains a mystery (despite over 10,000 words of ODD documentation). Occasionally, policy-makers or researchers going about their ordinary business might be heard to say, “Oh yes, the model. The one that kept us inside and ruined the economy. Or perhaps it was the other way around? I really can’t recall – it was all such a blur. Anyway, back to this new social problem – Shall we attack it with some big data and ML techniques?”.

The fairy dust has vanished but the concrete remains.

And in fairness, while agent-based social simulation remains mystical and our descriptions opaque, we cannot begrudge others for ever choosing concrete over dust (Thompson et al, 2022b).

Conclusions

So what is the moral of these tales? We consolidate our experiences into these main conclusions:

  • No connection means no impact. If modellers wish for their models to be useful before, during or after a crisis, then it is up to them to start establishing a connection and building trust with policymakers.
  • The window of opportunity for policy modelling during crises can be narrow, perhaps only a matter of days. Capturing it requires both that we can supply a model within the timeframe (impossible as it may appear) and that our relationship with stakeholders is already established.
  • Engagement with stakeholders requires knowledge and skills that might be too much to ask of modelers alone, including project management, communication with individuals without a technical background, and insight into the policymaking process.
  • Being useful does not always mean being excellent. A good model is one that is useful. By investing more in building relationships with policymakers and learning about each other, we have a bigger chance of providing the needed insight. Such a shift, however, is radical and requires us to give up our obsession with the models and engage with the fuzziness of the world around us.
  • If we cannot communicate our models effectively, we cannot expect to build trust with end-users over the long term, whether they be policy-makers or researchers. Individual models – and agent-based social simulation in general – need a better understanding that can only be achieved through greater transparency and communication, however that is accomplished.

As taxing, time-consuming and complex as the process of making policy impact with simulation models might be, it is very much a fight worth fighting; perhaps even more so during crises. Assuming our models would have a positive impact on the world, not striving to make this impact could be considered admitting defeat. Making models useful to policymakers starts with admitting the complexity of their environment and willingness to dedicate time and effort to learn about it and work together. That is how we can pave the way for many more stories with happy endings.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise.

References

Adam, C. and Gaudou, B. (2017) ‘Modelling Human Behaviours in Disasters from Interviews: Application to Melbourne Bushfires’ Journal of Artificial Societies and Social Simulation 20(3), 12. http://jasss.soc.surrey.ac.uk/20/3/12.html. doi: 10.18564/jasss.3395

Badham, J., Barbrook-Johnson, P., Caiado, C. and Castellani, B. (2021) ‘Justified Stories with Agent-Based Modelling for Local COVID-19 Planning’ Journal of Artificial Societies and Social Simulation 24 (1) 8 http://jasss.soc.surrey.ac.uk/24/1/8.html. doi: 10.18564/jasss.4532

Crammond, B. R., & Kishore, V. (2021). The probability of the 6‐week lockdown in Victoria (commencing 9 July 2020) achieving elimination of community transmission of SARS‐CoV‐2. The Medical Journal of Australia, 215(2), 95-95. doi:10.5694/mja2.51146

Thompson, J., McClure, R., Blakely, T., Wilson, N., Baker, M. G., Wijnands, J. S., … & Stevenson, M. (2022a). Modelling SARS‐CoV‐2 disease progression in Australia and New Zealand: an account of an agent‐based approach to support public health decision‐making. Australian and New Zealand Journal of Public Health, 46(3), 292-303. doi:10.1111/1753-6405.13221

Thompson, J., McClure, R., Scott, N., Hellard, M., Abeysuriya, R., Vidanaarachchi, R., … & Sundararajan, V. (2022b). A framework for considering the utility of models when facing tough decisions in public health: a guideline for policy-makers. Health Research Policy and Systems, 20(1), 1-7. doi:10.1186/s12961-022-00902-6

Voinov, A., & Bousquet, F. (2010). Modelling with stakeholders. Environmental modelling & software, 25(11), 1268-1281. doi:10.1016/j.envsoft.2010.03.007

Vennix, J.A.M. (1996). Group Model Building: Facilitating Team Learning Using System Dynamics. Wiley.

Vonnegut, K. (2004). Lecture to Case College. https://www.youtube.com/watch?v=4_RUgnC1lm8


Johansson, E., Nespeca, V., Sirenko, M., van den Hurk, M., Thompson, J., Narasimhan, K., Belfrage, M., Giardini, F. and Melchior, A. (2023) A Tale of Three Pandemic Models: Lessons Learned for Engagement with Policy Makers Before, During, and After a Crisis. Review of Artificial Societies and Social Simulation, 15 May 2023. https://rofasss.org/2023/05/15/threepandemic


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Towards an Agent-based Platform for Crisis Management

By Christian Kammler1, Maarten Jensen1, Rajith Vidanaarachchi2 and Cezara Păstrăv1

  1. Department of Computer Science, Umeå University, Sweden
  2. Transport, Health, and Urban Design (THUD) Research Lab, The University of Melbourne, Australia

“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” — John Woods

1       Introduction

Agent-based modelling can be a valuable tool for gaining insight into crises [3], both during and before them, to increase resilience. However, in the current state of the art, models have to be built up from scratch, which is not well suited to a crisis situation as it hinders quick responses. Consequently, the models do not play the central supportive role that they could. Not only is it hard to compare existing models (given the absence of standards) and assess their quality, but the most widespread toolkits, such as NetLogo [6], MESA (Python) [4], Repast (Java) [1,5], or Agents.jl (Julia) [2], are specific to the modelling field and lack the platform support necessary to empower policy makers to use the models (see Figure 1).

Fig. 1. The platform sits in the middle as a connector between the code and the model, and as the interaction point for the user. It must not require any expert knowledge.

While some of these issues are systemic within the field of ABM (Agent-Based Modelling) itself, we aim to alleviate some of them in this particular context by using a platform purpose-built for developing and using ABM in a crisis. To do so, we view the problem through a multi-dimensional space consisting of the dimensions A–F:

  • A: Back-end to front-end interactivity
  • B: User and stakeholder levels
    – Social simulators to domain experts to policymakers
    – Skills and expertise in coding, modelling and manipulating a model
  • C: Crisis levels (Risk, Crisis, Resilience – also identified as Pre-Crisis, In-Crisis, Post-Crisis)
  • D: Language specific to language independent
  • E: Domain-specific to domain-independent (e.g., flooding, pandemic, climate change)
  • F: Required iteration level (Instant, rapid, slow)

A platform can now be viewed as a vector within this space. While all of these axes require in-depth research (for example in terms of correlation or where existing platforms fit), we chose to focus on the functionalities we believe would be the most relevant in ABM for crises.
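The idea of a platform as a vector in this space can be made concrete with a small sketch. All names and the numeric encodings below are hypothetical illustrations, not part of any existing platform:

```python
# A minimal sketch (all names and encodings hypothetical) of describing a
# platform as a vector along the six dimensions A-F listed above.
from dataclasses import dataclass
from enum import Enum

class CrisisPhase(Enum):  # axis C: which phase the platform targets
    PRE_CRISIS = "risk"
    IN_CRISIS = "crisis"
    POST_CRISIS = "resilience"

@dataclass
class PlatformProfile:
    interactivity: float          # axis A: 0 = back-end only, 1 = full front-end
    user_level: float             # axis B: 0 = social simulator, 1 = policymaker
    crisis_phase: CrisisPhase     # axis C
    language_independence: float  # axis D: 0 = language-specific, 1 = independent
    domain_independence: float    # axis E: 0 = domain-specific, 1 = independent
    iteration_speed: float        # axis F: 0 = slow, 1 = instant

# e.g., a NetLogo-style toolkit: interactive but tied to its own language
netlogo_like = PlatformProfile(
    interactivity=0.8, user_level=0.3, crisis_phase=CrisisPhase.PRE_CRISIS,
    language_independence=0.1, domain_independence=0.9, iteration_speed=0.6)
```

Comparing such profiles would be one way to investigate where existing platforms sit in the space and which regions are empty.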

2       Rapid Development

During a crisis, time is compressed (mainly affecting axes C and F): instant and rapid iterations are necessary, while slow iterations are not suitable. As the crisis develops, the model may need to be adjusted to quickly absorb new data, actors, events, and response strategies, leading to new scenarios that need to be modelled and simulated. In this environment, models need to be built with reusability and rapid versioning in mind from the beginning; otherwise every new change makes the model more unstable and less trustworthy.

While a suite of best practices exists in general software development, they are not widely used in the agent-based modelling community. The platform needs a coding environment that favors modular, reusable code, allows easy storage and sharing of such modules in well-organized libraries, and makes it easy to integrate existing modules with new code.
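The kind of module library described here could be sketched with a simple registry pattern. Everything below (the registry, the `evacuation` module, the toy behaviour) is a hypothetical illustration of the design, not an existing API:

```python
# A minimal sketch (all names hypothetical) of a shared module library:
# model components register themselves under a name and can then be
# discovered and composed without editing existing code.
from typing import Callable, Dict

MODULE_REGISTRY: Dict[str, Callable] = {}

def register(name: str):
    """Decorator adding a model component to the shared library."""
    def wrap(fn: Callable) -> Callable:
        MODULE_REGISTRY[name] = fn
        return fn
    return wrap

@register("evacuation")
def evacuation_step(agent_state: dict) -> dict:
    # toy behaviour: agents in a flooded cell move to shelter
    if agent_state.get("flooded"):
        agent_state["location"] = "shelter"
    return agent_state

# A new model composes existing modules by name instead of rewriting them.
step = MODULE_REGISTRY["evacuation"]
print(step({"flooded": True, "location": "home"}))  # agent ends up in shelter
```

A real platform would add namespacing, dependency declarations, and sharing across institutions on top of a mechanism like this.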

This modularity does not only help with the right side of Figure 1; we can use it to help with the left side at the same time. The conceptual model can be part of the respective module, making it quick to determine whether a module is relevant and to understand what it does. Furthermore, it can be used to create a top-level, drag-and-drop model-building environment that allows rapid changes without having to write code (given that we take care of the interface properly).

Having the code and the conceptual model together would also lower the effort required to review these modules. The platform can further help with this task by keeping track of which modules have been reviewed and by versioning the modules, as they can be annotated accordingly. It has to be noted, however, that such a system does not guarantee a trustworthy model, even though it might be up to date in terms of versioning.
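The review-tracking and versioning idea could look roughly as follows. All field names are hypothetical, and, as noted, a "reviewed" flag does not by itself make a model trustworthy:

```python
# A minimal sketch (all fields hypothetical) of per-module review metadata:
# each module carries its conceptual description, a version, and a record of
# which versions have been reviewed, so a stale review is easy to spot.
from dataclasses import dataclass, field

@dataclass
class ModuleRecord:
    name: str
    version: str
    conceptual_model: str            # short description shipped with the code
    reviewed_in_versions: set = field(default_factory=set)

    def mark_reviewed(self) -> None:
        self.reviewed_in_versions.add(self.version)

    @property
    def review_is_current(self) -> bool:
        return self.version in self.reviewed_in_versions

m = ModuleRecord("evacuation", "1.2.0", "Agents in flooded cells seek shelter.")
m.mark_reviewed()
assert m.review_is_current
m.version = "1.3.0"   # the code changed: the review status is stale again
assert not m.review_is_current
```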

3       Model transparency

Another key factor we want to focus on is the stakeholder dimension (axis B). These people are not experts in terms of models (mainly the left side of Figure 1) and thus need extensive support to be empowered to use the simulation in a way that is meaningful for them. While for the visualization side (the how?) we can use insights from Data Visualization, the why side is not that easy.

In a crisis, it is crucial to quickly determine why the model behaves in a certain way in order to interpret the results. Here, the platform can help by offering tools to build model narratives (at agent, group, or whole-population level), to detect events and trends, and to compare model behavior between runs. We can take inspiration from the larger software development field for useful ideas on how to visually track model elements, log their behavior, or raise flags when certain conditions or events are detected. However, we also have to be careful here, as we can easily drift towards the technical solution and away from the stakeholder and policy maker. Therefore, more research has to be done on what support policy makers actually need. One avenue here is techniques from data storytelling.
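The event-detection and flag-raising support mentioned above might be sketched like this. The monitor class and the shelter example are hypothetical illustrations of the idea:

```python
# A minimal sketch (names hypothetical) of run monitoring: user-defined
# conditions are checked each tick, and matches are recorded as flags,
# giving a simple narrative of why a run behaved the way it did.
from typing import Callable, Dict, List, Tuple

class RunMonitor:
    def __init__(self) -> None:
        self.conditions: List[Tuple[str, Callable[[Dict], bool]]] = []
        self.flags: List[Tuple[int, str]] = []

    def watch(self, label: str, condition: Callable[[Dict], bool]) -> None:
        self.conditions.append((label, condition))

    def check(self, tick: int, state: Dict) -> None:
        for label, cond in self.conditions:
            if cond(state):
                self.flags.append((tick, label))

monitor = RunMonitor()
monitor.watch("shelters full", lambda s: s["sheltered"] >= s["capacity"])

# toy run: the sheltered population grows each tick until capacity is hit
for tick, sheltered in enumerate([10, 40, 80, 100]):
    monitor.check(tick, {"sheltered": sheltered, "capacity": 100})

print(monitor.flags)  # [(3, 'shelters full')]
```

Comparing the flag timelines of two runs is then a simple first step towards the between-run comparisons the text calls for.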

4       The way forward

What this platform will look like depends on the approaches we take going forward. We think that the following two questions are central (also to prompt further research):

  1. What are relevant roles that can be identified for a platform?
  2. Given a role for the platform, where should it exist within the space described, and what attributes/characteristics should it have?

While these questions are key to identifying whether existing platforms can be extended and shaped the way we need them, or whether we need to build a sandbox from scratch, we strongly advocate for an open-source approach. An open-source approach can not only draw on the range of expertise spread across the field, but also alleviate some of the trust challenges. One of the main challenges is that a trustworthy, well-curated model base with different modules does not yet exist. As such, the platform should aim first to aid in building this shared resource and add more related functionality as it becomes relevant. As for model-tracking tools, we should aim for simple tools first and build more complex functionality on top of them later.

A starting point can be to build modules for existing crises, such as earthquakes or floods, where it is possible to pre-identify most of the modelling needs, the level of stakeholder engagement, the level of policymaker engagement, etc.

With this, we can establish the process of open-source modelling, learn how to integrate new knowledge quickly, and potentially be better prepared for unknown crises in the future.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise.

References

  1. Collier, N., North, M.: Parallel agent-based simulation with Repast for high performance computing. SIMULATION 89(10), 1215–1235 (2013), https://doi.org/10.1177/0037549712462620
  2. Datseris, G., Vahdati, A.R., DuBois, T.C.: Agents.jl: a performant and feature-full agent-based modeling software of minimal code complexity. SIMULATION 0(0), 003754972110688 (2022), https://doi.org/10.1177/00375497211068820
  3. Dignum, F. (ed.): Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer International Publishing, Cham (2021)
  4. Kazil, J., Masad, D., Crooks, A.: Utilizing python for agent-based modeling: The mesa framework. In: Thomson, R., Bisgin, H., Dancy, C., Hyder, A., Hussain, M. (eds.) Social, Cultural, and Behavioral Modeling. pp. 308–317. Springer International Publishing, Cham (2020)
  5. North, M.J., Collier, N.T., Ozik, J., Tatara, E.R., Macal, C.M., Bragen, M., Sydelko, P.: Complex adaptive systems modeling with Repast Simphony. Complex Adaptive Systems Modeling 1(1), 3 (March 2013), https://doi.org/10.1186/2194-3206-1-3
  6. Wilensky, U.: Netlogo. http://ccl.northwestern.edu/netlogo/, Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1999), http://ccl.northwestern.edu/netlogo/

Kammler, C., Jensen, M., Vidanaarachchi, R. and Păstrăv, C. (2023) Towards an Agent-based Platform for Crisis Management. Review of Artificial Societies and Social Simulation, 10 May 2023. https://rofasss.org/2023/05/10/abm4cm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Designing Crisis Models: Report of Workshop Activity and Prospectus for Future Research

By: Mike Bithell1, Giangiacomo Bravo2, Edmund Chattoe-Brown3, René Mellema4, Harko Verhagen5 and Thorid Wagenblast6

  1. Formerly Department of Geography, University of Cambridge
  2. Center for Data Intensive Sciences and Applications, Linnaeus University
  3. School of Media, Communication and Sociology, University of Leicester
  4. Department of Computing Science, Umeå Universitet
  5. Department of Computer and Systems Sciences, Stockholm University
  6. Department of Multi-Actor Systems, Delft University of Technology

Background

This piece arose from a Lorentz Center (Leiden) workshop on Agent Based Simulations for Societal Resilience in Crisis Situations held from 27 February to 3 March 2023 (https://www.lorentzcenter.nl/agent-based-simulations-for-societal-resilience-in-crisis-situations.html). During the week, our group was tasked with discussing requirements for Agent-Based Models (hereafter ABM) that could be useful in a crisis situation. Here we report on our discussion and propose some key challenges for platform support of such models.

Introduction

When it comes to crisis situations, modelling can provide insights into which responses are best, how to avoid further negative spill-over consequences of policy interventions, and which arrangements could be useful to increase present or future resilience. This approach can be helpful in preparation for a crisis situation, for management during the event itself, or in the post-crisis evaluation of response effectiveness. Further, evaluation of performance in these areas can also lead to subsequent progressive improvement of the models themselves. However, to serve these ends, models need to be built in the most effective way possible. Part of the goal of this piece is to outline what might be needed to make such models effective in various ways and why: reliability, validity, flexibility and so on. Often, diverse models seem to be built ad hoc when the crisis situation occurs, putting the modellers under time pressure, which can lead to important system aspects being neglected (https://www.jasss.org/24/4/reviews/1.html). This is part of a more general tendency, contrary to, say, the development of climate modelling, to merely proliferate ABM rather than progress them (https://rofasss.org/2021/05/11/systcomp/). Therefore, we propose some guidance about how to make models for crises that may better inform policy makers about the potential effects of the policies under discussion. Furthermore, we draw attention to the fact that modelling may need to be just part of a wider process of crisis response that occurs both before and after the crisis and not just while it is happening.

Crisis and Resilience: A Working Definition

A crisis can be defined as an initial (relatively stable) state that is disrupted in some way (e.g., through a natural disaster such as a flood) and after some time reaches a new relatively stable state, possibly inferior (or rarely superior – as when an earthquake leads to reconstruction of safer housing) to the initial one (see Fig. 1).

fig1

Fig. 1: Potential outcomes of a disruption of an initial (stable) state.

While some data about daily life may be routinely collected for the initial state and perhaps as the disruption evolves, it is rarely known how the disruption will affect the initial state and how it will subsequently evolve into the new state. (The non-feasibility of collecting much data during a crisis may also draw attention to methods that can more effectively be used, for example, oral history data – see, for example, Holmes and Pilkington 2011.) ABM can help increase the understanding of those changes by providing justified – i.e. process-based – scenarios under different circumstances. Based on this definition, and justifying it, we can identify several distinct senses of resilience (for a wider theoretical treatment see, for example, Holling 2001). We decided to use the example of flooding because the group did not have much pre-existing expertise and because it seemed like a fairly typical kind of crisis from which to draw potentially generalisable conclusions. However, it should be recognised that not all crises are “known” and building effective resilience capacity for “unknown” crises (like alien invasion) remains an open challenge.

Firstly, a system can be resilient if it is able to return quickly to a desirable state after disruption. For example, a system that allows education and healthcare to become available again in at least their previous forms soon after the water goes down.

Secondly, however, the system is not resilient if it cannot return to anything like its original state (i.e. the society was only functioning at a particular level because it happened that there was no flood in a flood zone), usually owing to resource constraints, poor governance and persistent social inequality. (It is probably only higher-income countries that can afford to “build back better” after a crisis. All low-income countries can often do is hope crises do not happen.) This raises the possibility that more should be invested in resilience without immediate payoff, to create a state you can actually return to (or, better, one where vulnerability is reduced) rather than a “Fool’s Paradise” state. This would involve comparison of future welfare streams and potential trade-offs under different investment strategies.

Thirdly, and probably more usually, the system can be considered resilient if it can deliver alternative modes of provision (for example of food) during the crisis. People can no longer go shopping when they want but they can be fed effectively at local community centres which they are nonetheless able to reach despite the flood water.

The final insight that we took from these working definitions is that daily routines operate over different time scales and it may be these scales that determine the unfolding nature of different crises. For example, individuals in a flood area must immediately avoid drowning. They will very rapidly need clean water to drink and food to eat. Soon after, they may well have shelter requirements. After that, there may be a need for medical care and only in the rather longer term for things like education, restored housing and community infrastructure.

Thus, an effective response to a crisis is one that is able to provide what is needed over the timescale at which it occurs (initially escape routes or evacuation procedures, then distribution of water and food and so on), taking into account different levels of need. It is an inability to do this (or one goal conflicting with another as when people escape successfully but in a way that means they cannot then be fed) which leads to the various causes of death (and, in the longer term things like impoverishment – so ideally farmers should be able to save at least some of their livestock as well as themselves) like drowning, starvation, death by waterborne diseases and so on. The effects of some aspects of a crisis (like education disruption and “learning loss”, destruction of community life and of mental health or loss of social capital) may be very long term if they cannot be avoided (and there may therefore be a danger of responding mainly to the “most obvious” effects which may not ultimately be the most damaging).
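The Fig. 1 picture of disruption and recovery can be made concrete with a toy sketch. The function, thresholds, and trajectories below are purely illustrative, chosen to distinguish the senses of resilience discussed above:

```python
# A toy sketch (illustrative only) of the Fig. 1 outcomes: a system state is
# disrupted and settles at a new level, which we classify relative to the
# initial state.
def classify_outcome(initial: float, trajectory: list) -> str:
    """Label the post-crisis state reached at the end of a trajectory."""
    final = trajectory[-1]
    if final >= initial * 1.05:
        return "superior"    # e.g. "build back better" after reconstruction
    if final >= initial * 0.95:
        return "recovered"   # returned to (roughly) the initial state
    return "inferior"        # stuck below the pre-crisis level

pre_crisis = 1.0
flood = [1.0, 0.3, 0.5, 0.7, 0.98]        # quick, near-complete recovery
unrecovered = [1.0, 0.3, 0.4, 0.45, 0.5]  # resource-constrained recovery

assert classify_outcome(pre_crisis, flood) == "recovered"
assert classify_outcome(pre_crisis, unrecovered) == "inferior"
```

In a real model the "state" would be multi-dimensional (food, shelter, health, education), each component recovering on the different timescales described in the text.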

Preparing for the Model

To deal effectively with a crisis, it is crucial not to “just start building an ABM”, but to approach construction in a structured manner. First, the initial state needs to be defined and modelled. As well as making use of existing data (and perhaps identifying the need to collect additional data going forward, see Gilbert et al. 2021), this is likely to involve engaging with stakeholders, including policy makers, to collect information, for example, on decision-making procedures. Ideally, the process will be carried out in advance of the crisis and regularly updated if changes in the represented system occur (https://rofasss.org/2018/08/22/mb/). This idea is similar to a digital twin https://www.arup.com/perspectives/digital-twin-managing-real-flood-risks-in-a-virtual-world or the “PetaByte Playbook” suggested by Joshua Epstein – Epstein et al. 2011. Second, as much information as possible about potential disruptions should be gathered. This is the sort of data often revealed by emergency planning exercises (https://www.osha.gov/flood), for example involving flood maps, climate/weather assessments (https://check-for-flooding.service.gov.uk/)  or insight into general system vulnerabilities – for example the effects of parts of the road network being underwater – as well as dissections of failed crisis responses in the particular area being modelled and elsewhere (https://www.theguardian.com/environment/2014/feb/02/flooding-winter-defences-environment-climate-change). Third, available documents such as flood plans (https://www.peterborough.gov.uk/council/planning-and-development/flood-and-water-management/water-data) should be checked to get an idea of official crisis response (and also objectives, see below) and thus provide face validity for the proposed model. 
It should be recognised that certain groups, often disadvantaged, may be engaging in activities – like work – “under the radar” of official data collection: https://www.nytimes.com/2021/09/27/nyregion/hurricane-ida-aid-undocumented-immigrants.html. Engaging with such communities as well as official bodies is likely to be an important aspect of successful crisis management (e.g. Mathias et al. 2020). The general principle here is to do as much effective work as possible before any crisis starts and to divide what can be done in readiness from what can only be done during or after a crisis.

Scoping the Model

As already suggested above, one thing that can and should be done before the crisis is to scope the model for its intended use. This involves reaching a consensus on who the model and its outputs are for and what it is meant to achieve. There is some tendency in ABM for modellers to assume that whatever model they produce (even if they don’t attend much to a context of data or policy) has to be what policy makers and other users must need. Besides asking policy makers, this may also require the negotiation of power relationships so that the needs of the model don’t just reflect the interests/perspective of politicians but also numerous and important but “politically weak” groups like small-scale farmers or local manufacturers. Scoping refers not just to technical matters (Is the code effectively debugged? What evidence can be provided that the policy makers should trust the model?) but also to “softer” preparations like building trust and effective communication with the policy makers themselves. This should probably focus any literature-reviewing exercise on flood management using models that are at least to some extent backed by participatory approaches (for example, work like Mehryar et al. 2021 and Gilligan et al. 2015). It would also be useful to find some way to get policy makers to respond effectively to the existing set of models to direct what can most usefully be “rescued” from them in a user context. (The models that modellers like may not be the ones that policy makers find most useful.)

At the same time, participatory approaches face the unavoidable challenge of interfacing with the scientific process. No matter how many experts believe something to be true, the evidence may nonetheless disagree. So another part of the effective collaboration is to make sure that, whatever its aims, the model is still constructed according to an appropriate methodology (for example being designed to answer clear and specific research questions). This aim obliges us to recognise that the relationship between modellers and policy makers may not just involve evidence and argument but also power, so that modellers then have to decide what compromises they are willing to make to maintain a relationship. In the limit, this may involve negotiating the popular perception that policy makers only listen to academics when they confirm decisions that have already been taken for other reasons. But the existence of power also suggests that modelling may not only be effective with current governments (the most “obvious” power source) but also with opposition parties, effective lobbyists, and NGOs, in building bridges to enhance the voice of “the academic community” and so on.

Finally, one important issue may be to consider whether “the model” is a useful response at all. In order to make an effective compromise (or meet various modelling challenges) it might be necessary to design a set of models with different purposes and scales and consider how/whether they should interface. The necessity for such integration in human-environment systems is already widely recognised (see for example Luus et al. 2013) but it may need to be adjusted more precisely to crisis management models. This is also important because it may be counter-productive to reify policy makers and equate them to the activities of the central government. It may be more worthwhile to get emergency responders or regional health planners, NGOs or even local communities interested in the modelling approach in the first instance.

Large Scale Issues of Model Design

Much as with the research process generally, effective modelling has to proceed through a sequence of steps, each one dependent on the quality of the steps before it. Having characterised a crisis (and looked at existing data/modelling efforts) and achieved a workable measure of consensus regarding who the model is for and (broadly) what it needs to do, the next step is to consider large scale issues of model design (as opposed, for example, to specific details of architecture or coding.)

Suppose, for example, that a model was designed to test scenarios to minimise the death toll in the flooding of a particular area so that governments could focus their flood prevention efforts accordingly (build new defences, create evacuation infrastructure, etc.) The sort of large scale issues that would need to be addressed are as follows:

Model Boundaries: Does it make sense just to model the relevant region? Can deaths within the region be clearly distinguished from those outside it (for example people who escape to die subsequently)? Can the costs and benefits of specific interventions similarly be limited to being clearly inside a model region? What about the extent to which assistance must, by its nature, come from outside the affected area? In accordance with general ABM methodology (Gilbert and Troitzsch 2005), the model needs to represent a system with a clearly and coherently specified “inside” and “outside” to work effectively. This is another example of an area where there will have to be a compromise between the sway of policy makers (who may prefer a model that can supposedly do everything) and the value of properly followed scientific method.

Model Scale: This will also inevitably be a compromise between what is desirable in the abstract and what is practical (shaped by technical issues). Can a single model run with enough agents to unfold the consequences of a year after a flood over a whole region? If the aim is to consider only deaths, then does it need to run that long or that widely? Can the model run fast enough (and be altered fast enough) to deliver the answers that policy makers need over the time scale at which they need them? This kind of model practicality, when compared with the “back of an envelope” calculations beloved of policy advisors, is also a strong argument for progressive modelling (where efforts can be combined in one model rather than diffused among many.)

Model Ontology: One advantage of the modelling process is to serve as a checklist for necessary knowledge. For example, we have to assume something about how individuals make decisions when faced with rising water levels. Ontology is about the evidence base for putting particular things in models or modelling in certain ways. For example, on what grounds do we build an ABM rather than a System Dynamics model beyond doing what we prefer? On what grounds are social networks to be included in a model of emergency evacuation (for example that people are known to rescue not just themselves but their friends and kin in real floods)? Based on wider experience of modelling, the problems here are that model ontologies are often non-empirical, that the assumptions of different models contradict each other and so on. It is unlikely that we already have all the data we need to populate these models but we are required for their effectiveness to be honest about the process where we ideally proceed from completely “made up” models to steadily increasing quality/consensus of ontology. This will involve a mixture of exploring existing models, integrating data with modelling and methods for testing reliability, and perhaps drawing on wider ideas (like modularisation where some modellers specialise in justifying cognitive models, others in transport models and so on). Finally, the ontological dimension may have to involve thinking effectively about what it means to interface a hydrological model (say) with a model of human behaviour and how to separate out the challenges of interfacing the best justified model of each kind. This connects to the issue above about how many models we may need to build an effective compromise with the aims of policy makers.

It should be noted that these dimensions of large scale design may interact. For example, we may need less fine grained models of regions outside the flooded area to understand the challenges of assistance (perhaps there are infrastructure bottlenecks unrelated to the flooding) and escape (will we be able to account for and support victims of the flood who scatter to friends and relatives in other areas? Might escapees create spill over crises in other regions of a low income country?). Another example of such interactions would be that ecological considerations might not apply to very short term models of evacuation but might be much more important to long term models of economic welfare or environmental sustainability in a region. It is instructive to recall that in Ancient Egypt, it was the absence of Nile flooding that was the disaster!

Technical Issues: One argument in favour of trying to focus on specific challenges (like models of flood crises suitable for policy makers) is that they may help to identify specific challenges to modelling or innovations in technique. For example, if a flooding crisis can be usefully divided into phases (immediate, medium and long term) then we may need sets of models each of which creates starting conditions for the next. We are not currently aware of any attention paid to this “model chaining” problem. Another example is the capacity that workshop participants christened “informability”, the ability of a model to easily and quickly incorporate new data (and perhaps even new behaviours) as a situation unfolds. There is a tendency, not always well justified, for ABM to be “wound up” with fixed behaviours and parameters and just left to run. This is only sometimes a good approximation to the social world.
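The "model chaining" and "informability" ideas can be sketched together. The phase functions and the chaining interface below are hypothetical; a real chain would pass far richer state between purpose-built phase models:

```python
# A minimal sketch (hypothetical interfaces) of model chaining: each phase
# model runs on the state produced by the previous one, and between phases
# the chain can absorb new observations ("informability").
def evacuation_phase(state: dict) -> dict:
    # toy immediate-phase model: 90% of the population evacuates
    return dict(state, evacuated=int(state["population"] * 0.9))

def relief_phase(state: dict) -> dict:
    # toy medium-term model: relief reaches everyone who evacuated
    return dict(state, fed=state["evacuated"])

def run_chain(state: dict, phases: list, updates=None) -> dict:
    updates = updates or []
    for i, phase in enumerate(phases):
        state = phase(state)
        if i < len(updates):        # fold in fresh data as it arrives
            state.update(updates[i])
    return state

result = run_chain({"population": 1000},
                   [evacuation_phase, relief_phase],
                   updates=[{"roads_open": False}, {}])
print(result["evacuated"], result["fed"])  # 900 900
```

Even this toy version surfaces the real design questions: what state must cross a phase boundary, and which mid-run updates a phase model can safely accept.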

Crisis, Response and Resilience Features: This has already been touched on in the preparatory phase but is also clearly part of large scale model design. What is known (and needs to be known) about the nature of flooding? (For example, one important factor we discovered from looking at a real flood plan was that in locations with dangerous animals, additional problems can be created by these also escaping to unflooded locations (https://www.youtube.com/watch?v=PPpvciP5im8). We would have never worked that out “from the armchair”, meaning it would be left out of a model we would have created.) What policy interventions are considered feasible and how are they supposed to work? (Sometimes the value of modelling is just to show that a plausible sounding intervention doesn’t actually do what you expect.) What aspects of the system are likely to promote (tendency of households to store food) or impede (highly centralised provision of some services) resilience in practice? (And this in turn relates to a good understanding of as many aspects of the pre-crisis state as possible.)

Although a “single goal” model has been used as an example, it would also be a useful thought experiment to consider how the model would need to be different if the aim was the conservation of infrastructure rather than saving lives. When building models really intended for crisis management, however, single-issue models are likely to be problematic, since they might show damage in different areas but make no assessment of trade-offs. We experienced a recent example of this when epidemiological COVID models focused on COVID deaths but not on deaths caused by postponed operations or on the health impact of the economic costs of interventions – for example, depression and suicide caused by business failure. For an example of attempts at multi-criteria analyses see, for example, the UK NEA synthesis of key findings (http://uknea.unep-wcmc.org/Resources/tabid/82/Default.aspx) and the IPCC AR6 synthesis for policy makers (https://report.ipcc.ch/ar6syr/pdf/IPCC_AR6_SYR_SPM.pdf).

Model Quality Assurance and “Overheads”

Quality assurance runs right through the development of effective crisis models. Long before you start modelling, it is necessary to have an agreement on what the model should do, and the challenge of ontology is to justify why the model is as it is, and not some other way, to successfully achieve this goal. Here, ABM might benefit from more clearly following the idea of “research design”: a clear research question leading to a specifically chosen method, corresponding data collection and analysis leading to results that “provably” answer the right question. This is clearly very different from the still rather widespread “here’s a model and it does some stuff” approach. But the large scale design for the model should also (feeding into the specifics of implementation) set up standards to decide how the model is performing. In the case of crises rather than everyday repeated behaviours, this may require creative conceptual thinking about, for instance, “testing” the model on past flooding incidents (perhaps building on ideas about retrodiction, see for example Kreps and Ernst 2017). At the same time, it is necessary to be aware of the “overheads” of the model: what new data is needed to fill discovered gaps in the ontology and what existing data must continue to be collected to keep the model effective. Finally, attention must be paid to mundane quality control. How do we assure the absence of disastrous programming bugs? How sensitive is the model to specific assumptions, particularly those with limited empirical support? The answers to these questions obviously matter far more when someone is actually using the model for something “real” and where decisions may be taken that affect people’s livelihoods.
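The routine sensitivity check asked for here can be sketched simply. The outcome model, parameter values, and the 50% threshold below are all toy assumptions for illustration:

```python
# A minimal sketch (toy model, hypothetical parameters) of a sensitivity
# check: vary a weakly-supported assumption and compare the model's
# headline output across the sweep.
def deaths_model(flood_depth: float, evacuation_rate: float) -> float:
    """Toy outcome model: deaths fall as evacuation improves."""
    exposed = 1000 * flood_depth
    return exposed * (1 - evacuation_rate)

baseline = deaths_model(0.5, evacuation_rate=0.8)
sweep = {r: deaths_model(0.5, evacuation_rate=r) for r in (0.6, 0.7, 0.8, 0.9)}

# flag the assumption if plausible values change the output by > 50%
sensitive = any(abs(v - baseline) / baseline > 0.5 for v in sweep.values())
print(sensitive)  # True: the result hinges strongly on the evacuation rate
```

A flagged assumption is exactly the kind of "overhead" the text describes: it marks where new empirical data is most urgently needed.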

The “Dark Side”

It is also necessary to have a reflexive awareness of ways in which floods are not merely technocratic or philanthropic events. What if the unstated aims of a government in flood control are actually preserving the assets of their political allies? What if a flood model needs to take account of looters and rapists as well as the thirsty and homeless? And, of course, the modellers themselves have to guard against the possibility that models and their assumptions discriminate against the poor, the powerless, or the “socially invisible”. For example, while we have to be realistic about answering the questions that policy makers want answered, we also have to be scientifically critical about what problems they show no interest in.

Conclusion and Next Steps

One way to organise the conclusion of a rather wide-ranging group discussion is to say that the next steps are to make the best use of what already exists and (building on this) to most effectively discover what does not. This could be everything from a decent model of “decision making” during panic to establishing good will from relevant policy makers. At the same time, the activities proposed have to take place within a broad context of academic capabilities and dissemination channels (when people are very busy and have to operate within academic incentive structures). This process can be divided into a number of parts.

  • Getting the most out of models: What good work has been done in flood modelling and on what basis do we call it good? What set of existing model elements can we justify drawing on to build a progressive model? This would be an obvious opportunity for a directed literature review, perhaps building on the recent work of Zhuo and Han (2020).
  • Getting the most out of existing data: What is actually known about flooding that could inform the creation of better models? Do existing models use what is already known? Are there stylised facts that could prune the existing space of candidate models? Can an ABM synthesise interviews, statistics and role playing successfully? How? What appears not to be known? This might also suggest a complementary literature review or “data audit”. This data auditing process may also create specific sub-questions: How much do we know about what happens during a crisis and how do we know it? (For example, rather than asking responders to report when they are busy and in danger, could we make use of offline remote analysis of body cam data somehow?)
  • Getting the most out of the world: This involves combining modelling work with the review of existing data to argue for additional or more consistent data collection. If data matters to the agreed effectiveness of the model, then somehow it has to be collected. This is likely to be carried out through research grants or negotiation with existing data collection agencies and (except in a few areas like experiments) seems to be a relatively neglected aspect of ABM.
  • Getting the most out of policy makers: This is probably the largest unknown quantity. What is the “opening position” of policy makers on models and what steps do we need to take to move them towards a collaborative position if possible? This may have to be as basic as re-education away from common misperceptions about the technique (for example, that ABMs are unavoidably ad hoc). While this may include more standard academic activities like publishing popular accounts where policy makers are more likely to see them, really the only way to proceed here seems to be to have as many open-minded interactions with as many relevant people as possible to find out what might help the dialogue next.
  • Getting the most out of the population: This overlaps with the other categories. What can the likely actors in a crisis contribute before, during and after the crisis to more effective models? Can there be citizen science to collect data or civil society interventions with modelling justifications? What advantages might there be to discussions that don’t simply occur between academics and central government? This will probably involve the iteration of modelling, science communication and various participatory activities, all of which are already carried out in some areas of ABM.
  • Getting the most out of modellers: One lesson from the COVID crisis is that there is a strong tendency for the ABM community to build many separate (and ultimately non-comparable) models from scratch. We need to think both about how to enforce responsibility for quality where models are actually being used and also whether we can shift modelling culture towards more collaborative and progressive modes (https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/). One way to do this may be precisely to set up a test case on which people can volunteer to work collaboratively to develop this new approach in the hope of demonstrating its effectiveness.

If this piece can get people to combine to make these various next steps happen then it may have served its most useful function!

Acknowledgements

This piece is a result of discussions (both before and after the workshop) by Mike Bithell, Giangiacomo Bravo, Edmund Chattoe-Brown, Corinna Elsenbroich, Aashis Joshi, René Mellema, Mario Paolucci, Harko Verhagen and Thorid Wagenblast. Unless listed as authors above, these participants bear no responsibility for the final form of the written document summarising the discussion! We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such productive enterprises.

References

Epstein, J. M., Pankajakshan, R., and Hammond, R. A. (2011) ‘Combining Computational Fluid Dynamics and Agent-Based Modeling: A New Approach to Evacuation Planning’, PLoS ONE, 6(5), e20139. doi:10.1371/journal.pone.0020139

Gilbert, N., Chattoe-Brown, E., Watts, C., and Robertson, D. (2021) ‘Why We Need More Data before the Next Pandemic’, Sociologica, 15(3), pp. 125-143. doi:10.6092/issn.1971-8853/13221

Gilbert, N. G., and Troitzsch, K. G. (2005) Simulation for the Social Scientist (Buckingham: Open University Press).

Gilligan, J. M., Brady, C., Camp, J. V., Nay, J. J., and Sengupta, P. (2015) ‘Participatory Simulations of Urban Flooding for Learning and Decision Support’, 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, pp. 3174-3175. doi:10.1109/WSC.2015.7408456.

Holling, C. (2001) ‘Understanding the Complexity of Economic, Ecological, and Social Systems’, Ecosystems, 4, pp. 390-405. doi:10.1007/s10021-001-0101-5

Holmes, A. and Pilkington, M. (2011) ‘Storytelling, Floods, Wildflowers and Washlands: Oral History in the River Ouse Project’, Oral History, 39(2), Autumn, pp. 83-94. https://www.jstor.org/stable/41332167

Krebs, F. and Ernst, A. (2017) ‘A Spatially Explicit Agent-Based Model of the Diffusion of Green Electricity: Model Setup and Retrodictive Validation’, in Jager, W., Verbrugge, R., Flache, A., de Roo, G., Hoogduin, L. and Hemelrijk, C. (eds.) Advances in Social Simulation 2015 (Cham: Springer), pp. 217-230. doi:10.1007/978-3-319-47253-9_19

Luus, K. A., Robinson, D. T., and Deadman, P. J. (2013) ‘Representing ecological processes in agent-based models of land use and cover change’, Journal of Land Use Science, 8(2), pp. 175-198. doi:10.1080/1747423X.2011.640357

Mathias, K., Rawat, M., Philip, S. and Grills, N. (2020) ‘“We’ve Got Through Hard Times Before”: Acute Mental Distress and Coping among Disadvantaged Groups During COVID-19 Lockdown in North India: A Qualitative Study’, International Journal for Equity in Health, 19, article 224. doi:10.1186/s12939-020-01345-7

Mehryar, S., Surminski, S., and Edmonds, B. (2021) ‘Participatory Agent-Based Modelling for Flood Risk Insurance’, in Ahrweiler, P. and Neumann, M. (eds) Advances in Social Simulation, ESSA 2019 (Springer: Cham), pp. 263-267. doi:10.1007/978-3-030-61503-1_25

Zhuo, L. and Han, D. (2020) ‘Agent-Based Modelling and Flood Risk Management: A Compendious Literature Review’, Journal of Hydrology, 591, 125600. doi:10.1016/j.jhydrol.2020.125600


Bithell, M., Bravo, G., Chattoe-Brown, E., Mellema, R., Verhagen, H. and Wagenblast, T. (2023) Designing Crisis Models: Report of Workshop Activity and Prospectus for Future Research. Review of Artificial Societies and Social Simulation, 3 May 2023. https://rofasss.org/2023/05/03/designingcrisismodels


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Reply to Frank Dignum

By Edmund Chattoe-Brown

This is a reply to Frank Dignum’s reply (about Edmund Chattoe-Brown’s review of Frank’s book)

As my academic career continues, I have become more and more interested in the way that people justify their modelling choices. For example, almost every Agent-Based Modeller makes approving noises about validation (in the sense of comparing real and simulated data), but only a handful actually try to do it (Chattoe-Brown 2020). Thus I think two specific statements that Frank makes in his response should be considered carefully:

  1. “… we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises.” Firstly, negative claims (“This is not a banana”) are not generally helpful in argument. Secondly, readers want to know (or should want to know) what is being claimed and, importantly, how they would decide if it is true “objectively”. Given how many models sprang up under COVID, it is clear that what is described here cannot be the only way to do it, but the question is: how do we know you did it “better”? This was also my point about institutionalisation. For me, the big lesson from COVID was how much the automatic response of the ABM community seems to be to go in all directions and build yet more models in a tearing hurry rather than synthesise them, challenge them or test them empirically. I foresee a problem both with this response and our possible unwillingness to be self-aware about it. Governments will not want a million “interesting” models to choose from but one where they have externally checkable reasons to trust it, and that involves us changing our mindset (to be more like climate modellers, for example; Bithell & Edmonds 2021). For example, colleagues and I developed a comparison methodology that allowed for the practical difficulties of direct replication (Chattoe-Brown et al. 2021).
  2. The second quotation which amplifies this point is: “But we do think it is an extensive foundation from which others can start, either picking up some bits and pieces, deviating from it in specific ways or extending it in specific ways.” Again, here one has to ask the right question for progress in modelling. On what scientific grounds should people do this? On what grounds should someone reuse this model rather than start their own? Why isn’t the Dignum et al. model built on another “market leader” to set a good example? (My point about programming languages was purely practical not scientific. Frank is right that the model is no less valid because the programming language was changed but a version that is now unsupported seems less useful as a basis for the kind of further development advocated here.)

I am not totally sure I have understood Frank’s point about data, so I don’t want to press it, but my concern was that, generally, the book did not seem to “tap into” relevant empirical research (and this is a wider problem: models mostly talk about other models). It is true that parameter values can be adjusted arbitrarily in sensitivity analysis, but that does not get us any closer to empirically justified parameter values (which would then allow us to attempt validation by the “generative methodology”). Surely it is better to build a model that says something about the data that exists (however imperfect or approximate) than to rely on future data collection or educated guesses. I don’t really have the space to enumerate the times the book said “we did this for simplicity”, “we assumed that” and so on, but the cumulative effect is quite noticeable. Again, we need to be aware of the models which use real data in whatever aspects and “take forward” those inputs so they become modelling standards. This has to be a collective and not an individualistic enterprise.

References

Bithell, M. and Edmonds, B. (2021) The Systematic Comparison of Agent-Based Policy Models – It’s time we got our act together!. Review of Artificial Societies and Social Simulation, 11th May 2021. https://rofasss.org/2021/05/11/SystComp/

Chattoe-Brown, E. (2020) A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation. Review of Artificial Societies and Social Simulation, 12th June 2020. https://rofasss.org/2020/06/12/abm-validation-bib/

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation, 24(4). https://www.jasss.org/24/4/reviews/1.html

Chattoe-Brown, E., Gilbert, N., Robertson, D. A., & Watts, C. J. (2021). Reproduction as a Means of Evaluating Policy Models: A Case Study of a COVID-19 Simulation. medRxiv 2021.01.29.21250743; DOI: https://doi.org/10.1101/2021.01.29.21250743

Dignum, F. (2021) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8


Chattoe-Brown, E. (2021) Reply to Frank Dignum. Review of Artificial Societies and Social Simulation, 10th November 2021. https://rofasss.org/2021/11/10/reply-to-dignum/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”

By Frank Dignum

This is a reply to a review in JASSS (Chattoe-Brown 2021) of (Dignum 2021).

Before responding to some of the specific concerns of Edmund, I would like to thank him for the thorough review. I am especially happy with his conclusion that the book is solid enough to make it a valuable contribution to scientific progress in modelling crises. That was the main aim of the book and it seems that it has been achieved. I want to reiterate what we already remarked in the book: we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises. Nor do we claim that our simulations were without limitations. But we do think it is an extensive foundation from which others can start, either picking up some bits and pieces, deviating from it in specific ways or extending it in specific ways.

The concerns that are expressed by Edmund are certainly valid. I agree with some of them, but will nuance some others. First of all, there is the concern that we seem to abandon the NetLogo implementation and move to Repast. This fact does not make the ABM itself any less valid! In itself it is also an important finding. It is not possible to scale such a complex model in NetLogo beyond around two thousand agents. This is not just a limitation of our particular implementation, but a more general limitation of the platform. It leads to the important challenge to get more computer scientists involved in developing platforms for social simulations that both support the modelers adequately and provide efficient and scalable implementations.

It is completely true that the sheer size of the model and its results makes it difficult to trace back the importance and validity of every factor for the results. We have tried our best to highlight the most important aspects every time. But this leaves questions as to whether we made the right selection of highlighted aspects. As an illustration, we spent two months justifying our results of the simulations of the effectiveness of the track-and-tracing apps. We basically concluded that we need much better integrated analysis tools in the simulation platform. NetLogo is geared towards creating one simulation scenario, running the simulation and analyzing the results based on a few parameters. This is no longer sufficient when we have a model with which we can create many scenarios and have many parameters that influence a result. We used R to interpret the flood of data that was produced with every scenario. But R is not really the most user-friendly tool and also not specifically meant for analyzing the data from social simulations.

Let me jump to the third concern of Edmund and link it to the analysis of the results as well. While we tried to justify the results of our simulation on the effectiveness of the track-and-tracing app, we compared our simulation with an epidemiologically based model. This is described in chapter 12 of the book. Here we encountered the difference in the assumed number of contacts per day a person has with other persons. One can take the figures of 8 or 13 contacts from empirical work, as quoted by Edmund, and use them in the model. However, the dispute is not about the number of contacts a person has per day, but about what counts as a contact! For the COVID-19 simulations, standing next to a person in the queue in a supermarket for five minutes can count as a contact, while such a contact is not a meaningful contact in the cited literature. Thus, we see that what we take as empirically validated numbers might not at all be the right ones for our purpose. We have tried to justify all the values of parameters and outcomes in the context for which the simulations were created. We have also done quite a few sensitivity analyses, not all of which we reported on, just to keep the volume of the book to a reasonable size. Although we think we did a proper job in justifying all results, that does not mean that one cannot have different opinions on the value that some parameters should have. It would be very good to check the influence on the results of changes in these parameters. This would also progress scientific insights into the usefulness of complex models like the one we made!

I really think that an ABM crisis response should be institutional. That does not mean that one institution determines the best ABM, but rather that the ABM that is put forward by that institution is the result of a continuous debate among scientists working on ABMs for that type of crisis. For us, one of the more important outcomes of the ASSOCC project is that we really need much better tools to support the types of simulations that are needed in a crisis situation. However, it is very difficult to develop these tools as a single group. A lot of the effort needed is not publishable and thus not valued in an academic environment. I really think that the efforts that have been put into platforms such as NetLogo and Repast are laudable. They have been made possible by some generous grants and institutional support. We argue that this continuous support is also needed in order to be well equipped for a next crisis. But we do not argue that an institution would by definition have the last word on which is the best ABM. In an ideal case it would accumulate all academic efforts, as is done in the climate models, but even more restricted models would still be better than having a thousand individuals all claiming to have a usable ABM while governments have to react quickly to a crisis.

The final concern of Edmund is about the empirical scale of our simulations. This is completely true! Given the scale and details of what we can incorporate, we can only simulate some phenomena and certainly not everything around the COVID-19 crisis. We tried to be clear about this limitation. We had discussions about the Unity interface concerning this as well. It is in principle not very difficult to show people walking in the street, taking a car or a bus, etc. However, we decided to show a more abstract representation, just to make clear that our model is not a complete model of a small town functioning in all aspects. We have very carefully chosen which scenarios we can realistically simulate and give some insights into reality from. Maybe we should also have discussed more explicitly all the scenarios that we did not run, with the reasons why they would be difficult or unrealistic in our ABM. One never likes to discuss all the limitations of one’s labor, but it definitely can be very insightful. I have made up for this a little bit by submitting an article to a special issue on predictions with ABM, in which I explain in more detail the considerations for using a particular ABM to try to predict some state of affairs. Anyone interested to learn more about this can contact me.

To conclude this response to the review, I again express my gratitude for the good and thorough work done. The concerns that were raised are all very valuable to consider. What I tried to do in this response is to highlight that these concerns should be taken as a call to arms to put effort into social simulation platforms that give better support for creating simulations for a crisis.

References

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation, 24(4). https://www.jasss.org/24/4/reviews/1.html


Dignum, F. (2021) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

How Can ABM Models Become Part of the Policy-Making Process in Times of Emergencies – The S.I.S.A.R. Epidemic Model

By Gianpiero Pescarmona1, Pietro Terna2,*, Alberto Acquadro1, Paolo Pescarmona3, Giuseppe Russo4, and Stefano Terna5

*Corresponding author, 1University of Torino, IT, 2University of Torino, IT, retired & Collegio Carlo Alberto, IT, 3University of Groningen, NL, 4Centro Einaudi, Torino, IT, 5tomorrowdata.io

(A contribution to the: JASSS-Covid19-Thread)

We propose an agent-based model to simulate the diffusion of the Covid-19 epidemic, with Susceptible, Infected (symptomatic and asymptomatic), and Recovered people: hence the name S.I.s.a.R. The scheme comes from S.I.R. models, with (i) infected agents categorized as symptomatic or asymptomatic and (ii) the places of contagion specified in detail, thanks to agent-based modeling capabilities. Infection transmission depends on three factors: the characteristics of the infected person, those of the susceptible person, and those of the space in which the contact occurs. The model’s key asset is a tool that allows analyzing the sequences of contagions in simulated epidemics and identifying the places where they occur.
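
As a rough illustration of this three-factor structure (not the actual S.I.s.a.R. code), the contagion step can be sketched as follows; the weights, fragility classes, and the multiplicative combination rule are invented for the example.

```python
import random

# Illustrative place-related contagion weights; the real S.I.s.a.R. model
# calibrates against Piedmont data, so all numbers here are invented.
PLACE_RISK = {"home": 0.06, "workplace": 0.04, "school": 0.05,
              "nursing_home": 0.10, "hospital": 0.08, "open_space": 0.01}

def infectiousness(agent):
    """Factor 1: characteristics of the infected person."""
    return 1.3 if agent["symptomatic"] else 1.0

def intrinsic_susceptibility(agent):
    """Factor 2: characteristics of the susceptible person (age/fragility)."""
    return {"robust": 0.5, "regular": 1.0, "fragile": 1.6,
            "extra_fragile": 2.2}[agent["fragility"]]

def contagion_probability(infected, susceptible, place):
    """Factor 3 (the place) combined multiplicatively with the other two."""
    p = (PLACE_RISK[place] * infectiousness(infected)
         * intrinsic_susceptibility(susceptible))
    return min(p, 1.0)

def contact(infected, susceptible, place, rng=random):
    """One contact event; returns True if the infection is transmitted."""
    return rng.random() < contagion_probability(infected, susceptible, place)
```

The multiplicative combination makes the three factors separable, so (for instance) isolating fragile agents or closing a place type each translates into zeroing one factor of the product.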

The characteristics of the S.I.s.a.R. model

S.I.s.a.R. can be found at https://terna.to.it/simul/SIsaR.html with information on model construction, the draft of a paper also reporting results, and an online executable version of the simulation program, built using NetLogo. The model includes the structural data of Piedmont, an Italian region, but it can be readily calibrated for other areas. The model reproduces a realistic calendar of events (e.g., national or local government decisions) via a dedicated script interpreter.

Why another model? The starting point was the need to model the pandemic problem in a multi-scale way. This work was initiated a few months before the publication of new frontier articles, such as Bellomo et al. (2020), at a time when equation-based S.I.R. models, in their different versions, predominated.

Like any model, this one is based on some assumptions: time will tell whether these were reasonable hypotheses. Modeling the Covid-19 pandemic requires a scenario and the actors. As in a theatre play, the author defines the roles of the actors and the environment. The characters are not real; they are pre-built by the author, and they act according to their peculiar constraints. If the play is successful, it will run for a long time, even centuries. If not, we will rapidly forget it. Shakespeare’s Hamlet is still playing after centuries, even if the characters and the plot are entirely imaginary. The same holds for our simulations: we are the authors, we arbitrarily define the characters, and we force them to act again and again in different scenarios. However, in our model the micro-micro assumptions are not arbitrary but based on scientific hypotheses at the molecular level, and the micro agents’ behaviors are modeled in an explicit and realistic way. In both plays and simulations, we compress time: a whole life to 2 or 3 hours on the stage. In a few seconds, we run the Covid-19 pandemic spread in a given regional area.

With our model, we move from a macro compartmental vision to a meso and microanalysis capability. Its main characteristics are:

  • scalability: we take into account the interactions between the virus and molecules inside the host, the interactions between individuals in more or less restricted contexts, and the movement between different environments (home, school, workplace, open spaces, shops; in a second version, we will add transportation and long trips between regions/countries, discotheques, and other social aggregation events, such as football matches); the movements occur in different parts of daily life, as in Ghorbani et al. (2020);

the scales are:

    • micro, with the internal biochemical mechanism involved in reacting to the virus, as in Silvagno et al. (2020), from which we derive the critical importance assigned to an individual’s intrinsic susceptibility, related to age and previous morbidity episodes; the model incorporates the medical insights of one of its co-authors, a former full professor of clinical biochemistry, who also co-authored the quoted article; a comment in The Lancet (Horton, 2020) consistently signals the syndemic character of the current event: «Two categories of disease are interacting within specific populations—infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and an array of non-communicable diseases (NCDs)»;
    • meso, with the open and closed contexts where the agents behave, as reported above;
    • macro, with the emergent effects of the actions of the agents; this final analysis is a premise to evaluate the costs and benefits of the different intervention policies;
  • granularity: at any level, the interactions are partially random and therefore the final results always reflect the sum of the randomness at the different levels; changing the constraints at different levels and running multiple simulations should allow the identification of the most critical points, i.e., those on which the intervention should be focused.

Contagion sequences as a source of suggestions for intervention policies

The previous considerations are not exhaustive. The critical point that makes the production of a new model helpful is the creation of a tool for analyzing the sequences of contagions in simulated epidemics and identifying the places where they occur. We represent each infecting agent as a horizontal segment with a vertical connection to another agent receiving the infection. We represent the second agent via a further segment at an upper layer. With colors, line thickness, and styles, we display multiple dimensions of data.

As an example, look at Fig. 4: we start with two agents coming from the outside, with black as color code (external place). The first one–regular, as reported by the thickness of the segment, starting at day 0 and finishing at day 22–is asymptomatic (dashed line) and infects five agents; the second one–robust, as reported by the thickness of the segment, starting at day 0 and finishing at day 15–is asymptomatic (dashed line) and infects no one; the first of the five infected agents received the infection at home (cyan color) and turns out to be asymptomatic after a few days of incubation (dotted line), and so on. Solid lines identify symptomatic agents; brown color refers to workplaces, orange to nursing homes, yellow to schools, pink to hospitals, and gray to open spaces. Thick or extra-thick lines refer to fragile or extra-fragile agents, respectively.
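
The layered-segment layout described above can be sketched as a small data transformation: each contagion event is an (infector, infectee, day, place) record, each newly infected agent gets a fresh horizontal layer, and a vertical link joins the infector’s layer to the infectee’s at the day of contagion. The record fields and the sample events are our illustrative reconstruction, not the model’s actual code.

```python
# Each contagion event: (infector, infectee, day, place). Agents arriving
# from outside have infector None and place "external" (black in the
# paper's color code). Sample data invented for illustration.
events = [
    (None, "a1", 0, "external"),
    (None, "a2", 0, "external"),
    ("a1", "b1", 4, "home"),
    ("a1", "b2", 7, "workplace"),
    ("b1", "c1", 12, "school"),
]

def layout(events):
    """Assign each agent a horizontal layer (in order of first infection)
    and build vertical links (infector layer, infectee layer, day, place)."""
    layers, links = {}, []
    for infector, infectee, day, place in events:
        if infectee not in layers:
            layers[infectee] = len(layers)  # new segment, one layer up
        if infector is not None:
            links.append((layers[infector], layers[infectee], day, place))
    return layers, links

layers, links = layout(events)
```

Drawing a horizontal segment per layer and a vertical line per link (colored by place, styled by symptom status and fragility) then reproduces the plots of Figs. 1-4.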

This technique enables understanding at a glance how an epidemic episode is developing. In this way, it is easier to reason about countermeasures and, thus, to develop intervention policies. In Figs. 1-4, we can look both at the places where contagions occur and at the dynamics emerging with different levels of intervention. In Fig. 1 we find evidence of the role of workplaces in diffusing the infection, with a relevant number of infected fragile workers. In Fig. 2, with fragile workers isolated at home, the epidemic seems to end, but in Fig. 3 we see a thin event (a single case of contagion) that creates a bridge toward a second wave. Finally, in Fig. 4, we see that the epidemic is brought under control by isolating the workers and all kinds of fragile agents. (Please enlarge the on-screen images to see more details.)

Figure 1 – An epidemic with regular containment measures, showing a highly significant effect of workplaces (brown)

Figure 2 – The effects of stopping fragile workers at day 20, with a positive result, but home contagions (cyan) keep alive the pandemic, exploding again in workplaces (brown)

Figure 3 – Same, analyzing the first 200 infections with evidence of the event around day 110 with the new phase due to a unique asymptomatic worker

Figure 4 – Stopping fragile workers plus any case of fragility at day 15, also isolating nursing homes

Batches of simulation runs

The sequence in the steps described by the four figures is only a snapshot, a suggestion. We need to explore systematically the introduction of factual, counterfactual, and prospective interventions to control the spread of the contagions. Each simulation run–whose length coincides with the disappearance of symptomatic or asymptomatic contagion cases–is a datum in a wide scenario of variability in time and effects. Consequently, we need to represent compactly the results emerging from batches of repetitions, to compare the consequences of each batch’s basic assumptions.

For this purpose, we used blocks of one thousand repetitions. Besides summarizing the results with the usual statistical indicators, we adopted the heat-map technique. In this perspective, in agreement with Steinmann et al. (2020), we developed a tool for comparative analyses, not for forecasting. This consideration is consistent with the enormous standard deviation values that are intrinsic to the problem.
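
The batch-and-heat-map procedure can be sketched as follows: each run yields a (duration, total affected) pair, and a block of runs is binned into a 2D histogram whose cell counts drive the heat-map colors. The toy epidemic generator below is an invented stand-in for the actual model.

```python
import random
from collections import Counter

def toy_epidemic(rng):
    """Invented stand-in for one S.I.s.a.R. run: returns (duration in days,
    total symptomatic + asymptomatic + deceased)."""
    duration = max(1, int(rng.gauss(200, 80)))
    affected = max(1, int(rng.gauss(duration * 2, duration * 0.5)))
    return duration, affected

def heat_map(n_runs, day_bin=30, case_bin=100, seed=42):
    """2D histogram over (duration, total affected): one count per run,
    keyed by (day bin, case bin). Cell counts become heat-map colors."""
    rng = random.Random(seed)
    cells = Counter()
    for _ in range(n_runs):
        days, cases = toy_epidemic(rng)
        cells[(days // day_bin, cases // case_bin)] += 1
    return cells

cells = heat_map(1000)
```

Comparing the histograms of two batches run under different intervention assumptions then shows how a policy shifts the whole distribution of outcomes, rather than a single forecast.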

Figs. 5-6 provide two heat-maps reporting the duration of each simulated epidemic on the x axis and the number of symptomatic, asymptomatic, and deceased agents on the y axis, with 1,000 runs in both cases.

The actual data for Piedmont, where the curve of contagions flattened by the end of May at around 30 thousand subjects, fall in the cell in the first row, immediately to the right of the mode in Fig. 6. In the Fall, a second wave seems possible, jumping into one of the range of events on the right side of the same figure.

Figure 5 – 1000 epidemics without containment measures (2D histogram of Symptomatic+Asymptomatic+Deceased against days)

Figure 6 – 1000 epidemics with basic non-pharmaceutical containment measures, no school in September 2020 (2D histogram of Symptomatic+Asymptomatic+Deceased against days)

In Table 1 we have a set of statistical indicators related to 1,000 runs of the simulation with different initial conditions. Cases 1 and 2 are those of Figs. 5 and 6. Then we introduce Case 4, excluding from the workplace workers with health fragilities (who are highly susceptible to contagion), via smart working when possible or sick-pay provisions. The gain in the reduction of affected people and duration is relevant and increases further, in Case 5, if we keep all kinds of fragile people at home.

Scenarios | Total symptomatic | Total symptomatic, asymptomatic, deceased | Days
1. no control | 851.12 (288.52) | 2253.48 (767.58) | 340.10 (110.21)
2. basic controls, no school in Sep 2020 | 158.55 (174.10) | 416.98 (462.94) | 196.97 (131.18)
4. basic controls, stop fragile workers, no schools in Sep 2020 | 120.17 (149.10) | 334.68 (413.90) | 181.10 (125.46)
5. basic controls, stop fragile workers & fragile people, nursing-homes isolation, no schools in Sep 2020 | 105.63 (134.80) | 302.62 (382.14) | 174.39 (12.82)
7. basic controls, stop f. workers & fragile people, nursing-homes isolation, open factories, schools in Sep 2020 | 116.55 (130.91) | 374.68 (394.66) | 195.28 (119.33)

Table 1 – Statistical indicators (means, with standard deviations in parentheses) for a set of experiments; the row numbers are consistent with the larger set of simulation experiments reported at https://terna.to.it/simul/SIsaR.html

In Case 7, we show that keeping the conditions of Case 5 while opening schools and factories (workplaces in general) increases the adverse events only to a limited extent.
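The indicators in Table 1 are per-scenario means and standard deviations across the 1,000 runs. A minimal sketch of this summary step, with a hypothetical handful of per-run values standing in for the actual 1,000 outcomes:

```python
import statistics

# Hypothetical per-run outcomes for one scenario; in the study each list
# would contain 1,000 values collected from the simulation runs.
runs = {
    "total symptomatic": [850.0, 852.3, 848.9, 853.1],
    "days": [338.0, 341.5, 339.2, 342.0],
}

def summarize(values):
    """Mean and sample standard deviation, as reported in Table 1."""
    return statistics.mean(values), statistics.stdev(values)

for name, values in runs.items():
    mean, sd = summarize(values)
    print(f"{name}: {mean:.2f} ({sd:.2f})")
```

Given the huge standard deviations noted above, such summaries are best read comparatively across scenarios rather than as point forecasts.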

A second version

A second version of the model is under development, using SLAPP (https://terna.github.io/SLAPP/), a Python shell for agent-based models prepared by one of the authors of this note and inspired by the pioneering Swarm project of the Santa Fe Institute (http://www.swarm.org).

References

Bellomo, N., Bingham, R., Chaplain, M. A. J., Dosi, G., Forni, G., Knopoff, D. A., Lowengrub, J., Twarock, R., and Virgillito, M. E. (2020). A multi-scale model of virus pandemic: Heterogeneous interactive entities in a globally connected world. arXiv e-prints, art. arXiv:2006.03915, June.

Ghorbani, A., Lorig, F., de Bruin, B., Davidsson, P., Dignum, F., Dignum, V., van der Hurk, M., Jensen, M., Kammler, C., Kreulen, K., et al. (2020). The ASSOCC Simulation Model: A Response to the Community Call for the COVID-19 Pandemic. Review of Artificial Societies and Social Simulation. URL https://rofasss.org/2020/04/25/the-assocc-simulation-model/.

Horton, R. (2020). Offline: Covid-19 is not a pandemic. Lancet (London, England), 396(10255):874. URL https://www.thelancet.com/action/showPdf?pii=S0140-6736%2820%2932000-6.

Silvagno, F., Vernone, A., and Pescarmona, G. P. (2020). The Role of Glutathione in Protecting against the Severe Inflammatory Response Triggered by COVID-19. Antioxidants, 9(7), 624. http://dx.doi.org/10.3390/antiox9070624.

Steinmann, P., Wang, J. R., van Voorn, G. A., and Kwakkel, J. H. (2020). Don't try to predict COVID-19. If you must, use deep uncertainty methods. Review of Artificial Societies and Social Simulation, 17. https://rofasss.org/2020/04/17/deep-uncertainty/.

Pescarmona, G., Terna, P., Acquadro, A., Pescarmona, P., Russo, G., and Terna, S. (2020). How Can ABM Models Become Part of the Policy-Making Process in Times of Emergencies – The S.I.S.A.R. Epidemic Model. Review of Artificial Societies and Social Simulation, 20th Oct 2020. https://rofasss.org/2020/10/20/sisar/.

© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)