
An Institute for Crisis Modelling (ICM) – Towards a resilience center for sustained crisis modeling capability

By Fabian Lorig1*, Bart de Bruin2, Melania Borit3, Frank Dignum4, Bruce Edmonds5, Sinéad M. Madden6, Mario Paolucci7, Nicolas Payette8, Loïs Vanhée4

*Corresponding author
1 Internet of Things and People Research Center, Malmö University, Sweden
2 Delft University of Technology, Netherlands
3 CRAFT Lab, Arctic University of Norway, Tromsø, Norway
4 Department of Computing Science, Umeå University, Sweden
5 Centre for Policy Modelling, Manchester Metropolitan University Business School, UK
6 School of Engineering, University of Limerick, Ireland
7 Laboratory of Agent Based Social Simulation, ISTC/CNR, Italy
8 Complex Human-Environmental Systems Simulation Laboratory, University of Oxford, UK

The Need for an ICM

Most crises and disasters occur suddenly and hit society while it is unprepared. This makes it particularly challenging to react quickly to their occurrence, to adapt to the resulting new situation, to minimize the societal impact, and to recover from the disturbance. A recent example was the Covid-19 crisis, which revealed weak points in our crisis preparedness. Governments tried to put restrictions in place to limit the spread of the virus while ensuring the well-being of the population and, at the same time, preserving economic stability. It quickly became clear that interventions which worked well in some countries did not have the intended effect in others, largely because the success of interventions depends to a great extent on individual human behavior.

Agent-based Social Simulations (ABSS) explicitly model the behavior of individuals and their interactions within a population, allowing us to better understand social phenomena. ABSS are therefore well suited for investigating how our society might be affected by different crisis scenarios and how policies might affect the societal impact and consequences of these disturbances. During the Covid-19 crisis in particular, a great number of ABSS were developed to inform policy making around the globe (e.g., Dignum et al. 2020, Blakely et al. 2021, Lorig et al. 2021). However, weaknesses in creating useful and explainable simulations in a short time also became apparent, and a more consistent approach is still needed to be better prepared for the next crisis (Squazzoni et al. 2020). In particular, ABSS development approaches are currently geared more towards simulating one particular situation and validating the simulation using data from that situation. To be prepared for a crisis, one instead needs to simulate many different scenarios for which data might not yet be available. Such simulations also typically need a more interactive interface where stakeholders can experiment with different settings, policies, etc.

For ABSS to become an established, reliable, and well-esteemed method for supporting crisis management, we need to organize and consolidate the available competences and resources. It is not sufficient to react once a crisis occurs; instead, we need to proactively make sure that we are prepared for future disturbances and disasters. For this purpose, we also need to systematically address more fundamental problems of ABSS as a method of inquiry and particularly consider the specific requirements for the use of ABSS to support policy making, which may differ from the use of ABSS in academic research. We therefore see the need to establish an Institute for Crisis Modelling (ICM), a resilience center to ensure sustained crisis modeling capability.

The vision of starting an Institute for Crisis Modelling was the result of the discussions and working groups at the Lorentz Center workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” that took place in Leiden, Netherlands, from 27 February to 3 March 2023.

Vision of the ICM

“To have tools suitable to support policy actors in situations characterized by great uncertainty, large consequences, and dependence on human behavior.”

The ICM consists of a taskforce for quickly and efficiently supporting policy actors (e.g., decision makers, policy makers, policy analysts) in situations characterized by great uncertainty, large consequences, and dependence on human behavior. For this purpose, the taskforce consists of a larger (informal) network of associates who contribute their knowledge, skills, models, tools, and networks. The group of associates is composed of a core group of multidisciplinary modeling experts (ranging from social scientists and formal modelers to programmers) as well as partners who can contribute to specific focus areas (such as epidemiology, water management, etc.). The vision of the ICM is to consolidate and institutionalize the use of ABSS as a method for crisis management. Although ABSS competences may be physically distributed over a variety of universities, research centers, and other institutions, the ICM serves as a virtual location that coordinates research developments and provides a basic level of funding and a communication channel for ABSS for crisis management. This not only provides policy actors with a single point of contact, making it easier for them to identify whom to reach when simulation expertise is needed and to develop long-term trust relationships; it also enables us to jointly and systematically evolve ABSS into a valuable and established tool for crisis response. The center combines all necessary resources, competences, and tools to quickly develop new models, to adapt existing models, and to efficiently react to new situations.

To achieve this goal and to evolve and establish ABSS as a valuable tool for policy makers in crisis situations, research is needed in different areas. This includes the collection, development, critical analysis, and review of fundamental principles, theories, methods, and tools used in agent-based modeling. This also includes research on data handling (analysis, sharing, access, protection, visualization), data repositories, ontologies, user interfaces, methodologies, documentation, and ethical principles. Some of these points are concisely described in Dignum (2021, Ch. 14 and 15).

The ICM shall offer a wide portfolio of models, methods, techniques, design patterns, and components required to quickly and effectively facilitate the work of policy actors in crisis situations by providing them with adequate simulation models. To be able to provide specialized support, the institute will coordinate the human effort (e.g., the modelers) and maintain specific focus areas for which expertise and models are available. These might include, for instance, pandemics, natural disasters, or financial crises. For each of these focus areas, the center will develop different use cases, which ensures and facilitates rapid responses due to the availability of models, knowledge, and networks.

Objectives of the ICM

To achieve this vision, there is a series of objectives that a resilience center for sustained crisis modeling capability needs to address:

1) Coordinate and promote research

Providing quick and appropriate support for policy actors in crisis situations requires not only profound knowledge of existing models, methods, tools, and theories but also the systematic development of new approaches and methodologies. This will advance and evolve ABSS so that we are better prepared for future crises, and it will serve as a beacon for organizing ABSS research oriented towards practical applications.

2) Enable trusted connections with policy actors

Sustainable collaborations and interactions with decision-makers and policy analysts, as well as other relevant stakeholders, are a great challenge in ABSS. Getting in contact with the right actors, “speaking the same language”, and having realistic expectations are only some of the common problems that need to be addressed. Thus, the ICM should not only connect to policy actors in times of crises but have continuous interactions with them, provide sample simulations, develop use cases, and train the policy actors wherever possible.

3) Enable sustainability of the institute itself

Classic funding schemes are unfit for responding to crises, which require fast responses with always-available resources; moreover, the continuous build-up of knowledge, skills, networks, and technology requires a long-term perspective. Sustainable funding is needed to enable such continuity, for which the ICM provides a demarcated, unifying frame.

4) Actively maintain the network of associates

Maintaining a network of experts is challenging because it requires different competences and experiences. PhD candidates, for instance, might have great practical experience in using different simulation frameworks; however, after graduation, some might leave academia and others might move on to positions where they do not have the opportunity to use their simulation expertise. Thus, new experts need to be recruited continuously to form a resilient and balanced network.

5) Inform policy actors

Even the most advanced and profound models cannot do any good in crisis situations if demand from policy actors is lacking. Many modelers perceive a certain hesitation among policy actors regarding the use of ABSS, which might be due to unfamiliarity with the potential benefits and use cases of ABSS, a lack of trust in the method itself, or simply a lack of awareness that ABSS exists at all. Hence, the center needs to educate policy makers, raise awareness, and improve trust in ABSS.

6) Train the next generation of experts

Quickly developing suitable ABSS models in critical situations requires a variety of expertise. In addition to objective 4, the acquisition of associates, it is also of great importance to educate and train the next generation of experts. ABSS is still a niche field and is not taught as an inherent part of the methodological spectrum of most disciplines. The center shall promote and strengthen ABSS education to ensure the training of the next generation of experts.

7) Engage the general public

Finally, the success of ABSS depends not only on the trust of policy actors but also on how it is perceived by the general public. When interventions were developed and recommendations given during the Covid-19 crisis, public trust in the method was a crucial success factor. Moreover, developing realistic models requires the active participation of the general public.

Next steps

For ABSS to become a valuable and established tool for supporting policy actors in crisis situations, we are convinced that our efforts need to be institutionalized. This allows us to consolidate available competences, models, and tools as well as to coordinate research endeavors and the development of new approaches required to ensure a sustained crisis modeling capability.

To further pursue this vision, a Special Interest Group (SIG) on Building ResilienCe with Social Simulations (BRICSS) was established at the European Social Simulation Association (ESSA). Moreover, Special Tracks will be organized at the 2023 Social Simulation Conference (SSC) to bring together interested experts.

However, for this vision to become reality, the next steps towards establishing an Institute for Crisis Modelling consist of bringing together ambitious and competent associates as well as identifying core funding opportunities for the center. Readers who feel motivated to contribute in any way to this topic are encouraged to contact Frank Dignum, Umeå University, Sweden, or any of the authors of this article.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise. The final report of the workshop as well as more information can be found on the webpage of the Lorentz Center: https://www.lorentzcenter.nl/agent-based-simulations-for-societal-resilience-in-crisis-situations.html

References

Blakely, T., Thompson, J., Bablani, L., Andersen, P., Ouakrim, D. A., Carvalho, N., Abraham, P., Boujaoude, M.A., Katar, A., Akpan, E., Wilson, N. & Stevenson, M. (2021). Determining the optimal COVID-19 policy response using agent-based modelling linked to health and cost modelling: Case study for Victoria, Australia. medRxiv, 2021-01.

Dignum, F., Dignum, V., Davidsson, P., Ghorbani, A., van der Hurk, M., Jensen, M., Kammler C., Lorig, F., Ludescher, L.G., Melchior, A., Mellema, R., Pastrav, C., Vanhee, L. & Verhagen, H. (2020). Analysing the combined health, social and economic impacts of the coronavirus pandemic using agent-based social simulation. Minds and Machines, 30, 177-194. doi: 10.1007/s11023-020-09527-6

Dignum, F. (ed.). (2021) Social Simulation for a Crisis; Results and Lessons from Simulating the COVID-19 Crisis. Springer.

Lorig, F., Johansson, E. & Davidsson, P. (2021). Agent-Based Social Simulation of the Covid-19 Pandemic: A Systematic Review. Journal of Artificial Societies and Social Simulation, 24(3), 5. http://jasss.soc.surrey.ac.uk/24/3/5.html. doi: 10.18564/jasss.4601

Squazzoni, F. et al. (2020). Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2), 10. http://jasss.soc.surrey.ac.uk/23/2/10.html. doi: 10.18564/jasss.4298


Lorig, F., de Bruin, B., Borit, M., Dignum, F., Edmonds, B., Madden, S.M., Paolucci, M., Payette, N. and Vanhée, L. (2023) An Institute for Crisis Modelling (ICM) – Towards a resilience center for sustained crisis modeling capability. Review of Artificial Societies and Social Simulation, 22 May 2023. https://rofasss.org/2023/05/22/icm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Towards an Agent-based Platform for Crisis Management

By Christian Kammler1, Maarten Jensen1, Rajith Vidanaarachchi2, Cezara Păstrăv1

  1. Department of Computer Science, Umeå University, Sweden
  2. Transport, Health, and Urban Design (THUD) Research Lab, The University of Melbourne, Australia

“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” — John Woods

1 Introduction

Agent-based modelling can be a valuable tool for gaining insight into crises [3], both before a crisis, to increase resilience, and during one. However, in the current state of the art, models have to be built from scratch, which is ill-suited to a crisis situation as it hinders quick responses. Consequently, the models do not play the central supportive role that they could. Not only is it hard to compare existing models (given the absence of existing standards) and assess their quality, but the most widespread toolkits, such as NetLogo [6], MESA (Python) [4], Repast (Java) [1,5], or Agents.jl (Julia) [2], are also specific to the modelling field and lack the platform support necessary to empower policy makers to use the model (see Figure 1).

Fig. 1. The platform sits in the middle as a connector between the code and the model, and as the interaction point for the user. It must not require any expert knowledge.
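To illustrate the "built from scratch" problem, consider a minimal sketch of a toy flood model in MESA (assuming the pre-3.0 Mesa API with schedulers; the model, its agents, and all parameters are hypothetical). Even this trivial example requires hand-written scaffolding for agents, scheduling, and the run loop, and offers nothing a policy maker could interact with:

    # Minimal toy flood model in Mesa (pre-3.0 API assumed); all names hypothetical.
    from mesa import Agent, Model
    from mesa.time import RandomActivation

    class Household(Agent):
        """A household that may evacuate once it perceives the flood risk."""
        def __init__(self, unique_id, model):
            super().__init__(unique_id, model)
            self.evacuated = False

        def step(self):
            # Toy rule: evacuate with probability equal to the perceived risk.
            if not self.evacuated and self.random.random() < self.model.risk:
                self.evacuated = True

    class FloodModel(Model):
        def __init__(self, n_households=100, risk=0.1):
            super().__init__()
            self.risk = risk
            self.schedule = RandomActivation(self)
            for i in range(n_households):
                self.schedule.add(Household(i, self))

        def step(self):
            self.schedule.step()

    model = FloodModel()
    for _ in range(10):
        model.step()
    print(sum(a.evacuated for a in model.schedule.agents), "households evacuated")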

While some of these issues are systemic within the field of ABM (Agent-Based Modelling) itself, we aim to alleviate some of them in this particular context by using a platform purpose-built for developing and using ABM in a crisis. To do so, we view the problem through a multi-dimensional space consisting of the dimensions A-F:

  • A: Back-end to front-end interactivity
  • B: User and stakeholder levels
    – Social simulators to domain experts to policymakers
    – Skills and expertise in coding, modelling and manipulating a model
  • C: Crisis levels (Risk, Crisis, Resilience – also identified as – Pre Crisis, In Crisis, Post Crisis)
  • D: Language specific to language independent
  • E: Domain specific to domain-independent (e.g., flooding, pandemic, climate change)
  • F: Required iteration level (Instant, rapid, slow)

A platform can now be viewed as a vector within this space. While all of these axes require in-depth research (for example in terms of correlation or where existing platforms fit), we chose to focus on the functionalities we believe would be the most relevant in ABM for crises.
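As a purely illustrative sketch of this "platform as a vector" idea (the field names, scales, and example values below are our own assumptions, not an established scheme), one could encode a platform's position along the dimensions A-F like this:

    # Hypothetical encoding of a platform as a vector in the space A-F.
    from dataclasses import dataclass

    @dataclass
    class PlatformProfile:
        interactivity: float           # A: back-end (0.0) to front-end (1.0)
        user_level: float              # B: social simulators (0.0) to policymakers (1.0)
        crisis_phase: str              # C: "pre", "in", or "post" crisis
        language_independence: float   # D: language-specific (0.0) to independent (1.0)
        domain_independence: float     # E: domain-specific (0.0) to independent (1.0)
        iteration_speed: str           # F: "instant", "rapid", or "slow"

    # Two illustrative points in the space: a classic modelling toolkit
    # versus the kind of crisis platform envisioned in this piece.
    toolkit = PlatformProfile(0.3, 0.1, "pre", 0.0, 1.0, "slow")
    crisis_platform = PlatformProfile(0.9, 0.9, "in", 0.8, 0.5, "instant")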

2 Rapid Development

During a crisis, time is compressed and fast iterations are needed (this mainly concerns axes C and F): instant and rapid iterations are necessary, while slow iterations are not suitable. As the crisis develops, the model may need to be adjusted to quickly absorb new data, actors, events, and response strategies, leading to new scenarios that need to be modelled and simulated. In this environment, models need to be built with reusability and rapid versioning in mind from the beginning; otherwise, every new change makes the model more unstable and less trustworthy.

While a suite of best practices exists in general software development, they are not widely used in the agent-based modelling community. The platform needs a coding environment that favors modular, reusable code, allows easy storage and sharing of such modules in well-organized libraries, and makes it easy to integrate existing modules with new code.
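One way such modularity could look in practice is sketched below: a minimal, hypothetical module contract in Python (the class and method names are ours, not an existing platform API) that bundles a behaviour with a short conceptual description so modules stay swappable:

    # Hypothetical module contract for pluggable agent behaviours.
    from abc import ABC, abstractmethod

    class BehaviourModule(ABC):
        """Pluggable agent behaviour, bundled with its conceptual description."""

        # Short conceptual-model summary stored alongside the code.
        description: str = ""

        @abstractmethod
        def decide(self, agent, context: dict) -> str:
            """Return an action name given an agent and its observed context."""

    class SimpleEvacuation(BehaviourModule):
        description = "Evacuate when the perceived water level exceeds a threshold."

        def __init__(self, threshold: float = 0.5):
            self.threshold = threshold

        def decide(self, agent, context: dict) -> str:
            water = context.get("water_level", 0.0)
            return "evacuate" if water > self.threshold else "stay"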

This modularity not only helps with the right side of Figure 1; we can use it to help with the left side of the figure at the same time. That is, the conceptual model can be part of the respective module, allowing users to quickly determine whether a module is relevant and to understand what the module is doing. Furthermore, it can be used to create a top-level, drag-and-drop style model building environment that allows rapid changes without having to write code (given that we take care of the interface properly).

Having the code and the conceptual model together would also lower the effort required to review these modules. The platform can further help with this task by keeping track of which modules have been reviewed and by versioning the modules, as they can be annotated accordingly. It has to be noted, however, that such a system does not guarantee a trustworthy model, even if the model is up to date in terms of versioning.
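A minimal sketch of what such review and version annotations could look like (the field names are illustrative assumptions, not an existing standard):

    # Hypothetical per-module review/version record the platform could track.
    from dataclasses import dataclass, field

    @dataclass
    class ModuleRecord:
        name: str
        version: str
        reviewed_by: list[str] = field(default_factory=list)
        review_passed: bool = False  # note: a passed review does not guarantee trust

    registry = {
        "simple-evacuation": ModuleRecord(
            name="simple-evacuation", version="1.2.0",
            reviewed_by=["reviewer-a"], review_passed=True),
    }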

3 Model transparency

Another key factor we want to focus on is the stakeholder dimension (axis B). These people, mainly on the left side of Figure 1, are not experts in terms of models and thus need extensive support to be empowered to use the simulation in a way that is meaningful for them. While for the visualization side (the how?) we can use insights from data visualization, the why side is not that easy.

In a crisis, it is crucial to quickly determine why the model behaves in a certain way in order to interpret the results. Here, the platform can help by offering tools to build model narratives (at agent, group, or whole-population level), to detect events and trends, and to compare model behavior between runs. We can take inspiration from the larger software development field for a few useful ideas on how to visually track model elements, log the behavior of model elements, or raise flags when certain conditions or events are detected. However, we also have to be careful here, as we easily move towards the technical solution side and away from the stakeholder and policy maker. Therefore, more research has to be done on what support policy makers actually need. One avenue here could be techniques from data storytelling.
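As a sketch of the flag-raising idea (a generic observer pattern we assume for illustration, not taken from any existing toolkit), the platform could let users register named conditions that are checked and logged at every step:

    # Hypothetical event watcher: log named events when conditions hold.
    from typing import Callable

    class EventWatcher:
        def __init__(self):
            self.rules: list[tuple[str, Callable[[dict], bool]]] = []
            self.log: list[tuple[int, str]] = []

        def watch(self, name: str, condition: Callable[[dict], bool]) -> None:
            self.rules.append((name, condition))

        def check(self, step: int, state: dict) -> None:
            for name, condition in self.rules:
                if condition(state):
                    self.log.append((step, name))

    watcher = EventWatcher()
    watcher.watch("mass evacuation", lambda s: s["evacuated_fraction"] > 0.5)
    # Inside the model loop, after each step:
    watcher.check(step=12, state={"evacuated_fraction": 0.62})
    print(watcher.log)  # [(12, 'mass evacuation')]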

4 The way forward

What this platform will look like depends on the approaches we take going forward. We think that the following two questions are central (also to prompt further research):

  1. What are relevant roles that can be identified for a platform?
  2. Given a role for the platform, where should it exist within the space described, and what attributes/characteristics should it have?

While these questions are key to identifying whether existing platforms can be extended and shaped in the way we need them or whether we need to build a sandbox from scratch, we strongly advocate for an open-source approach. An open-source approach can not only draw on the range of expertise spread across the field but also alleviate some of the trust challenges. One of the main challenges is that a trustworthy, well-curated model base with different modules does not yet exist. As such, the platform should aim first to aid in building this shared resource and add more related functionality as it becomes relevant. As for model tracking tools, we should aim for simple tools first and build more complex functionality on top of them later.

A starting point can be to build modules for existing crises, such as earthquakes or floods, where it is possible to pre-identify most of the modelling needs, the level of stakeholder engagement, the level of policymaker engagement, etc.

With this, we can establish the process of open-source modelling, learn how to integrate new knowledge quickly, and potentially be better prepared for unknown crises in the future.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise.

References

  1. Collier, N., North, M.: Parallel agent-based simulation with Repast for high performance computing. SIMULATION 89(10), 1215–1235 (2013), https://doi.org/10.1177/0037549712462620
  2. Datseris, G., Vahdati, A.R., DuBois, T.C.: Agents.jl: a performant and feature-full agent-based modeling software of minimal code complexity. SIMULATION 0(0), 00375497211068820 (2022), https://doi.org/10.1177/00375497211068820
  3. Dignum, F. (ed.): Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer International Publishing, Cham (2021)
  4. Kazil, J., Masad, D., Crooks, A.: Utilizing Python for agent-based modeling: The Mesa framework. In: Thomson, R., Bisgin, H., Dancy, C., Hyder, A., Hussain, M. (eds.) Social, Cultural, and Behavioral Modeling. pp. 308–317. Springer International Publishing, Cham (2020)
  5. North, M.J., Collier, N.T., Ozik, J., Tatara, E.R., Macal, C.M., Bragen, M., Sydelko, P.: Complex adaptive systems modeling with Repast Simphony. Complex Adaptive Systems Modeling 1(1), 3 (March 2013), https://doi.org/10.1186/2194-3206-1-3
  6. Wilensky, U.: NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1999), http://ccl.northwestern.edu/netlogo/

Kammler, C., Jensen, M., Vidanaarachchi, R. and Păstrăv, C. (2023) Towards an Agent-based Platform for Crisis Management. Review of Artificial Societies and Social Simulation, 10 May 2023. https://rofasss.org/2023/05/10/abm4cm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Designing Crisis Models: Report of Workshop Activity and Prospectus for Future Research

By: Mike Bithell1, Giangiacomo Bravo2, Edmund Chattoe-Brown3, René Mellema4, Harko Verhagen5 and Thorid Wagenblast6

  1. Formerly Department of Geography, University of Cambridge
  2. Center for Data Intensive Sciences and Applications, Linnaeus University
  3. School of Media, Communication and Sociology, University of Leicester
  4. Department of Computing Science, Umeå Universitet
  5. Department of Computer and Systems Sciences, Stockholm University
  6. Department of Multi-Actor Systems, Delft University of Technology

Background

This piece arose from a Lorentz Center (Leiden) workshop on Agent Based Simulations for Societal Resilience in Crisis Situations held from 27 February to 3 March 2023 (https://www.lorentzcenter.nl/agent-based-simulations-for-societal-resilience-in-crisis-situations.html). During the week, our group was tasked with discussing requirements for Agent-Based Models (hereafter ABM) that could be useful in a crisis situation. Here we report on our discussion and propose some key challenges for platform support for models that deal with such situations.

Introduction

When it comes to crisis situations, modelling can provide insights into which responses are best, how to avoid further negative spill-over consequences of policy interventions, and which arrangements could be useful to increase present or future resilience. This approach can be helpful in preparation for a crisis situation, for management during the event itself, or in the post-crisis evaluation of response effectiveness. Further, evaluation of performance in these areas can also lead to subsequent progressive improvement of the models themselves. However, to serve these ends, models need to be built in the most effective way possible. Part of the goal of this piece is to outline what might be needed to make such models effective in various ways and why: reliability, validity, flexibility and so on. Often, diverse models seem to be built ad hoc when the crisis situation occurs, putting the modellers under time pressure, which can lead to important system aspects being neglected (https://www.jasss.org/24/4/reviews/1.html). This is part of a more general tendency, contrary to, say, the development of climate modelling, to merely proliferate ABM rather than progress them (https://rofasss.org/2021/05/11/systcomp/). Therefore, we propose some guidance about how to make models for crises that may better inform policy makers about the potential effects of the policies under discussion. Furthermore, we draw attention to the fact that modelling may need to be just part of a wider process of crisis response that occurs both before and after the crisis and not just while it is happening.

Crisis and Resilience: A Working Definition

A crisis can be defined as an initial (relatively stable) state that is disrupted in some way (e.g., through a natural disaster such as a flood) and after some time reaches a new relatively stable state, possibly inferior (or rarely superior – as when an earthquake leads to reconstruction of safer housing) to the initial one (see Fig. 1).


Fig. 1: Potential outcomes of a disruption of an initial (stable) state.

While some data about daily life may be routinely collected for the initial state and perhaps as the disruption evolves, it is rarely known how the disruption will affect the initial state and how it will subsequently evolve into the new state. (The non-feasibility of collecting much data during a crisis may also draw attention to methods that can more effectively be used, for example, oral history data – see, for example, Holmes and Pilkington 2011.) ABM can help increase the understanding of those changes by providing justified – i.e. process-based – scenarios under different circumstances. Based on this definition, and justifying it, we can identify several distinct senses of resilience (for a wider theoretical treatment see, for example, Holling 2001). We decided to use the example of flooding because the group did not have much pre-existing expertise in it and because it seemed like a fairly typical kind of crisis from which to draw potentially generalisable conclusions. However, it should be recognised that not all crises are “known” and building effective resilience capacity for “unknown” crises (like alien invasion) remains an open challenge.

Firstly, a system can be resilient if it is able to return quickly to a desirable state after disruption. For example, a system that allows education and healthcare to become available again in at least their previous forms soon after the water goes down.

Secondly, however, the system is not resilient if it cannot return to anything like its original state (i.e. the society was only functioning at a particular level because it happened that there was no flood in a flood zone), usually owing to resource constraints, poor governance and persistent social inequality. (It is probably only higher income countries that can afford to “build back better” after a crisis. All that low income countries can often do is hope crises do not happen.) This raises the possibility that more should be invested in resilience without immediate payoff to create a state you can actually return to (or, better, one where vulnerability is reduced) rather than a “Fool’s Paradise” state. This would involve comparison of future welfare streams and potential trade-offs under different investment strategies.

Thirdly, and probably more usually, the system can be considered resilient if it can deliver alternative modes of provision (for example of food) during the crisis. People can no longer go shopping when they want but they can be fed effectively at local community centres which they are nonetheless able to reach despite the flood water.

The final insight that we took from these working definitions is that daily routines operate over different time scales and it may be these scales that determine the unfolding nature of different crises. For example, individuals in a flood area must immediately avoid drowning. They will very rapidly need clean water to drink and food to eat. Soon after, they may well have shelter requirements. After that, there may be a need for medical care and only in the rather longer term for things like education, restored housing and community infrastructure.

Thus, an effective response to a crisis is one that is able to provide what is needed over the timescale at which it occurs (initially escape routes or evacuation procedures, then distribution of water and food and so on), taking into account different levels of need. It is an inability to do this (or one goal conflicting with another as when people escape successfully but in a way that means they cannot then be fed) which leads to the various causes of death (and, in the longer term things like impoverishment – so ideally farmers should be able to save at least some of their livestock as well as themselves) like drowning, starvation, death by waterborne diseases and so on. The effects of some aspects of a crisis (like education disruption and “learning loss”, destruction of community life and of mental health or loss of social capital) may be very long term if they cannot be avoided (and there may therefore be a danger of responding mainly to the “most obvious” effects which may not ultimately be the most damaging).

Preparing for the Model

To deal effectively with a crisis, it is crucial not to “just start building an ABM”, but to approach construction in a structured manner. First, the initial state needs to be defined and modelled. As well as making use of existing data (and perhaps identifying the need to collect additional data going forward, see Gilbert et al. 2021), this is likely to involve engaging with stakeholders, including policy makers, to collect information, for example, on decision-making procedures. Ideally, the process will be carried out in advance of the crisis and regularly updated if changes in the represented system occur (https://rofasss.org/2018/08/22/mb/). This idea is similar to a digital twin https://www.arup.com/perspectives/digital-twin-managing-real-flood-risks-in-a-virtual-world or the “PetaByte Playbook” suggested by Joshua Epstein – Epstein et al. 2011. Second, as much information as possible about potential disruptions should be gathered. This is the sort of data often revealed by emergency planning exercises (https://www.osha.gov/flood), for example involving flood maps, climate/weather assessments (https://check-for-flooding.service.gov.uk/)  or insight into general system vulnerabilities – for example the effects of parts of the road network being underwater – as well as dissections of failed crisis responses in the particular area being modelled and elsewhere (https://www.theguardian.com/environment/2014/feb/02/flooding-winter-defences-environment-climate-change). Third, available documents such as flood plans (https://www.peterborough.gov.uk/council/planning-and-development/flood-and-water-management/water-data) should be checked to get an idea of official crisis response (and also objectives, see below) and thus provide face validity for the proposed model. It should be recognised that certain groups, often disadvantaged, may be engaging in activities – like work – “under the radar” of official data collection: https://www.nytimes.com/2021/09/27/nyregion/hurricane-ida-aid-undocumented-immigrants.html. Engaging with such communities as well as official bodies is likely to be an important aspect of successful crisis management (e.g. Mathias et al. 2020). The general principle here is to do as much effective work as possible before any crisis starts and to divide what can be done in readiness from what can only be done during or after a crisis.

Scoping the Model

As already suggested above, one thing that can and should be done before the crisis is to scope the model for its intended use. This involves reaching a consensus on who the model and its outputs are for and what it is meant to achieve. There is some tendency in ABM for modellers to assume that whatever model they produce (even if they don’t attend much to a context of data or policy) must be what policy makers and other users need. Besides asking policy makers, this may also require the negotiation of power relationships so that the needs of the model don’t just reflect the interests/perspective of politicians but also numerous and important but “politically weak” groups like small scale farmers or local manufacturers. Scoping refers not just to technical matters (Is the code effectively debugged? What evidence can be provided that the policy makers should trust the model?) but also to “softer” preparations like building trust and effective communication with the policy makers themselves. This should probably focus any literature reviewing exercise on flood management using models that are at least to some extent backed by participatory approaches (for example, work like Mehryar et al. 2021 and Gilligan et al. 2015). It would also be useful to find some way to get policy makers to respond effectively to the existing set of models to direct what can most usefully be “rescued” from them in a user context. (The models that modellers like may not be the ones that policy makers find most useful.)

At the same time, participatory approaches face the unavoidable challenge of interfacing with the scientific process. No matter how many experts believe something to be true, the evidence may nonetheless disagree. So another part of the effective collaboration is to make sure that, whatever its aims, the model is still constructed according to an appropriate methodology (for example being designed to answer clear and specific research questions). This aim obliges us to recognise that the relationship between modellers and policy makers may not just involve evidence and argument but also power, so that modellers then have to decide what compromises they are willing to make to maintain a relationship. In the limit, this may involve negotiating the popular perception that policy makers only listen to academics when they confirm decisions that have already been taken for other reasons. But the existence of power also suggests that modelling may not only be effective with current governments (the most “obvious” power source) but also with opposition parties, effective lobbyists, and NGOs, in building bridges to enhance the voice of “the academic community” and so on.

Finally, one important issue may be to consider whether “the model” is a useful response at all. In order to make an effective compromise (or meet various modelling challenges) it might be necessary to design a set of models with different purposes and scales and consider how/whether they should interface. The necessity for such integration in human-environment systems is already widely recognised (see for example Luus et al. 2013) but it may need to be adjusted more precisely to crisis management models. This is also important because it may be counter-productive to reify policy makers and equate them to the activities of the central government. It may be more worthwhile to get emergency responders or regional health planners, NGOs or even local communities interested in the modelling approach in the first instance.

Large Scale Issues of Model Design

Much as with the research process generally, effective modelling has to proceed through a sequence of steps, each one dependent on the quality of the steps before it. Having characterised a crisis (and looked at existing data/modelling efforts) and achieved a workable measure of consensus regarding who the model is for and (broadly) what it needs to do, the next step is to consider large scale issues of model design (as opposed, for example, to specific details of architecture or coding.)

Suppose, for example, that a model was designed to test scenarios to minimise the death toll in the flooding of a particular area so that governments could focus their flood prevention efforts accordingly (build new defences, create evacuation infrastructure, etc.) The sort of large scale issues that would need to be addressed are as follows:

Model Boundaries: Does it make sense just to model the relevant region? Can deaths within the region be clearly distinguished from those outside it (for example people who escape to die subsequently)? Can the costs and benefits of specific interventions similarly be limited to being clearly inside a model region? What about the extent to which assistance must, by its nature, come from outside the affected area? In accordance with general ABM methodology (Gilbert and Troitzsch 2005), the model needs to represent a system with a clearly and coherently specified “inside” and “outside” to work effectively. This is another example of an area where there will have to be a compromise between the sway of policy makers (who may prefer a model that can supposedly do everything) and the value of properly followed scientific method.

Model Scale: This will also inevitably be a compromise between what is desirable in the abstract and what is practical (shaped by technical issues). Can a single model run with enough agents to unfold the consequences of a year after a flood over a whole region? If the aim is to consider only deaths, then does it need to run that long or that widely? Can the model run fast enough (and be altered fast enough) to deliver the answers that policy makers need over the time scale at which they need them? This kind of model practicality, when compared with the “back of an envelope” calculations beloved of policy advisors, is also a strong argument for progressive modelling (where efforts can be combined in one model rather than diffused among many.)

Model Ontology: One advantage of the modelling process is to serve as a checklist for necessary knowledge. For example, we have to assume something about how individuals make decisions when faced with rising water levels. Ontology is about the evidence base for putting particular things in models or modelling in certain ways. For example, on what grounds do we build an ABM rather than a System Dynamics model beyond doing what we prefer? On what grounds are social networks to be included in a model of emergency evacuation (for example that people are known to rescue not just themselves but their friends and kin in real floods)? Based on wider experience of modelling, the problems here are that model ontologies are often non-empirical, that the assumptions of different models contradict each other and so on. It is unlikely that we already have all the data we need to populate these models but we are required for their effectiveness to be honest about the process where we ideally proceed from completely “made up” models to steadily increasing quality/consensus of ontology. This will involve a mixture of exploring existing models, integrating data with modelling and methods for testing reliability, and perhaps drawing on wider ideas (like modularisation where some modellers specialise in justifying cognitive models, others in transport models and so on). Finally, the ontological dimension may have to involve thinking effectively about what it means to interface a hydrological model (say) with a model of human behaviour and how to separate out the challenges of interfacing the best justified model of each kind. This connects to the issue above about how many models we may need to build an effective compromise with the aims of policy makers.

It should be noted that these dimensions of large scale design may interact. For example, we may need less fine grained models of regions outside the flooded area to understand the challenges of assistance (perhaps there are infrastructure bottlenecks unrelated to the flooding) and escape (will we be able to account for and support victims of the flood who scatter to friends and relatives in other areas? Might escapees create spill over crises in other regions of a low income country?). Another example of such interactions would be that ecological considerations might not apply to very short term models of evacuation but might be much more important to long term models of economic welfare or environmental sustainability in a region. It is instructive to recall that in Ancient Egypt, it was the absence of Nile flooding that was the disaster!

Technical Issues: One argument in favour of trying to focus on specific challenges (like models of flood crises suitable for policy makers) is that they may help to identify specific challenges to modelling or innovations in technique. For example, if a flooding crisis can be usefully divided into phases (immediate, medium and long term) then we may need sets of models each of which creates starting conditions for the next. We are not currently aware of any attention paid to this “model chaining” problem. Another example is the capacity that workshop participants christened “informability”, the ability of a model to easily and quickly incorporate new data (and perhaps even new behaviours) as a situation unfolds. There is a tendency, not always well justified, for ABM to be “wound up” with fixed behaviours and parameters and just left to run. This is only sometimes a good approximation to the social world.
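The "model chaining" idea can be made concrete with a small sketch (the phase functions below are stand-ins we invented for illustration; any real phase would itself be a full simulation): each phase model consumes the end state of the previous one.

    # Hypothetical sketch of model chaining across crisis phases.
    from typing import Callable

    State = dict  # e.g. {"population": ..., "evacuated": ..., "sheltered": ...}

    def run_immediate_phase(state: State) -> State:
        # Stand-in for an evacuation model running over hours/days.
        return {**state, "evacuated": int(state["population"] * 0.8)}

    def run_medium_phase(state: State) -> State:
        # Stand-in for a water/food/shelter model running over weeks.
        return {**state, "sheltered": state["evacuated"]}

    def run_long_phase(state: State) -> State:
        # Stand-in for a reconstruction model running over months.
        return {**state, "services_restored": True}

    CHAIN: list[Callable[[State], State]] = [
        run_immediate_phase, run_medium_phase, run_long_phase,
    ]

    state: State = {"population": 10_000}
    for phase in CHAIN:
        state = phase(state)  # the end state of one phase seeds the next
    print(state)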

Crisis, Response and Resilience Features: This has already been touched on in the preparatory phase but is also clearly part of large scale model design. What is known (and needs to be known) about the nature of flooding? (For example, one important factor we discovered from looking at a real flood plan was that in locations with dangerous animals, additional problems can be created by these also escaping to unflooded locations (https://www.youtube.com/watch?v=PPpvciP5im8). We would have never worked that out “from the armchair”, meaning it would be left out of a model we would have created.) What policy interventions are considered feasible and how are they supposed to work? (Sometimes the value of modelling is just to show that a plausible sounding intervention doesn’t actually do what you expect.) What aspects of the system are likely to promote (tendency of households to store food) or impede (highly centralised provision of some services) resilience in practice? (And this in turn relates to a good understanding of as many aspects of the pre-crisis state as possible.)

Although a “single goal” model has been used as an example, it would also be a useful thought experiment to consider how the model would need to be different if the aim was the conservation of infrastructure rather than saving lives. When building models really intended for crisis management, however, single issue models are likely to be problematic, since they might show damage in different areas but make no assessment of trade-offs. We saw a recent example of this when epidemiological COVID models focused on COVID deaths but not on deaths caused by postponed operations or on the health impact of the economic costs of interventions – for example, depression and suicide caused by business failure. For an example of attempts at multi-criteria analyses see for example the UK NEA synthesis of key findings (http://uknea.unep-wcmc.org/Resources/tabid/82/Default.aspx), and the IPCC AR6 synthesis for policy makers (https://report.ipcc.ch/ar6syr/pdf/IPCC_AR6_SYR_SPM.pdf).

Model Quality Assurance and “Overheads”

Quality assurance runs right through the development of effective crisis models. Long before you start modelling, it is necessary to have an agreement on what the model should do, and the challenge of ontology is to justify why the model is the way it is, and not some other way, in order to successfully achieve this goal. Here, ABM might benefit from more clearly following the idea of “research design”: a clear research question leading to a specifically chosen method, corresponding data collection and analysis leading to results that “provably” answer the right question. This is clearly very different from the still rather widespread “here’s a model and it does some stuff” approach. But the large scale design for the model should also (feeding into the specifics of implementation) set up standards to decide how the model is performing. In the case of crises rather than everyday repeated behaviours, this may require creative conceptual thinking about, for instance, “testing” the model on past flooding incidents (perhaps building on ideas about retrodiction, see, for example, Krebs and Ernst 2017). At the same time, it is necessary to be aware of the “overheads” of the model: what new data is needed to fill discovered gaps in the ontology and what existing data must continue to be collected to keep the model effective. Finally, attention must be paid to mundane quality control. How do we assure the absence of disastrous programming bugs? How sensitive is the model to specific assumptions, particularly those with limited empirical support? The answers to these questions obviously matter far more when someone is actually using the model for something “real” and where decisions may be taken that affect people’s livelihoods.
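The sensitivity question in particular lends itself to simple automation. The sketch below is entirely hypothetical (run_model stands in for any simulation entry point, and the parameter and output are invented); it shows a one-at-a-time sensitivity check, rerunning the model across seeds while varying one weakly supported assumption:

    # Hypothetical one-at-a-time sensitivity check over a single assumption.
    import random
    import statistics

    def run_model(evacuation_threshold: float, seed: int) -> float:
        """Stand-in for a real simulation; returns e.g. a death toll."""
        rng = random.Random(seed)
        return max(0.0, rng.gauss(100 * evacuation_threshold, 10))

    for threshold in (0.3, 0.5, 0.7):
        outcomes = [run_model(threshold, seed) for seed in range(30)]
        print(f"threshold={threshold}: "
              f"mean={statistics.mean(outcomes):.1f}, "
              f"sd={statistics.stdev(outcomes):.1f}")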

The “Dark Side”

It is also necessary to have a reflexive awareness of ways in which floods are not merely technocratic or philanthropic events. What if the unstated aims of a government in flood control are actually preserving the assets of their political allies? What if a flood model needs to take account of looters and rapists as well as the thirsty and homeless? And, of course, the modellers themselves have to guard against the possibility that models and their assumptions discriminate against the poor, the powerless, or the “socially invisible”. For example, while we have to be realistic about answering the questions that policy makers want answered, we also have to be scientifically critical about what problems they show no interest in.

Conclusion and Next Steps

One way to organise the conclusion of a rather wide-ranging group discussion is to say that the next steps are to make the best use of what already exists and (building on this) to most effectively discover what does not. This could be everything from a decent model of “decision making” during panic to establishing good will from relevant policy makers. At the same time, the activities proposed have to take place within a broad context of academic capabilities and dissemination channels (when people are very busy and have to operate within academic incentive structures). This process can be divided into a number of parts.

  • Getting the most out of models: What good work has been done in flood modelling and on what basis do we call it good? What set of existing model elements can we justify drawing on to build a progressive model? This would be an obvious opportunity for a directed literature review, perhaps building on the recent work of Zhuo and Han (2020).
  • Getting the most out of existing data: What is actually known about flooding that could inform the creation of better models? Do existing models use what is already known? Are there stylised facts that could prune the existing space of candidate models? Can an ABM synthesise interviews, statistics and role playing successfully? How? What appears not to be known? This might also suggest a complementary literature review or “data audit”. This data auditing process may also create specific sub-questions: How much do we know about what happens during a crisis and how do we know it? (For example, rather than asking responders to report when they are busy and in danger, could we make use of offline remote analysis of body cam data somehow?)
  • Getting the most out of the world: This involves combining modelling work with the review of existing data to argue for additional or more consistent data collection. If data matters to the agreed effectiveness of the model, then somehow it has to be collected. This is likely to be carried out through research grants or negotiation with existing data collection agencies and (except in a few areas like experiments) seems to be a relatively neglected aspect of ABM.
  • Getting the most out of policy makers: This is probably the largest unknown quantity. What is the “opening position” of policy makers on models and what steps do we need to take to move them towards a collaborative position if possible? This may have to be as basic as re-education from common misperceptions about the technique (for example that ABM are unavoidably ad hoc.) While this may include more standard academic activities like publishing popular accounts where policy makers are more likely to see them, really the only way to proceed here seems to be to have as many open-minded interactions with as many relevant people as possible to find out what might help the dialogue next.
  • Getting the most out of the population: This overlaps with the other categories. What can the likely actors in a crisis contribute before, during and after the crisis to more effective models? Can there be citizen science to collect data or civil society interventions with modelling justifications? What advantages might there be to discussions that don’t simply occur between academics and central government? This will probably involve the iteration of modelling, science communication and various participatory activities, all of which are already carried out in some areas of ABM.
  • Getting the most out of modellers: One lesson from the COVID crisis is that there is a strong tendency for the ABM community to build many separate (and ultimately non-comparable) models from scratch. We need to think both about how to enforce responsibility for quality where models are actually being used and also whether we can shift modelling culture towards more collaborative and progressive modes (https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/). One way to do this may be precisely to set up a test case on which people can volunteer to work collaboratively to develop this new approach in the hope of demonstrating its effectiveness.

If this piece can get people to combine to make these various next steps happen then it may have served its most useful function!

Acknowledgements

This piece is a result of discussions (both before and after the workshop) by Mike Bithell, Giangiacomo Bravo, Edmund Chattoe-Brown, Corinna Elsenbroich, Aashis Joshi, René Mellema, Mario Paolucci, Harko Verhagen and Thorid Wagenblast. Unless listed as authors above, these participants bear no responsibility for the final form of the written document summarising the discussion! We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such productive enterprises.

References

Epstein, J. M., Pankajakshan, R., and Hammond, R. A. (2011) ‘Combining Computational Fluid Dynamics and Agent-Based Modeling: A New Approach to Evacuation Planning’, PLoS ONE, 6(5), e20139. doi:10.1371/journal.pone.0020139

Gilbert, N., Chattoe-Brown, E., Watts, C., and Robertson, D. (2021) ‘Why We Need More Data before the Next Pandemic’, Sociologica, 15(3), pp. 125-143. doi:10.6092/issn.1971-8853/13221

Gilbert, N. G., and Troitzsch, K. G. (2005) Simulation for the Social Scientist (Buckingham: Open University Press).

Gilligan, J. M., Brady, C., Camp, J. V., Nay, J. J., and Sengupta, P. (2015) ‘Participatory Simulations of Urban Flooding for Learning and Decision Support’, 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, pp. 3174-3175. doi:10.1109/WSC.2015.7408456.

Holling, C. (2001) ‘Understanding the Complexity of Economic, Ecological, and Social Systems’, Ecosystems, 4, pp. 390-405. doi:10.1007/s10021-001-0101-5

Holmes, A. and Pilkington, M. (2011) ‘Storytelling, Floods, Wildflowers and Washlands: Oral History in the River Ouse Project’, Oral History, 39(2), Autumn, pp. 83-94. https://www.jstor.org/stable/41332167

Krebs, F. and Ernst, A. (2017) ‘A Spatially Explicit Agent-Based Model of the Diffusion of Green Electricity: Model Setup and Retrodictive Validation’, in Jager, W., Verbrugge, R., Flache, A., de Roo, G., Hoogduin, L. and Hemelrijk, C. (eds.) Advances in Social Simulation 2015 (Cham: Springer), pp. 217-230. doi:10.1007/978-3-319-47253-9_19

Luus, K. A., Robinson, D. T., and Deadman, P. J. (2013) ‘Representing ecological processes in agent-based models of land use and cover change’, Journal of Land Use Science, 8(2), pp. 175-198. doi:10.1080/1747423X.2011.640357

Mathias, K., Rawat, M., Philip, S. and Grills, N. (2020) ‘“We’ve Got Through Hard Times Before”: Acute Mental Distress and Coping among Disadvantaged Groups During COVID-19 Lockdown in North India: A Qualitative Study’, International Journal for Equity in Health, 19, article 224. doi:10.1186/s12939-020-01345-7

Mehryar, S., Surminski, S., and Edmonds, B. (2021) ‘Participatory Agent-Based Modelling for Flood Risk Insurance’, in Ahrweiler, P. and Neumann, M. (eds) Advances in Social Simulation, ESSA 2019 (Cham: Springer), pp. 263-267. doi:10.1007/978-3-030-61503-1_25

Zhuo, L. and Han, D. (2020) ‘Agent-Based Modelling and Flood Risk Management: A Compendious Literature Review’, Journal of Hydrology, 591, 125600. doi:10.1016/j.jhydrol.2020.125600


Bithell, M., Bravo, G., Chattoe-Brown, E., Mellema, R., Verhagen, H. and Wagenblast, T. (2023) Designing Crisis Models: Report of Workshop Activity and Prospectus for Future Research. Review of Artificial Societies and Social Simulation, 3 May 2023. https://rofasss.org/2023/05/03/designingcrisismodels


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Escaping the modelling crisis

By Emile Chappin

Let me explain something I call the ‘modelling crisis’. It is something that many modellers in one way or another encounter. By being aware we may resolve such a crisis, avoid frustration, and, hopefully, save the world from some bad modelling.

Views on modelling

I first present two views on modelling. Bear with me!

[View 1: Model = world] The first view is that models capture things in the real world pretty well and some models are pretty much representative. And of course this is true. You can add many things to the model, and you may have done so. But if you think along this line, you start seeing the model as if it is the world. At some point you may become rather optimistic about modelling. Well, I really mean to say, you become naive: the model is fabulous. The model can help anyone with any problem only somewhat related to the original idea behind this model. You don’t waste time worrying about the details and sell the model to everyone listening, and you’re quite convinced in the way you do this. You may come to believe that the model is the truth.

[View 2: Model ≠ world] The second view is that the model can never represent the world adequately enough to really predict what is going on. And of course this is true. But if you think along this line, you can get pretty frustrated: the model is never good enough, because factor A is not in there, mechanism B is biased, etc. At one point you may become quite pessimistic about ‘the model’: will it help anyone anytime soon? You may come to the belief that the model is nonsense (and that modelling itself is nonsense).

As a modeller, you may encounter these views in your modelling journey: in how your model is perceived, in how your model is compared to other models, and in the questions you’re asked about your model. And it may be the case that you get stuck in either one of these views yourself. You may not even be aware of it, but you might still behave accordingly.

Possible consequences

Let’s imagine we run a modelling business: we are ambitious and successful! What might happen over time with our business and with our clients?

  • Your clients love your business – Clients can ask us any question and they will get a very precise answer back! Any time we deliver a good result, one that comes true in some sense, we are praised and our reputation grows. Any time we deliver a bad result, one that turns out quite different from what we expected, we can blame the particular circumstances that could not have been foreseen, or argue that the result was basically out of the original scope. Our modesty makes our reputation grow! And it makes us proud!
  • Assets need protection – Over time, our model/business reputation becomes more and more important. You should ask us for any modelling job, because we’ve modelled (this) for decades. Any question goes into our fabulous model that can Answer Any Question In A Minute (AAQIAM). Our models become patchworks because of the questions that were not so easy to fit in. But obviously, as a whole, the model is great. More than great: it is the best! The models are our key assets and they need to be protected. In a board meeting, we decide that we should no longer show the insides of our models. We should keep them secret.
  • Modelling schools – Habits emerge around how our models are used, what kinds of analysis we do, and which we don’t. Core assumptions that we always make with our model are accepted and then forgotten. We get used to those assumptions; we won’t change them anyway, and probably we can’t. It is not really necessary to think about the consequences of those assumptions anyway. We stick to the basics, present the results in a form the client can use, and mention in footnotes how much detail is underneath and that some caution is warranted in interpreting the results. Other modelling schools may also emerge, but they really can’t deliver the precision/breadth of what we have been doing for decades, so they are not relevant. Not really, anyway.
  • Distrusting all models – Another kind of people, typically not our clients, start distrusting the modelling business completely. They get upset in discussions: why worry about discussing the model details when there is always something missing anyway? And it is impossible to quantify anything, really. They decide that it is better to ignore the model geeks completely and just follow their own reasoning. It doesn’t matter that this reasoning can’t be backed up with facts (such as a modelled reality); they don’t believe that it could be done anyway. So the problem is not their reasoning, it is the inability of quantitative science.

Here is the crisis

At this point, people stop debating the crucial elements in our models, and the ambition for model innovation goes out of the window. I would say we end up in a modelling crisis. At some point, decisions have to be made in the real world, and they can be inspired by good modelling, by bad modelling, or by no modelling at all.

The way out of the modelling crisis

How can such a modelling crisis be resolved? First, we need to accept that the model ≠ world, so we don’t necessarily need to predict. We also need to accept that modelling can certainly be useful, for example when it helps us find clear and explicit reasoning to underpin an argument.

  • We should focus more on the problem that we really want to address and argue how modelling can actually contribute to its solution. This should result in better modelling questions, because modelling is a means, not an end. We should stop trying to outsource our thinking to a model.
  • Following from this point, we should be very explicit about the modelling purpose: in what way does the modelling contribute to solving the problem identified earlier? We have to be aware that different kinds of purposes lead to different styles of reasoning and, consequently, to different strengths and weaknesses in the modelling that we do. Consider the differences between prediction, explanation, theoretical exposition, description and illustration as types of modelling purpose (Edmonds 2017); more types are possible.
  • Following this point, we should accept the importance of creativity and of process in modelling. Science is about reasoned, reproducible work. But, paradoxically, good science does not come from a linear, step-by-step approach. Accepting this, modelling can help both in the creative process, by exploring possible ideas and explicating an intuition, and in the justification and underpinning of a very particular line of reasoning. It is important to avoid mixing these perspectives up. The modelling process is as relevant as the model outcome. In the end, the reasoning should stand on its own (also without the model), but you may have needed the model to find it.
  • We should adhere to better modelling practices and develop the tooling to accommodate them. For ABM, many successful developments are ongoing: we should be explicit and transparent about the assumptions we make (e.g. via the ODD protocol, Polhill et al. 2008). We should develop requirements and procedures for modelling studies with respect to how the analysis is performed, even if clients don’t ask for it: validity, robustness of findings, sensitivity of outcomes, analysis of uncertainties (a minimal sketch of such a sensitivity check follows after this list). For some sectors, such requirements have already been developed. The discussion around practices and validation is prominent in ABM, where some ‘issues’ may be considered obvious (see for instance Heath, Hill and Ciarallo 2009, or the efforts through CoMSES), but these questions should be asked of any type of model. In fact, we should share, debate, and work with all the models that are already out there (again, such as through the great efforts of CoMSES), and consider forms of multi-modelling to save time and effort and to benefit from the strengths of different model formalisms.
  • We should start looking for good examples: get inspired and share them. Personally, I like Traffic Basic from the NetLogo Models Library: it does not predict where traffic jams will occur, but it clearly shows the worth of slowing down earlier (a loose re-implementation is sketched below). Another may be the Limits to Growth model, irrespective of its predictive power.
  • We should start doing it better ourselves, so that we show others that it can be done!
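
To make the practices point concrete, here is a minimal sketch in Python of the kind of routine check described above: sweep one uncertain parameter, replicate each setting with different random seeds, and report how stable the headline outcome is. The toy model and all names in it are hypothetical placeholders of my own, not an established tool or protocol.

```python
import random
import statistics

def toy_model(adoption_rate: float, seed: int, steps: int = 100) -> float:
    """A deliberately trivial stand-in for a real ABM: 500 agents each
    adopt a behaviour with a small per-step probability; returns the
    final share of adopters."""
    rng = random.Random(seed)
    agents = [False] * 500
    for _ in range(steps):
        for i, adopted in enumerate(agents):
            if not adopted and rng.random() < adoption_rate / steps:
                agents[i] = True
    return sum(agents) / len(agents)

def sensitivity_sweep(values, replications: int = 20) -> None:
    """Run the model across parameter values and seeds, and report the
    mean and spread of the outcome so robustness is shown, not assumed."""
    for v in values:
        outcomes = [toy_model(v, seed) for seed in range(replications)]
        print(f"adoption_rate={v:.2f}: "
              f"mean={statistics.mean(outcomes):.3f}, "
              f"stdev={statistics.stdev(outcomes):.3f}")

if __name__ == "__main__":
    sensitivity_sweep([0.2, 0.4, 0.6, 0.8])
```

None of this is sophisticated, and that is the point: a sweep with replications is a cheap, routine check that can be demanded of any modelling study, whether or not the client asks for it.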

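And to show how little is needed for a good example: the gist of Traffic Basic fits in a few lines of Python. This is a loose re-implementation under my own assumptions, not the NetLogo library code. Cars on a circular one-lane road speed up gently when the road ahead is clear and brake hard when it is not; stop-and-go waves can then emerge without any bottleneck or accident.

```python
import random

ROAD_LENGTH = 100          # cells on a circular one-lane road
N_CARS = 30
ACCEL, DECEL = 0.05, 0.3   # gentle acceleration, hard braking
SPEED_LIMIT = 1.0

random.seed(42)
positions = sorted(random.sample(range(ROAD_LENGTH), N_CARS))
speeds = [random.uniform(0.1, SPEED_LIMIT) for _ in range(N_CARS)]

def step(positions, speeds):
    """One tick: every car reacts to the car directly ahead on the ring."""
    new_speeds = []
    for i in range(N_CARS):
        leader = (i + 1) % N_CARS
        gap = (positions[leader] - positions[i]) % ROAD_LENGTH
        if gap < 2.0:
            # Too close: brake to below the leader's current speed.
            s = max(speeds[leader] - DECEL, 0.0)
        else:
            # Clear road ahead: speed up towards the limit.
            s = min(speeds[i] + ACCEL, SPEED_LIMIT)
        # Safety guard so cars never overlap in this simplified update.
        new_speeds.append(min(s, max(gap - 0.5, 0.0)))
    new_positions = [(p + s) % ROAD_LENGTH for p, s in zip(positions, new_speeds)]
    return new_positions, new_speeds

for _ in range(500):
    positions, speeds = step(positions, speeds)

# With braking harder than acceleration, stop-and-go waves typically keep
# some cars far below the speed limit, even on an obstacle-free road.
print(f"mean speed: {sum(speeds) / N_CARS:.2f}, slowest car: {min(speeds):.2f}")
```

The numbers are beside the point; the mechanism is the insight: braking harder than you accelerate is enough to produce jams, and that is exactly the kind of understanding a model can deliver without predicting anything.
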
References

Edmonds, B. (2017) ‘Five Modelling Purposes’, Centre for Policy Modelling Discussion Paper CPM-17-238. http://cfpm.org/discussionpapers/192/

Heath, B., Hill, R. and Ciarallo, F. (2009) ‘A Survey of Agent-Based Modeling Practices (January 1998 to July 2008)’, Journal of Artificial Societies and Social Simulation, 12(4), 9. http://jasss.soc.surrey.ac.uk/12/4/9.html

Polhill, J.G., Parker, D., Brown, D. and Grimm, V. (2008) ‘Using the ODD Protocol for Describing Three Agent-Based Social Simulation Models of Land-Use Change’, Journal of Artificial Societies and Social Simulation, 11(2), 3.


Chappin, E.J.L. (2018) Escaping the modelling crisis. Review of Artificial Societies and Social Simulation, 12th October 2018. https://rofasss.org/2018/10/12/ec/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)