By: Mike Bithell1, Giangiacomo Bravo2, Edmund Chattoe-Brown3, René Mellema4, Harko Verhagen5 and Thorid Wagenblast6
- Formerly Department of Geography, University of Cambridge
- Center for Data Intensive Sciences and Applications, Linnaeus University
- School of Media, Communication and Sociology, University of Leicester
- Department of Computing Science, Umeå Universitet
- Department of Computer and Systems Sciences, Stockholm University
- Department of Multi-Actor Systems, Delft University of Technology
Background
This piece arose from a Lorentz Center (Leiden) workshop on Agent Based Simulations for Societal Resilience in Crisis Situations held from 27 February to 3 March 2023 (https://www.lorentzcenter.nl/agent-based-simulations-for-societal-resilience-in-crisis-situations.html). During the week, our group was tasked with discussing requirements for Agent-Based Models (hereafter ABM) that could be useful in a crisis situation. Here we report on our discussion and propose some key challenges for platform support where models deal with such situations.
Introduction
When it comes to crisis situations, modelling can provide insights into which responses are best, how to avoid further negative spillover consequences of policy interventions, and which arrangements could be useful to increase present or future resilience. This approach can be helpful in preparation for a crisis situation, for management during the event itself, or in the post-crisis evaluation of response effectiveness. Further, evaluation of performance in these areas can also lead to subsequent progressive improvement of the models themselves. However, to serve these ends, models need to be built in the most effective way possible. Part of the goal of this piece is to outline what might be needed to make such models effective in various ways and why: reliability, validity, flexibility and so on. Often, diverse models seem to be built ad hoc when the crisis situation occurs, putting the modellers under time pressure, which can lead to important system aspects being neglected (https://www.jasss.org/24/4/reviews/1.html). This is part of a more general tendency, contrary to say the development of climate modelling, to merely proliferate ABM rather than progress them (https://rofasss.org/2021/05/11/systcomp/). Therefore, we propose some guidance about how to make models for crises that may better inform policy makers about the potential effects of the policies under discussion. Furthermore, we draw attention to the fact that modelling may need to be just part of a wider process of crisis response that occurs both before and after the crisis and not just while it is happening.
Crisis and Resilience: A Working Definition
A crisis can be defined as an initial (relatively stable) state that is disrupted in some way (e.g., through a natural disaster such as a flood) and after some time reaches a new relatively stable state, possibly inferior (or rarely superior – as when an earthquake leads to reconstruction of safer housing) to the initial one (see Fig. 1).

Fig. 1: Potential outcomes of a disruption of an initial (stable) state.
While some data about daily life may be routinely collected for the initial state and perhaps as the disruption evolves, it is rarely known how the disruption will affect the initial state and how it will subsequently evolve into the new state. (The non-feasibility of collecting much data during a crisis may also draw attention to methods that can be used more effectively, for example, oral history data – see, for example, Holmes and Pilkington 2011.) ABM can help increase the understanding of those changes by providing justified – i.e. process based – scenarios under different circumstances. Based on this definition, and as justification for it, we can identify several distinct senses of resilience (for a wider theoretical treatment see, for example, Holling 2001). We decided to use the example of flooding because the group did not have much pre-existing expertise and because it seemed like a fairly typical kind of crisis from which to draw potentially generalisable conclusions. However, it should be recognised that not all crises are “known” and building effective resilience capacity for “unknown” crises (like alien invasion) remains an open challenge.
Firstly, a system can be resilient if it is able to return quickly to a desirable state after disruption. For example, a system that allows education and healthcare to become available again in at least their previous forms soon after the water goes down.
Secondly, however, the system is not resilient if it cannot return to anything like its original state (i.e. the society was only functioning at a particular level because it happened that there was no flood in a flood zone), usually owing to resource constraints, poor governance and persistent social inequality. (It is probably only higher income countries that can afford to “build back better” after a crisis. All that low income countries can often do is hope crises do not happen.) This raises the possibility that more should be invested in resilience without immediate payoff to create a state you can actually return to (or, better, one where vulnerability is reduced) rather than a “Fool’s Paradise” state. This would involve comparison of future welfare streams and potential trade-offs under different investment strategies.
Thirdly, and probably more usually, the system can be considered resilient if it can deliver alternative modes of provision (for example of food) during the crisis. People can no longer go shopping when they want but they can be fed effectively at local community centres which they are nonetheless able to reach despite the flood water.
The final insight that we took from these working definitions is that daily routines operate over different time scales and it may be these scales that determine the unfolding nature of different crises. For example, individuals in a flood area must immediately avoid drowning. They will very rapidly need clean water to drink and food to eat. Soon after, they may well have shelter requirements. After that, there may be a need for medical care and only in the rather longer term for things like education, restored housing and community infrastructure.
Thus, an effective response to a crisis is one that is able to provide what is needed over the timescale at which it occurs (initially escape routes or evacuation procedures, then distribution of water and food and so on), taking into account different levels of need. It is an inability to do this (or one goal conflicting with another as when people escape successfully but in a way that means they cannot then be fed) which leads to the various causes of death (and, in the longer term things like impoverishment – so ideally farmers should be able to save at least some of their livestock as well as themselves) like drowning, starvation, death by waterborne diseases and so on. The effects of some aspects of a crisis (like education disruption and “learning loss”, destruction of community life and of mental health or loss of social capital) may be very long term if they cannot be avoided (and there may therefore be a danger of responding mainly to the “most obvious” effects which may not ultimately be the most damaging).
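The working definition above (and the three outcomes of Fig. 1) can be made concrete with a deliberately minimal toy model. Everything here – the single state variable, the shock size, the recovery rate – is a hypothetical placeholder, intended only to make the inferior, restored and superior end states visible:

```python
# Toy illustration of the working definition: a system state that is stable,
# is disrupted at a known time, and then relaxes towards a new (possibly
# different) stable state. All names and values are hypothetical.

def trajectory(shock_size, recovery_rate, new_equilibrium, steps=100, shock_at=20):
    """Return the system state over time for one of the outcomes in Fig. 1."""
    state, states = 1.0, []
    for t in range(steps):
        if t == shock_at:
            state -= shock_size  # the disruption (e.g. a flood)
        elif t > shock_at:
            # simple exponential relaxation towards the new equilibrium
            state += recovery_rate * (new_equilibrium - state)
        states.append(state)
    return states

inferior = trajectory(0.6, 0.1, new_equilibrium=0.8)   # cannot fully recover
restored = trajectory(0.6, 0.1, new_equilibrium=1.0)   # returns to initial state
superior = trajectory(0.6, 0.1, new_equilibrium=1.2)   # "build back better"
```

Real resilience comparisons would of course involve welfare streams and trade-offs, as discussed above; the point of the sketch is only the shape of the trajectories.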
Preparing for the Model
To deal effectively with a crisis, it is crucial not to “just start building an ABM”, but to approach construction in a structured manner. First, the initial state needs to be defined and modelled. As well as making use of existing data (and perhaps identifying the need to collect additional data going forward, see Gilbert et al. 2021), this is likely to involve engaging with stakeholders, including policy makers, to collect information, for example, on decision-making procedures. Ideally, the process will be carried out in advance of the crisis and regularly updated if changes in the represented system occur (https://rofasss.org/2018/08/22/mb/). This idea is similar to a digital twin (https://www.arup.com/perspectives/digital-twin-managing-real-flood-risks-in-a-virtual-world) or the “PetaByte Playbook” suggested by Joshua Epstein (Epstein et al. 2011). Second, as much information as possible about potential disruptions should be gathered. This is the sort of data often revealed by emergency planning exercises (https://www.osha.gov/flood), for example involving flood maps, climate/weather assessments (https://check-for-flooding.service.gov.uk/) or insight into general system vulnerabilities – for example the effects of parts of the road network being underwater – as well as dissections of failed crisis responses in the particular area being modelled and elsewhere (https://www.theguardian.com/environment/2014/feb/02/flooding-winter-defences-environment-climate-change). Third, available documents such as flood plans (https://www.peterborough.gov.uk/council/planning-and-development/flood-and-water-management/water-data) should be checked to get an idea of official crisis response (and also objectives, see below) and thus provide face validity for the proposed model.
It should be recognised that certain groups, often disadvantaged, may be engaging in activities – like work – “under the radar” of official data collection: https://www.nytimes.com/2021/09/27/nyregion/hurricane-ida-aid-undocumented-immigrants.html. Engaging with such communities as well as official bodies is likely to be an important aspect of successful crisis management (e.g. Mathias et al. 2020). The general principle here is to do as much effective work as possible before any crisis starts and to divide what can be done in readiness from what can only be done during or after a crisis.
Scoping the Model
As already suggested above, one thing that can and should be done before the crisis is to scope the model for its intended use. This involves reaching a consensus on who the model and its outputs are for and what it is meant to achieve. There is some tendency in ABM for modellers to assume that whatever model they produce (even if they don’t attend much to a context of data or policy) must be what policy makers and other users need. Besides asking policy makers, this may also require the negotiation of power relationships so that the needs of the model don’t just reflect the interests/perspective of politicians but also those of numerous and important but “politically weak” groups like small scale farmers or local manufacturers. Scoping refers not just to technical matters (Is the code effectively debugged? What evidence can be provided that the policy makers should trust the model?) but also to “softer” preparations like building trust and effective communication with the policy makers themselves. This should probably focus any literature reviewing exercise on flood management using models that are at least to some extent backed by participatory approaches (for example, work like Mehryar et al. 2021 and Gilligan et al. 2015). It would also be useful to find some way to get policy makers to respond effectively to the existing set of models to direct what can most usefully be “rescued” from them in a user context. (The models that modellers like may not be the ones that policy makers find most useful.)
At the same time, participatory approaches face the unavoidable challenge of interfacing with the scientific process. No matter how many experts believe something to be true, the evidence may nonetheless disagree. So another part of the effective collaboration is to make sure that, whatever its aims, the model is still constructed according to an appropriate methodology (for example being designed to answer clear and specific research questions). This aim obliges us to recognise that the relationship between modellers and policy makers may not just involve evidence and argument but also power, so that modellers then have to decide what compromises they are willing to make to maintain a relationship. In the limit, this may involve negotiating the popular perception that policy makers only listen to academics when they confirm decisions that have already been taken for other reasons. But the existence of power also suggests that modelling may not only be effective with current governments (the most “obvious” power source) but also with opposition parties, effective lobbyists, and NGOs, in building bridges to enhance the voice of “the academic community” and so on.
Finally, one important issue may be to consider whether “the model” is a useful response at all. In order to make an effective compromise (or meet various modelling challenges) it might be necessary to design a set of models with different purposes and scales and consider how/whether they should interface. The necessity for such integration in human-environment systems is already widely recognised (see for example Luus et al. 2013) but it may need to be adjusted more precisely to crisis management models. This is also important because it may be counter-productive to reify policy makers and equate them to the activities of the central government. It may be more worthwhile to get emergency responders or regional health planners, NGOs or even local communities interested in the modelling approach in the first instance.
Large Scale Issues of Model Design
Much as with the research process generally, effective modelling has to proceed through a sequence of steps, each one dependent on the quality of the steps before it. Having characterised a crisis (and looked at existing data/modelling efforts) and achieved a workable measure of consensus regarding who the model is for and (broadly) what it needs to do, the next step is to consider large scale issues of model design (as opposed, for example, to specific details of architecture or coding.)
Suppose, for example, that a model was designed to test scenarios to minimise the death toll in the flooding of a particular area so that governments could focus their flood prevention efforts accordingly (build new defences, create evacuation infrastructure, etc.) The sort of large scale issues that would need to be addressed are as follows:
Model Boundaries: Does it make sense just to model the relevant region? Can deaths within the region be clearly distinguished from those outside it (for example people who escape to die subsequently)? Can the costs and benefits of specific interventions similarly be limited to being clearly inside a model region? What about the extent to which assistance must, by its nature, come from outside the affected area? In accordance with general ABM methodology (Gilbert and Troitzsch 2005), the model needs to represent a system with a clearly and coherently specified “inside” and “outside” to work effectively. This is another example of an area where there will have to be a compromise between the sway of policy makers (who may prefer a model that can supposedly do everything) and the value of properly followed scientific method.
Model Scale: This will also inevitably be a compromise between what is desirable in the abstract and what is practical (shaped by technical issues). Can a single model run with enough agents to unfold the consequences of a year after a flood over a whole region? If the aim is to consider only deaths, then does it need to run that long or that widely? Can the model run fast enough (and be altered fast enough) to deliver the answers that policy makers need over the time scale at which they need them? This kind of model practicality, when compared with the “back of an envelope” calculations beloved of policy advisors, is also a strong argument for progressive modelling (where efforts can be combined in one model rather than diffused among many.)
Model Ontology: One advantage of the modelling process is to serve as a checklist for necessary knowledge. For example, we have to assume something about how individuals make decisions when faced with rising water levels. Ontology is about the evidence base for putting particular things in models or modelling in certain ways. For example, on what grounds do we build an ABM rather than a System Dynamics model beyond doing what we prefer? On what grounds are social networks to be included in a model of emergency evacuation (for example that people are known to rescue not just themselves but their friends and kin in real floods)? Based on wider experience of modelling, the problems here are that model ontologies are often non-empirical, that the assumptions of different models contradict each other and so on. It is unlikely that we already have all the data we need to populate these models but we are required for their effectiveness to be honest about the process where we ideally proceed from completely “made up” models to steadily increasing quality/consensus of ontology. This will involve a mixture of exploring existing models, integrating data with modelling and methods for testing reliability, and perhaps drawing on wider ideas (like modularisation where some modellers specialise in justifying cognitive models, others in transport models and so on). Finally, the ontological dimension may have to involve thinking effectively about what it means to interface a hydrological model (say) with a model of human behaviour and how to separate out the challenges of interfacing the best justified model of each kind. This connects to the issue above about how many models we may need to build an effective compromise with the aims of policy makers.
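As a purely illustrative sketch of the interfacing question raised above, the following shows one minimal way a hydrological component and a behavioural component might be coupled through a single agreed interface, here a per-location water depth. Both components, their names and their parameters are hypothetical stand-ins, not recommendations for any particular pairing of models:

```python
# Minimal sketch of coupling a (toy) hydrological model to (toy) behavioural
# agents via one interface: a lookup from location to water depth.

class ToyHydrology:
    """Stand-in for a hydrological model: reports water depth by location."""
    def __init__(self, depths):
        self.depths = depths  # location -> metres of water

    def step(self):
        # A real model would solve flow equations; here depths simply rise.
        self.depths = {loc: d + 0.1 for loc, d in self.depths.items()}

class Household:
    """Stand-in for a behavioural agent: evacuates above a depth threshold."""
    def __init__(self, location, threshold=0.5):
        self.location, self.threshold, self.evacuated = location, threshold, False

    def step(self, depth_at):
        if not self.evacuated and depth_at(self.location) > self.threshold:
            self.evacuated = True

# The only coupling is the depth_at lookup, so either component could be
# replaced by a better justified model as long as the interface is preserved.
hydro = ToyHydrology({"riverside": 0.3, "hilltop": 0.0})
agents = [Household("riverside"), Household("hilltop")]
for _ in range(3):
    hydro.step()
    for a in agents:
        a.step(lambda loc: hydro.depths[loc])
```

The design point is that separating the two models behind a narrow interface allows the "best justified model of each kind" to be swapped in independently.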
It should be noted that these dimensions of large scale design may interact. For example, we may need less fine grained models of regions outside the flooded area to understand the challenges of assistance (perhaps there are infrastructure bottlenecks unrelated to the flooding) and escape (will we be able to account for and support victims of the flood who scatter to friends and relatives in other areas? Might escapees create spillover crises in other regions of a low income country?). Another example of such interactions would be that ecological considerations might not apply to very short term models of evacuation but might be much more important to long term models of economic welfare or environmental sustainability in a region. It is instructive to recall that in Ancient Egypt, it was the absence of Nile flooding that was the disaster!
Technical Issues: One argument in favour of trying to focus on specific challenges (like models of flood crises suitable for policy makers) is that they may help to identify specific challenges to modelling or innovations in technique. For example, if a flooding crisis can be usefully divided into phases (immediate, medium and long term) then we may need sets of models each of which creates starting conditions for the next. We are not currently aware of any attention paid to this “model chaining” problem. Another example is the capacity that workshop participants christened “informability”, the ability of a model to easily and quickly incorporate new data (and perhaps even new behaviours) as a situation unfolds. There is a tendency, not always well justified, for ABM to be “wound up” with fixed behaviours and parameters and just left to run. This is only sometimes a good approximation to the social world.
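Since we are not aware of existing treatments of the "model chaining" problem, the following is only a speculative sketch of the hand-over it would require: each phase model runs on the end state of the previous one, and newly arrived field data ("informability") can be merged in between phases. All phase models and state fields here are hypothetical placeholders:

```python
# Speculative sketch of "model chaining" with "informability": phase models
# run in sequence, each starting from the previous end state, and new data
# can be incorporated between phases as it arrives.

def immediate_phase(state):
    # e.g. an evacuation model: reduces the population at risk
    return dict(state, at_risk=int(state["at_risk"] * 0.2))

def medium_phase(state):
    # e.g. a relief-distribution model: consumes stockpiled supplies
    return dict(state, supplies=state["supplies"] - state["at_risk"])

def long_phase(state):
    # e.g. a reconstruction model: rebuilds housing stock over time
    return dict(state, housing=min(1.0, state["housing"] + 0.5))

def run_chain(initial_state, phases, updates=None):
    """Run phase models in sequence; 'updates' maps a phase index to newly
    observed data merged into the state before that phase runs."""
    state = dict(initial_state)
    for i, phase in enumerate(phases):
        if updates and i in updates:
            state.update(updates[i])  # informability: inject fresh field data
        state = phase(state)
    return state

start = {"at_risk": 1000, "supplies": 5000, "housing": 0.4}
final = run_chain(start, [immediate_phase, medium_phase, long_phase],
                  updates={1: {"at_risk": 300}})  # revised estimate from the field
```

The substantive difficulties – what state must be handed over, and how to revalidate a model whose parameters change mid-run – are of course untouched by a sketch like this; it only fixes the shape of the problem.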
Crisis, Response and Resilience Features: This has already been touched on in the preparatory phase but is also clearly part of large scale model design. What is known (and needs to be known) about the nature of flooding? (For example, one important factor we discovered from looking at a real flood plan was that in locations with dangerous animals, additional problems can be created by these also escaping to unflooded locations (https://www.youtube.com/watch?v=PPpvciP5im8). We would never have worked that out “from the armchair”, meaning it would have been left out of any model we created.) What policy interventions are considered feasible and how are they supposed to work? (Sometimes the value of modelling is just to show that a plausible sounding intervention doesn’t actually do what you expect.) What aspects of the system are likely to promote (tendency of households to store food) or impede (highly centralised provision of some services) resilience in practice? (And this in turn relates to a good understanding of as many aspects of the pre-crisis state as possible.)
Although a “single goal” model has been used as an example, it would also be a useful thought experiment to consider how the model would need to be different if the aim was the conservation of infrastructure rather than saving lives. When building models really intended for crisis management, however, single issue models are likely to be problematic, since they might show damage in different areas but make no assessment of trade-offs. We saw a recent example of this in epidemiological COVID models that focused on COVID deaths but not on deaths caused by postponed operations or on the health impact of the economic costs of interventions – for example depression and suicide caused by business failure. For examples of attempts at multi-criteria analyses see the UK NEA synthesis of key findings (http://uknea.unep-wcmc.org/Resources/tabid/82/Default.aspx) and the IPCC AR6 synthesis for policy makers (https://report.ipcc.ch/ar6syr/pdf/IPCC_AR6_SYR_SPM.pdf).
Model Quality Assurance and “Overheads”
Quality assurance runs right through the development of effective crisis models. Long before you start modelling, it is necessary to have an agreement on what the model should do, and the challenge of ontology is to justify why the model is as it is, and not some other way, in order to achieve this goal successfully. Here, ABM might benefit from more clearly following the idea of “research design”: a clear research question leading to a specifically chosen method, corresponding data collection and analysis leading to results that “provably” answer the right question. This is clearly very different from the still rather widespread “here’s a model and it does some stuff” approach. But the large scale design for the model should also (feeding into the specifics of implementation) set up standards to decide how the model is performing. In the case of crises rather than everyday repeated behaviours, this may require creative conceptual thinking about, for instance, “testing” the model on past flooding incidents (perhaps building on ideas about retrodiction, see for example, Krebs and Ernst 2017). At the same time, it is necessary to be aware of the “overheads” of the model: what new data is needed to fill discovered gaps in the ontology and what existing data must continue to be collected to keep the model effective. Finally, attention must be paid to mundane quality control. How do we assure the absence of disastrous programming bugs? How sensitive is the model to specific assumptions, particularly those with limited empirical support? The answers to these questions obviously matter far more when someone is actually using the model for something “real” and where decisions may be taken that affect people’s livelihoods.
The “Dark Side”
It is also necessary to have a reflexive awareness of ways in which floods are not merely technocratic or philanthropic events. What if the unstated aims of a government in flood control are actually preserving the assets of their political allies? What if a flood model needs to take account of looters and rapists as well as the thirsty and homeless? And, of course, the modellers themselves have to guard against the possibility that models and their assumptions discriminate against the poor, the powerless, or the “socially invisible”. For example, while we have to be realistic about answering the questions that policy makers want answered, we also have to be scientifically critical about what problems they show no interest in.
Conclusion and Next Steps
One way to organise the conclusion of a rather wide-ranging group discussion is to say that the next steps are to make the best use of what already exists and (building on this) to most effectively discover what does not. This could be everything from a decent model of “decision making” during panic to establishing good will from relevant policy makers. At the same time, the activities proposed have to take place within a broad context of academic capabilities and dissemination channels (when people are very busy and have to operate within academic incentive structures). This process can be divided into a number of parts.
- Getting the most out of models: What good work has been done in flood modelling and on what basis do we call it good? What set of existing model elements can we justify drawing on to build a progressive model? This would be an obvious opportunity for a directed literature review, perhaps building on the recent work of Zhuo and Han (2020).
- Getting the most out of existing data: What is actually known about flooding that could inform the creation of better models? Do existing models use what is already known? Are there stylised facts that could prune the existing space of candidate models? Can an ABM synthesise interviews, statistics and role playing successfully? How? What appears not to be known? This might also suggest a complementary literature review or “data audit”. This data auditing process may also create specific sub-questions: How much do we know about what happens during a crisis and how do we know it? (For example, rather than asking responders to report when they are busy and in danger, could we make use of offline remote analysis of body cam data somehow?)
- Getting the most out of the world: This involves combining modelling work with the review of existing data to argue for additional or more consistent data collection. If data matters to the agreed effectiveness of the model, then somehow it has to be collected. This is likely to be carried out through research grants or negotiation with existing data collection agencies and (except in a few areas like experiments) seems to be a relatively neglected aspect of ABM.
- Getting the most out of policy makers: This is probably the largest unknown quantity. What is the “opening position” of policy makers on models and what steps do we need to take to move them towards a collaborative position if possible? This may have to be as basic as re-education from common misperceptions about the technique (for example that ABM are unavoidably ad hoc.) While this may include more standard academic activities like publishing popular accounts where policy makers are more likely to see them, really the only way to proceed here seems to be to have as many open-minded interactions with as many relevant people as possible to find out what might help the dialogue next.
- Getting the most out of the population: This overlaps with the other categories. What can the likely actors in a crisis contribute before, during and after the crisis to more effective models? Can there be citizen science to collect data or civil society interventions with modelling justifications? What advantages might there be to discussions that don’t simply occur between academics and central government? This will probably involve the iteration of modelling, science communication and various participatory activities, all of which are already carried out in some areas of ABM.
- Getting the most out of modellers: One lesson from the COVID crisis is that there is a strong tendency for the ABM community to build many separate (and ultimately non-comparable) models from scratch. We need to think both about how to enforce responsibility for quality where models are actually being used and also whether we can shift modelling culture towards more collaborative and progressive modes (https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/). One way to do this may be precisely to set up a test case on which people can volunteer to work collaboratively to develop this new approach in the hope of demonstrating its effectiveness.
If this piece can get people to combine to make these various next steps happen then it may have served its most useful function!
Acknowledgements
This piece is a result of discussions (both before and after the workshop) by Mike Bithell, Giangiacomo Bravo, Edmund Chattoe-Brown, Corinna Elsenbroich, Aashis Joshi, René Mellema, Mario Paolucci, Harko Verhagen and Thorid Wagenblast. Unless listed as authors above, these participants bear no responsibility for the final form of the written document summarising the discussion! We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such productive enterprises.
References
Epstein, J. M., Pankajakshan, R., and Hammond, R. A. (2011) ‘Combining Computational Fluid Dynamics and Agent-Based Modeling: A New Approach to Evacuation Planning’, PLoS ONE, 6(5), e20139. doi:10.1371/journal.pone.0020139
Gilbert, N., Chattoe-Brown, E., Watts, C., and Robertson, D. (2021) ‘Why We Need More Data before the Next Pandemic’, Sociologica, 15(3), pp. 125-143. doi:10.6092/issn.1971-8853/13221
Gilbert, N. G., and Troitzsch, K. G. (2005) Simulation for the Social Scientist (Buckingham: Open University Press).
Gilligan, J. M., Brady, C., Camp, J. V., Nay, J. J., and Sengupta, P. (2015) ‘Participatory Simulations of Urban Flooding for Learning and Decision Support’, 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, pp. 3174-3175. doi:10.1109/WSC.2015.7408456.
Holling, C. (2001) ‘Understanding the Complexity of Economic, Ecological, and Social Systems’, Ecosystems, 4, pp. 390-405. doi:10.1007/s10021-001-0101-5
Holmes, A. and Pilkington, M. (2011) ‘Storytelling, Floods, Wildflowers and Washlands: Oral History in the River Ouse Project’, Oral History, 39(2), Autumn, pp. 83-94. https://www.jstor.org/stable/41332167
Krebs, F. and Ernst, A. (2017) ‘A Spatially Explicit Agent-Based Model of the Diffusion of Green Electricity: Model Setup and Retrodictive Validation’, in Jager, W., Verbrugge, R., Flache, A., de Roo, G., Hoogduin, L. and Hemelrijk, C. (eds.) Advances in Social Simulation 2015 (Cham: Springer), pp. 217-230. doi:10.1007/978-3-319-47253-9_19
Luus, K. A., Robinson, D. T., and Deadman, P. J. (2013) ‘Representing ecological processes in agent-based models of land use and cover change’, Journal of Land Use Science, 8(2), pp. 175-198. doi:10.1080/1747423X.2011.640357
Mathias, K., Rawat, M., Philip, S. and Grills, N. (2020) ‘“We’ve Got Through Hard Times Before”: Acute Mental Distress and Coping among Disadvantaged Groups During COVID-19 Lockdown in North India: A Qualitative Study’, International Journal for Equity in Health, 19, article 224. doi:10.1186/s12939-020-01345-7
Mehryar, S., Surminski, S., and Edmonds, B. (2021) ‘Participatory Agent-Based Modelling for Flood Risk Insurance’, in Ahrweiler, P. and Neumann, M. (eds) Advances in Social Simulation, ESSA 2019 (Cham: Springer), pp. 263-267. doi:10.1007/978-3-030-61503-1_25
Zhuo, L. and Han, D. (2020) ‘Agent-Based Modelling and Flood Risk Management: A Compendious Literature Review’, Journal of Hydrology, 591, 125600. doi:10.1016/j.jhydrol.2020.125600
Bithell, M., Bravo, G., Chattoe-Brown, E., Mellema, R., Verhagen, H. and Wagenblast, T. (2023) Designing Crisis Models: Report of Workshop Activity and Prospectus for Future Research. Review of Artificial Societies and Social Simulation, 3 May 2023. https://rofasss.org/2023/05/03/designingcrisismodels
© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)