
Exascale computing and ‘next generation’ agent-based modelling

By Gary Polhill, Alison Heppenstall, Michael Batty, Doug Salt, Ricardo Colasanti, Richard Milton and Matt Hare

Introduction

In the past decade we have seen considerable gains in the amount of data and computational power available to us as scientific researchers. Whilst the proliferation of new forms of data can present as many challenges as opportunities (linking data sets, checking veracity etc.), we can now begin to construct models capable of answering ever more complex and interrelated questions. For example, what happens to individual health and the local economy if we pedestrianize a city centre? What is the impact of increasing travel costs on the price of housing? How can we divert economic investment from prosperous cities and regions to places in economic decline? These advances are slowly positioning agent-based modelling to support decision-makers in making informed, evidence-based decisions. However, agent-based models (ABMs) are still rarely used outside of academia, and policy makers find it difficult to mobilise and apply such tools to inform real-world problems. Here we explore the background in computing that helps address the question of why such models are so underutilised in practice.

Whilst reaching a level of maturity (defined as being an accepted tool) within the social sciences, agent-based modelling still has several methodological barriers to cross. These were first highlighted by Crooks et al. (2008) and revisited by Heppenstall et al. (2020), and include robust validation, elicitation of behaviour from data, and scaling up. Whilst other disciplines, such as meteorology, are able to conduct large numbers of simulations (ensemble modelling) using high-performance computing, this capability is largely absent from agent-based modelling. Moreover, many different kinds of agent-based models are being devised; key issues concern the number and type of agents, and these are reflected in the whole computational context in which such models are developed. Clearly there is potential for agent-based modelling to establish itself as a robust policy tool, but this requires access to large-scale computing.

Exascale high-performance computing is defined with respect to speed of calculation: 10^18 (a billion billion) floating-point operations per second (flops). That is fast enough to calculate the ratios of the ages of every possible pair of people in China in roughly a second. By comparison, modern-day personal computers run at around 10^9 flops (gigascale) – a billion times slower. The same rather pointless calculation of age ratios of the Chinese would take just over thirty years on a standard laptop at the time of writing (2023). Though agent-based modellers are more interested in the number of agent-rule instructions executed per second than in floating-point operations, the two speeds are approximately the same.
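The back-of-the-envelope arithmetic behind these figures can be checked in a few lines (the population figure of 1.4 billion is an assumption, and we count one floating-point operation per age ratio):

```python
# Rough check of the exascale vs gigascale comparison in the text.
population = 1.4e9                            # approximate population of China (assumption)
pairs = population * (population - 1) / 2     # every possible pair of people, ~10^18

exascale_flops = 1e18                         # exascale: 10^18 flops
gigascale_flops = 1e9                         # a modern personal computer: ~10^9 flops

seconds_exa = pairs / exascale_flops          # one ratio per flop
seconds_giga = pairs / gigascale_flops
years_giga = seconds_giga / (365.25 * 24 * 3600)

print(f"Exascale:  ~{seconds_exa:.1f} seconds")   # roughly a second
print(f"Gigascale: ~{years_giga:.0f} years")      # just over thirty years
```

The pair count of roughly 10^18 is why the example sits neatly at the boundary of what an exascale machine can do per second.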

Anecdotally, the majority of agent-based model simulations run on desktop personal computers. However, there are examples of the use of high-performance computing environments such as computing clusters (terascale) and cloud services such as Microsoft’s Azure, Amazon’s AWS or Google Cloud (tera- to petascale). High-performance computing provides the capacity to do more of what we already do (more runs for calibration, validation and sensitivity analysis) and/or to do it at a larger scale (regional or sub-national rather than local), with the number of agents scaled accordingly. As a rough guide, however, since terascale computing is a million times slower than exascale computing, an experiment that currently takes a few days or weeks in a high-performance computing environment could be completed in a fraction of a second at exascale.

We are all familiar with poor user interface design in everyday computing, and in particular the frustration of waiting for hourglasses, spinning wheels and progress bars to finish so that we can get on with our work. In fact, the ‘Doherty Threshold’ (Yablonski 2020) stipulates a response time of no more than 400 ms between human action and computer response for best productivity. If going from 10^9 to 10^18 flops simply multiplies the speed of computation by a billion, the Doherty threshold becomes potentially feasible with exascale computing even for simulation experiments that currently require very long wait times to complete.

The scale of performance of exascale computers means there is scope to go beyond doing-more-of-what-we-already-do and to think more deeply about what we could achieve with agent-based modelling. Could we move past some of the methodological barriers that are characteristic of agent-based modelling? What could we achieve if we had appropriate software support, and how would this affect the processes and practices by which agent-based models are built? Could we move agent-based models to having the same level of ‘robustness’ as climate models, for example? We can conceive of a productivity loop in which an empirical agent-based model is used for sequential experimentation with continual adaptation and change, with perhaps a new model emerging from these workflows to explore tangential issues. But to get there we need tools that help us build empirical agent-based models much more rapidly and, critically, that help us find, access and preprocess the empirical data the model will use for initialisation, and then find and confirm parameter values.

The ExAMPLER project

The ExAMPLER (Exascale Agent-based Modelling for PoLicy Evaluation in Real-time) project is an eighteen-month project funded by the Engineering and Physical Sciences Research Council to explore the software, data and institutional requirements to support agent-based modelling at exascale.

With high-performance computing use not being commonplace in the agent-based modelling community, we are interested in finding out what the state-of-the-art is in high-performance computing use by agent-based modellers, undertaking a systematic literature review to assess the community’s ‘exascale-readiness’. This is not just a question of whether the community has the necessary technical skills to use the equipment. It also covers whether the hardware is appropriate to the computational demands that agent-based modellers have, whether the software in which agent-based models are built can take advantage of the hardware, and whether the institutional processes by which agent-based modellers access high-performance computing – especially with respect to information requested of applicants – are aware of their needs.

We will then benchmark the state-of-the-art against high-performance computing use in other domains of research: ecology and microsimulation, which are comparable to agent-based social simulation (ABSS); and fields such as transportation, land use and urban econometric modelling that are not directly comparable to ABSS, but have similar computational challenges (e.g. having to simulate many interactions, needing to explore a vast uncharted parameter space, containing multiple qualitatively different outcomes from the same initial conditions, and so on). Ecology might not simulate agents with decision-making algorithms as computationally demanding as some of those used by agent-based modellers of social systems, while a crude characterisation of microsimulation work is that it does not simulate interactions among heterogeneous agents, which affects the parallelisation of simulating them. Land use and transport models usually rely on aggregates of agents, but they are increasingly being disaggregated into finer and finer spatial units, with these units themselves being treated more like agents. A third discipline (to be decided) might have a community with generally higher technical computing skills than would be expected among social scientists. Benchmarking will allow us to gain better insights into the specific barriers faced by social scientists in accessing high-performance computing.

Two other strands of work in ExAMPLER feature significant engagement with the agent-based modelling community. The project’s imaginary starting point is a computer powerful enough to experiment with an agent-based model that runs in fractions of a second. With a pre-existing agent-based model, we could use such a computer in a one-day workshop to enable a creative discussion with decision-makers about how to handle problems and policies associated with an emerging crisis. But what if we had the tools at our disposal to gather and preprocess data and build models such that these activities could also be achievable in the same day? Or even the same hour? Some of our land use and transportation models are already moving in this direction (Horni, Nagel and Axhausen, 2016). Agent-based modelling would thus become a social activity that facilitates discussion and decision-making that is mindful of complexity and cascading consequences. The practices and procedures associated with building an agent-based model would then have evolved significantly from what they are now, as would the institutions built around accessing and using high-performance computing.

The first strand of work co-constructs with the agent-based modelling community various scenarios by which agent-based modelling is transformed by the dramatic improvements in computational power that exascale computing entails. These visions will be co-constructed primarily through workshops, the first of which is being held at the Social Simulation Conference in Glasgow – a conference that is well-attended by the European (and wider international) agent-based social simulation community. However, we will also issue a questionnaire to elicit views from the wider community of those who cannot attend one of our events. There are two purposes to these exercises: to understand the requirements of the community and their visions for the future, but also to advertise the benefits that exascale computing could have.

In a second series of workshops, we will develop a roadmap for exascale agent-based modelling that identifies the institutional, scientific and infrastructure support needed to achieve the envisioned exascale agent-based modelling use-cases. In essence, what do we need to have in place to make exascale a reality for the everyday agent-based modeller? This activity is underpinned by training ExAMPLER’s research team in the hardware, software and algorithms that can be used to achieve exascale computation more widely. That knowledge, together with the review of the state-of-the-art in high-performance computing use with agent-based models, can be used to identify early opportunities for the community to make significant gains (Macal and North, 2008).

Discussion

Exascale agent-based modelling is not simply a case of providing agent-based modellers with usernames and passwords on an exascale computer and letting them run their models on it. There are many institutional, scientific and infrastructural barriers that need to be addressed.

On the scientific side, exascale agent-based modelling could be revolutionary in transforming the practices, methods and audiences for agent-based modelling. As a highly diverse community, methodological development is challenged both by the lack of opportunity to make it happen, and by the sheer range of agent-based modelling applications. Too much standardization and ritualized behaviour associated with ‘disciplining’ agent-based modelling risks losing some of the creative benefits of the cross-disciplinary discussions that agent-based modelling enables us to have. Nevertheless, it is increasingly clear that off-the-shelf methods for designing, implementing and assessing models are ill-suited to agent-based modelling, or – especially in the case of the last of these – fail to do it justice (Polhill and Salt 2017, Polhill et al. 2019). Scientific advancement in agent-based modelling is predicated on having the tools at our disposal to tell the whole story of its benefits, and on enabling non-agent-based-modelling colleagues to understand how to work with the ABM community.

Hence, hardware is only a small part of the story of the infrastructure supporting exascale agent-based modelling. Exascale computers are built using GPUs (Graphical Processing Units) – which, bluntly speaking, are specialized computing engines for performing matrix calculations and ‘drawing millions of triangles as quickly as possible’ – and are, in any case, different from CPU-based computing. In Table 4 of Kravari and Bassiliades’ (2015) survey of agent-based modelling platforms, only two of the 24 platforms reviewed (Cormas – Bommel et al. 2016 – and GAMA – Taillandier et al. 2019) are not listed as involving Java and/or the Java Virtual Machine. (As it turns out, GAMA does use Java.) TornadoVM (Papadimitriou et al. 2019) is one tool allowing Java Virtual Machines to run on GPUs. But even if we can run NetLogo on a GPU, specialist GPU-based agent-based modelling platforms such as Richmond et al.’s (2010, 2022) FLAME GPU may be preferable in order to make best use of the highly parallelized computing environment GPUs offer.
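To illustrate why GPU-oriented platforms differ from the usual Java object-per-agent style, here is a minimal sketch of the data-parallel pattern such platforms encourage, using NumPy on the CPU as a stand-in (this is not FLAME GPU’s actual API, and the wealth-exchange rule is purely illustrative):

```python
import numpy as np

# Data-parallel agent update: instead of looping over agent objects,
# all agents' state lives in arrays, and one update step is a single
# vectorised operation over the whole population -- the shape a GPU
# kernel would take.
rng = np.random.default_rng(42)
n_agents = 1_000_000

wealth = rng.uniform(0, 100, n_agents)            # one state variable per agent
neighbour = rng.integers(0, n_agents, n_agents)   # a random interaction partner each

# One synchronous step: every agent moves 1% of the way towards its
# partner's wealth, all at once, with no per-agent loop.
wealth = wealth + 0.01 * (wealth[neighbour] - wealth)

print(f"mean wealth after one step: {wealth.mean():.2f}")
```

The design point is that each agent’s update reads only array elements and writes its own slot, which is what makes the step trivially parallelizable across thousands of GPU threads; rules with richer inter-agent dependencies are precisely where platforms like FLAME GPU earn their keep.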

Such software achieves nothing more than getting an agent-based model running on an exascale computer. Realizing some of the visions of future exascale-enabled agent-based modelling means rather more in the way of software support. For example, the one-day workshop in which an agent-based model is co-constructed with stakeholders asks a great deal either of the developers, who must build a bespoke application in tens of minutes, or of the stakeholders, who must trust pre-constructed modular components that can be brought together rapidly using a specialist software tool.

As has been noted (e.g. Alessa et al. 2006, para 3.4), agent-based modelling is already challenging for social scientists without programming expertise, and GPU programming is a highly specialized domain in the world of software environments. Exascale computing intersects GPU programming with high-performance computing; issues with the ways in which high-performance computing clusters are typically administered already make access to them a significant obstacle for agent-based modellers (Polhill 2022). There are therefore institutional barriers that need to be broken down for the benefits of exascale agent-based modelling to be realized in a community primarily interested in the dynamics of social and/or ecological complexity, and rather less in the technology that enables them to pursue that interest. ExAMPLER aims to provide a voice that gets our requirements heard so that we are not excluded from taking best advantage of advanced developments in computing hardware.

Acknowledgements

The ExAMPLER project is funded by the EPSRC under grant number EP/Y008839/1.  Further information is available at: https://exascale.hutton.ac.uk

References

Alessa, L. N., Laituri, M. and Barton, M. (2006) An “all hands” call to the social science community: Establishing a community framework for complexity modeling using cyberinfrastructure. Journal of Artificial Societies and Social Simulation 9 (4), 6. https://www.jasss.org/9/4/6.html

Bommel, P., Becu, N., Le Page, C. and Bousquet, F. (2016) Cormas: An agent-based simulation platform for coupling human decisions with computerized dynamics. In Kaneda, T., Kanegae, H., Toyoda, Y. and Rizzi, P. (eds.) Simulation and Gaming in the Network Society. Translational Systems Sciences 9, pp. 387-410. doi:10.1007/978-981-10-0575-6_27

Crooks, A. T., Castle, C. J. E. and Batty, M. (2008) Key challenges in agent-based modelling for geo-spatial simulation. Computers, Environment and Urban Systems 32(6), 417-430.

Heppenstall, A., Crooks, A., Malleson, N., Manley, E., Ge, J. and Batty, M. (2020) Future developments in geographical agent-based models: challenges and opportunities. Geographical Analysis 53(1), 76-91. doi:10.1111/gean.12267

Horni, A., Nagel, K. and Axhausen, K. W. (eds.) (2016) The Multi-Agent Transport Simulation MATSim. Ubiquity Press, London, 447-450.

Kravari, K. and Bassiliades, N. (2015) A survey of agent platforms. Journal of Artificial Societies and Social Simulation 18 (1), 11. https://www.jasss.org/18/1/11.html

Macal, C. M. and North, M. J. (2008) Agent-based modeling and simulation for exascale computing. http://www.scidac.org

Papadimitriou, M., Fumero, J., Stratikopoulos, A. and Kotselidis, C. (2019) Towards prototyping and acceleration of Java programs onto Intel FPGAs. Proceedings of the 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). doi:10.1109/FCCM.2019.00051

Polhill, G. (2022) Antisocial simulation: using shared high-performance computing clusters to run agent-based models. Review of Artificial Societies and Social Simulation, 14 Dec 2022. https://rofasss.org/2022/12/14/antisoc-sim

Polhill, G. and Salt, D. (2017) The importance of ontological structure: why validation by ‘fit-to-data’ is insufficient. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity (2nd edition), pp. 141-172. doi:10.1007/978-3-319-66948-9_8

Polhill, J. G., Ge, J., Hare, M. P., Matthews, K. B., Gimona, A., Salt, D. and Yeluripati, J. (2019) Crossing the chasm: a ‘tube-map’ for agent-based simulation of policy scenarios in spatially-distributed systems. Geoinformatica 23, 169-199. doi:10.1007/s10707-018-00340-z

Richmond, P., Chisholm, R., Heywood, P., Leach, M. and Kabiri Chimeh, M. (2022) FLAME GPU (2.0.0-rc). Zenodo. doi:10.5281/zenodo.5428984

Richmond, P., Walker, D., Coakley, S. and Romano, D. (2010) High performance cellular level agent-based simulation with FLAME for the GPU. Briefings in Bioinformatics 11 (3), 334-347. doi:10.1093/bib/bbp073

Taillandier, P., Gaudou, B., Grignard, A., Huynh, Q.-N., Marilleau, N., Caillou, P., Philippon, D. and Drogoul, A. (2019) Building, composing and experimenting complex spatial models with the GAMA platform. Geoinformatica 23(2), 299-322. doi:10.1007/s10707-018-00339-6

Yablonski, J. (2020) Laws of UX. O’Reilly. https://www.oreilly.com/library/view/laws-of-ux/9781492055303/


Polhill, G., Heppenstall, A., Batty, M., Salt, D., Colasanti, R., Milton, R. and Hare, M. (2023) Exascale computing and ‘next generation’ agent-based modelling. Review of Artificial Societies and Social Simulation, 29 Sep 2023. https://rofasss.org/2023/09/29/exascale-computing-and-next-gen-ABM


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

An Institute for Crisis Modelling (ICM) – Towards a resilience center for sustained crisis modeling capability

By Fabian Lorig1*, Bart de Bruin2, Melania Borit3, Frank Dignum4, Bruce Edmonds5, Sinéad M. Madden6, Mario Paolucci7, Nicolas Payette8, Loïs Vanhée4

*Corresponding author
1 Internet of Things and People Research Center, Malmö University, Sweden
2 Delft University of Technology, Netherlands
3 CRAFT Lab, Arctic University of Norway, Tromsø, Norway
4 Department of Computing Science, Umeå University, Sweden
5 Centre for Policy Modelling, Manchester Metropolitan University Business School, UK
6 School of Engineering, University of Limerick, Ireland
7 Laboratory of Agent Based Social Simulation, ISTC/CNR, Italy
8 Complex Human-Environmental Systems Simulation Laboratory, University of Oxford, UK

The Need for an ICM

Most crises and disasters occur suddenly and hit society while it is unprepared. This makes it particularly challenging to react quickly to their occurrence, to adapt to the resulting new situation, to minimize the societal impact, and to recover from the disturbance. A recent example was the Covid-19 crisis, which revealed weak points in our crisis preparedness. Governments tried to put restrictions in place to limit the spread of the virus while ensuring the well-being of the population and, at the same time, preserving economic stability. It quickly became clear that interventions which worked well in some countries did not seem to have the intended effect in others, and the reason for this is that the success of interventions depends to a great extent on individual human behavior.

Agent-based Social Simulations (ABSS) explicitly model the behavior of individuals and their interactions in a population, and allow us to better understand social phenomena. Thus, ABSS are well suited for investigating how our society might be affected by different crisis scenarios and how policies might affect the societal impact and consequences of these disturbances. During the Covid-19 crisis in particular, a great number of ABSS were developed to inform policy making around the globe (e.g., Dignum et al. 2020, Blakely et al. 2021, Lorig et al. 2021). However, weaknesses in creating useful and explainable simulations in a short time also became apparent, and there is still much to be done to be better prepared for the next crisis (Squazzoni et al. 2020). In particular, ABSS development approaches are, at this moment, geared towards simulating one particular situation and validating the simulation using data from that situation. To be prepared for a crisis, one instead needs to simulate many different scenarios for which data might not yet be available. Such simulations also typically need a more interactive interface where stakeholders can experiment with different settings, policies, etc.
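The core idea of such simulations can be illustrated with a deliberately minimal sketch (a toy, not any of the cited Covid-19 models): each agent carries a state, and infection spreads through random pairwise contacts. Real ABSS used for policy add behavioural rules, interventions, and calibrated data on top of this skeleton; all parameter values here are illustrative assumptions.

```python
import random

# Toy agent-based epidemic: agents are Susceptible, Infected, or Recovered.
random.seed(1)
N = 1000
states = ["S"] * N
for i in random.sample(range(N), 5):
    states[i] = "I"                      # seed a few initial infections

P_INFECT = 0.05   # chance of transmission per contact (assumption)
P_RECOVER = 0.02  # chance of recovery per day (assumption)
CONTACTS = 10     # random contacts per infected agent per day (assumption)

for day in range(200):
    for i in range(N):
        if states[i] == "I":
            # Each infected agent meets some random others...
            for j in random.sample(range(N), CONTACTS):
                if states[j] == "S" and random.random() < P_INFECT:
                    states[j] = "I"
            # ...and may recover at the end of the day.
            if random.random() < P_RECOVER:
                states[i] = "R"

print(states.count("R"), "recovered;", states.count("S"), "never infected")
```

Even this toy shows why individual-level modelling matters: changing CONTACTS or P_INFECT (i.e., a behavioural intervention) changes the outbreak's trajectory in ways an aggregate equation would have to assume rather than derive.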

For ABSS to become an established, reliable, and well-esteemed method for supporting crisis management, we need to organize and consolidate the available competences and resources. It is not sufficient to react once a crisis occurs but instead, we need to proactively make sure that we are prepared for future disturbances and disasters. For this purpose, we also need to systematically address more fundamental problems of ABSS as a method of inquiry and particularly consider the specific requirements for the use of ABSS to support policy making, which may differ from the use of ABSS in academic research. We therefore see the need for establishing an Institute for Crisis Modelling (ICM), a resilience center to ensure sustained crisis modeling capability.

The vision of starting an Institute for Crisis Modelling was the result of the discussions and working groups at the Lorentz Center workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” that took place in Leiden, Netherlands from 27 February to 3 March 2023**.

Vision of the ICM

“To have tools suitable to support policy actors in situations of great
uncertainty and large consequences that depend on human behavior.”

The ICM consists of a taskforce for quickly and efficiently supporting policy actors (e.g., decision makers, policy makers, policy analysts) in situations of great uncertainty and large consequences that depend on human behavior. For this purpose, the taskforce comprises a larger (informal) network of associates who contribute their knowledge, skills, models, tools, and networks. The group of associates is composed of a core group of multidisciplinary modeling experts (ranging from social scientists and formal modelers to programmers) as well as partners who can contribute to specific focus areas (such as epidemiology, water management, etc.). The vision of the ICM is to consolidate and institutionalize the use of ABSS as a method for crisis management. Although ABSS competences may be physically distributed over a variety of universities, research centers, and other institutions, the ICM serves as a virtual location that coordinates research developments and provides a basic level of funding and a communication channel for ABSS for crisis management. This not only provides policy actors with a single point of contact, making it easier for them to identify whom to reach when simulation expertise is needed and to develop long-term trust relationships; it also enables us to jointly and systematically evolve ABSS into a valuable and established tool for crisis response. The center combines all necessary resources, competences, and tools to quickly develop new models, to adapt existing models, and to efficiently react to new situations.

To achieve this goal and to evolve and establish ABSS as a valuable tool for policy makers in crisis situations, research is needed in different areas. This includes the collection, development, critical analysis, and review of fundamental principles, theories, methods, and tools used in agent-based modeling. This also includes research on data handling (analysis, sharing, access, protection, visualization), data repositories, ontologies, user-interfaces, methodologies, documentation, and ethical principles. Some of these points are concisely described in (Dignum, 2021, Ch. 14 and 15).

The ICM shall be able to provide a wide portfolio of models, methods, techniques, design patterns, and components required to quickly and effectively facilitate the work of policy actors in crisis situations by providing them with adequate simulation models. For the purpose of being able to provide specialized support, the institute will coordinate the human effort (e.g., the modelers) and have specific focus areas for which expertise and models are available. This might be, for instance, pandemics, natural disasters, or financial crises. For each of these focus areas, the center will develop different use cases, which ensures and facilitates rapid responses due to the availability of models, knowledge, and networks.

Objectives of the ICM

To achieve this vision, there are a series of objectives that a resilience center for sustained crisis modeling capability needs to address:

1) Coordinate and promote research

Providing quick and appropriate support for policy actors in crisis situations requires not only profound knowledge of existing models, methods, tools, and theories, but also the systematic development of new approaches and methodologies. This will advance and evolve ABSS so that we are better prepared for future crises, and will serve as a beacon for organizing ABSS research oriented towards practical applications.

2) Enable trusted connections with policy actors

Sustainable collaborations and interactions with decision-makers, policy analysts, and other relevant stakeholders are a great challenge in ABSS. Getting in contact with the right actors, “speaking the same language”, and having realistic expectations are only some of the common problems that need to be addressed. Thus, the ICM should not only connect to policy actors in times of crisis, but have continuous interactions, provide sample simulations, develop use cases, and train the policy actors wherever possible.

3) Enable sustainability of the institute itself

Classic funding schemes are unfit for responding to crises, which demand fast responses with always-available resources; moreover, the continuous build-up of knowledge, skills, networks, and technology requires long-term commitment. Sustainable funding is needed to enable such continuity, for which the ICM provides a demarcated, unifying frame.

4) Actively maintain the network of associates

Maintaining a network of experts is challenging because it requires different competences and experiences. PhD candidates, for instance, might have great practical experience in using different simulation frameworks; however, after their graduation, some might leave academia and others might move on to positions where they do not have the opportunity to use their simulation expertise. Thus, new experts need to be acquired continuously to form a resilient and balanced network.

5) Inform policy actors

Even the most advanced and profound models cannot do any good in crisis situations if there is no demand from policy actors. Many modelers perceive a certain hesitation from policy actors regarding the use of ABSS, which might be due to their being unfamiliar with the potential benefits and use-cases of ABSS, lacking trust in the method itself, or simply being unaware that ABSS exists. Hence, the center needs to educate policy makers, raise awareness, and improve trust in ABSS.

6) Train the next generation of experts

To quickly develop suitable ABSS models in critical situations requires a variety of expertise. In addition to objective 4, the acquisition of associates, it is also of great importance to educate and train the next generation of experts. ABSS research is still a niche and not taught as an inherent part of the spectrum of methods of most disciplines. The center shall promote and strengthen ABSS education to ensure the training of the next generation of experts.

7) Engage the general public

Finally, the success of ABSS does not only depend on the trust of policy actors but also on how it is perceived by the general public. When interventions were developed and recommendations given during the Covid-19 crisis, trust in the method was a crucial success factor. Also, developing realistic models requires the active participation of the general public.

Next steps

For ABSS to become a valuable and established tool for supporting policy actors in crisis situations, we are convinced that our efforts need to be institutionalized. This allows us to consolidate available competences, models, and tools as well as to coordinate research endeavors and the development of new approaches required to ensure a sustained crisis modeling capability.

To further pursue this vision, a Special Interest Group (SIG) on Building ResilienCe with Social Simulations (BRICSS) was established at the European Social Simulation Association (ESSA). Moreover, Special Tracks will be organized at the 2023 Social Simulation Conference (SSC) to bring together interested experts.

However, for this vision to become reality, the next steps towards establishing an Institute for Crisis Modelling consist of bringing together ambitious and competent associates as well as identifying core funding opportunities for the center. If the readers feel motivated to contribute in any way to this topic, they are encouraged to contact Frank Dignum, Umeå University, Sweden or any of the authors of this article.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations”, held in Leiden, NL earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center, as funders and hosts, for such a productive enterprise. The final report of the workshop, as well as more information, can be found on the webpage of the Lorentz Center: https://www.lorentzcenter.nl/agent-based-simulations-for-societal-resilience-in-crisis-situations.html

References

Blakely, T., Thompson, J., Bablani, L., Andersen, P., Ouakrim, D. A., Carvalho, N., Abraham, P., Boujaoude, M. A., Katar, A., Akpan, E., Wilson, N. and Stevenson, M. (2021) Determining the optimal COVID-19 policy response using agent-based modelling linked to health and cost modelling: case study for Victoria, Australia. medRxiv, 2021-01.

Dignum, F., Dignum, V., Davidsson, P., Ghorbani, A., van der Hurk, M., Jensen, M., Kammler C., Lorig, F., Ludescher, L.G., Melchior, A., Mellema, R., Pastrav, C., Vanhee, L. & Verhagen, H. (2020). Analysing the combined health, social and economic impacts of the coronavirus pandemic using agent-based social simulation. Minds and Machines, 30, 177-194. doi: 10.1007/s11023-020-09527-6

Dignum, F. (ed.). (2021) Social Simulation for a Crisis; Results and Lessons from Simulating the COVID-19 Crisis. Springer.

Lorig, F., Johansson, E. and Davidsson, P. (2021) Agent-based social simulation of the Covid-19 pandemic: a systematic review. Journal of Artificial Societies and Social Simulation 24(3), 5. http://jasss.soc.surrey.ac.uk/24/3/5.html doi:10.18564/jasss.4601

Squazzoni, F. et al. (2020) ‘Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action‘ Journal of Artificial Societies and Social Simulation 23(2), 10. http://jasss.soc.surrey.ac.uk/23/2/10.html. doi: 10.18564/jasss.4298


Lorig, F., de Bruin, B., Borit, M., Dignum, F., Edmonds, B., Madden, S. M., Paolucci, M., Payette, N. and Vanhée, L. (2023) An Institute for Crisis Modelling (ICM) – Towards a resilience center for sustained crisis modeling capability. Review of Artificial Societies and Social Simulation, 22 May 2023. https://rofasss.org/2023/05/22/icm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)