
A Tale of Three Pandemic Models: Lessons Learned for Engagement with Policy Makers Before, During, and After a Crisis

By Emil Johansson1,2, Vittorio Nespeca3, Mikhail Sirenko4, Mijke van den Hurk5, Jason Thompson6, Kavin Narasimhan7, Michael Belfrage1, 2, Francesca Giardini8, and Alexander Melchior5,9

  1. Department of Computer Science and Media Technology, Malmö University, Sweden
  2. Internet of Things and People Research Center, Malmö University, Sweden
  3. Computational Science Lab, University of Amsterdam, The Netherlands
  4. Faculty of Technology, Policy and Management, Delft University of Technology, The Netherlands
  5. Department of Information and Computing Sciences, Utrecht University, The Netherlands
  6. Transport, Health and Urban Design Research Lab, The University of Melbourne, Australia
  7. Centre for Research in Social Simulation, University of Surrey, United Kingdom
  8. Department of Sociology & Agricola School for Sustainable Development, University of Groningen, The Netherlands
  9. Ministry of Economic Affairs and Climate Policy and Ministry of Agriculture, Nature and Food Quality, The Netherlands

Motivation

Pervasive and interconnected crises such as the COVID-19 pandemic, global energy shortages, geopolitical conflicts, and climate change have shown how a stronger collaboration between science, policy, and crisis management is essential to foster societal resilience. As modellers and computational social scientists we want to help. Several cases of model-based policy support have shown the potential of using modelling and simulation as tools to prepare for, learn from (Adam and Gaudou, 2017), and respond to crises (Badham et al., 2021). At the same time, engaging with policy-makers to establish effective crisis-management solutions remains a challenge for many modellers due to a lack of forums that promote and help develop sustained science-policy collaborations. Equally challenging is finding ways to provide effective solutions under changing circumstances, as is often the case with crises.

Although guidance exists on how modellers can engage with policy makers (e.g. Vennix, 1996; Voinov and Bousquet, 2010), it often does not account for the urgency that characterizes crisis response. In this article, we tell the stories of three different models developed during the COVID-19 pandemic in different parts of the world. For each of the models, we draw key lessons for modellers regarding how to engage with policy makers before, during, and after crises. Our goal is to communicate the findings from our experiences to modellers and computational scientists who, like us, want to engage with policy makers to provide model-based policy and crisis management support. We use selected examples from Kurt Vonnegut’s 2004 lecture on ‘shapes of stories’ alongside an analogy with Lewis Carroll’s Alice in Wonderland as inspiration for these stories.

Boy Meets Girl (Too Late)

A Social Simulation On the Corona Crisis’ (ASSOCC) tale

The perfect love story between social modellers and stakeholders would go like this: they meet (pre-crisis) and build a foundation of trust; then, when a crisis hits, they work together as a team, maybe have the occasional fight, but overcome the crisis together and live happily ever after.

In the case of the ASSOCC project, we as modellers met our stakeholders too late (i.e., when we were already in the middle of the COVID-19 crisis). The stakeholders we aimed for had already met their ‘boy’: epidemiological modellers. For them, we were just one of the many scientists showing new models and telling them that ours should be looked at. Although our model showed, for example, that using a track-and-trace app would not help reduce the rate of new COVID-19 infections (as turned out to be the case), our psychological and social approach was novel for them. It was not the right time to explain the importance of integrating these kinds of concepts in epidemiological models, so without this basic trust, they were reluctant to work with us.

The moral of our story is that we should invest in a (working) relationship during non-crisis times, not only to get the stakeholders on board during a crisis; such an approach would be helpful for us modellers too. For example, we integrated both social and epidemiological models within the ASSOCC project. We wanted to validate our model against the one used by Oxford University. However, our model choices were not compatible with this type of validation. Had we been working with these types of researchers before a pandemic, we could have built a proper foundation for validation.

So, our biggest lesson learned is the importance of having a good relationship with stakeholders before a crisis hits, when there is time to introduce social models and show the advantages of using them. When you invest in building and consolidating this relationship over time, we promise a happily ever after for every social modeller and stakeholder (until the next crisis hits).

Modeller’s Adventures in Wonderland

A Health Emergency Response in Interconnected Systems (HERoS) tale

If you are a modeler, you are likely to be curious and imaginative, like Alice from “Alice’s Adventures in Wonderland.” You like to think about how the world works and to make models that can capture its sometimes weird mechanisms. We are the same. When COVID-19 came, we made a model of a city to understand how its citizens would behave.

But there is more. When Alice first saw the White Rabbit, she found him fascinating. A rabbit with a pocket watch who is running late – what could be more interesting? Similarly, our attention was caught by policymakers in waistcoats, always busy but able to bring change. Surely they must need the model we made! But why are they running away? Our model is so helpful, just let us explain! Or maybe our model is not good enough?

Yes, we fell deep down a rabbit hole. Our first encounter with a policymaker didn’t result in a happy “yes, let’s try your model out.” However, we kept knocking on doors. How many did Alice try? But alright, there is one. It seems too tiny. We met with a group of policymakers but had only 10 minutes to explain our large-scale, data-driven agent-based model. How can we possibly do that? Drink from a “Drink me” bottle, which will make our presentation smaller! Well, that didn’t help. We rushed through all the model’s complexities too fast and got applause, but that’s it. Ok, the next one will last one hour. Quickly! Eat an “Eat me” cake that will make the presentation longer! Oh, too many unnecessary details this time. On to the next venue!

We are in the garden. The garden of crisis response. And it is full of policymakers: Caterpillar, Duchess, Cheshire Cat and Mad Hatter. They talk in riddles: “We need to consult with the Head of Paperclip Optimization and Supply Management,” want different things: “Can you tell us what the impact of a curfew will be? Hmm, by yesterday?” and shift responsibility from one to another. Thankfully, there is no Queen of Hearts to order us beheaded.

If the world of policymaking is complex, then the world of policymaking during a crisis is a wonderland. And we all live in it. We must outgrow our obsession with building better models, learn about the wonderland’s fuzzy inhabitants, and find a way to work together instead. Constant interaction and a better understanding of each other’s needs must be at the centre of modeler-policymaker relations.

“But I don’t want to go among mad people,” Alice remarked.

“Oh, you can’t help that,” said the Cat: “we’re all mad here. I’m mad. You’re mad.”

“How do you know I’m mad?” said Alice.

“You must be,” said the Cat, “or you wouldn’t have come here.”

Lewis Carroll, Alice in Wonderland

Cinderella – A city’s tale

Everyone thought Melbourne was just too ugly to go to the ball… until a little magic happened.

Once upon a time, the bustling Antipodean city of Melbourne, Victoria found itself in the midst of a dark and disturbing period. While all other territories in the great continent of Australia had rid themselves of the dreaded COVID-19 virus, Melbourne itself was besieged. Illness and death coursed through the land.

Shunned, the city faced scorn and derision. It was dirty. Its sisters called it a “plague state” and the people felt great shame and sadness as their family, friends and colleagues continued to fall to the virus. All they wanted was a chance to rejoin their families and countryfolk at the ball. What could they do?

Though downtrodden, the kind-hearted and resilient residents of Melbourne were determined to regain control over their lives. They longed for a glimmer of sunshine on these long, gloomy days – a touch of magic, perhaps? They turned to their embattled leaders for answers. Where was their Fairy Godmother now?

In this moment of despair, a group of scientists offered a gift in the form of a powerful agent-based model that was running on a supercomputer. This model, the scientists said, might just hold the key to transforming the fate of the city from vanquished to victor (Blakely et al., 2020). What was this strange new science? This magical black box?

Other states and scientists scoffed. “You can never achieve this!”, they said. “What evidence do you have? These models are not to be trusted. Such a feat as to eliminate COVID-19 at this scale has never been done in the history of the world!” But what of it? Why should history matter? Quietly and determinedly, the citizens of Melbourne persisted. They doggedly followed the plan.

Deep down, even the scientists knew it was risky. People’s patience and enchantment with the mystical model would not last forever. Still, this was Melbourne’s only chance. They needed to eliminate the virus so it would no longer have a grip on their lives. The people bravely stuck to the plan and each day – even when schools and businesses began to re-open – the COVID numbers dwindled from what seemed like impossible heights. Each day they edged down…

and down…

and down…until…

Finally! As the clock struck midnight, the people of Melbourne achieved the impossible: they had defeated COVID-19 by eliminating transmission. With the help of the computer model’s magic, illness and death from the virus stopped. Melbourne had triumphed, emerging stronger and more united than ever before (Thompson et al., 2022a).

From that day forth, Melbourne was internationally celebrated as a shining example of resilience, determination, and the transformative power of hope. Tens of thousands of lives were saved – and after enduring great personal and community sacrifice, its people could once again dance at the ball.

But what was the fate of the scientists and the model? Did such an experience change the way agent-based social simulation was used in public health? Not really. The scientists went back to their normal jobs and the magic of the model remained just that – magic. Its influence vanished like fairy dust on a warm Summer’s evening.

Even to this day the model and its impact largely remain a mystery (despite over 10,000 words of ODD documentation). Occasionally, policy-makers or researchers going about their ordinary business might be heard to say, “Oh yes, the model. The one that kept us inside and ruined the economy. Or perhaps it was the other way around? I really can’t recall – it was all such a blur. Anyway, back to this new social problem – shall we attack it with some big data and ML techniques?”

The fairy dust has vanished but the concrete remains.

And in fairness, while agent-based social simulation remains mystical and our descriptions opaque, we cannot begrudge others for ever choosing concrete over dust (Thompson et al, 2022b).

Conclusions

So what is the moral of these tales? We consolidate our experiences into these main conclusions:

  • No connection means no impact. If modellers wish for their models to be useful before, during or after a crisis, then it is up to them to start establishing a connection and building trust with policymakers.
  • The window of opportunity for policy modelling during crises can be narrow, perhaps only a matter of days. Capturing it requires both that we can supply a model within the timeframe (impossible as it may appear) and that our relationship with stakeholders is already established.
  • Engagement with stakeholders requires knowledge and skills that might be too much to ask of modelers alone, including project management, communication with individuals without a technical background, and insight into the policymaking process.
  • Being useful only sometimes means being excellent. A good model is one that is useful. By investing more in building relationships with policymakers and learning about each other, we have a bigger chance of providing the needed insight. Such a shift, however, is radical and requires us to give up our obsession with the models and engage with the fuzziness of the world around us.
  • If we cannot communicate our models effectively, we cannot expect to build trust with end-users over the long term, whether they be policy-makers or researchers. Individual models – and agent-based social simulation in general – need better understanding, which can only be achieved through greater transparency and communication, however that is achieved.

As taxing, time-consuming and complex as the process of making policy impact with simulation models might be, it is very much a fight worth fighting; perhaps even more so during crises. Assuming our models would have a positive impact on the world, not striving to make this impact could be considered admitting defeat. Making models useful to policymakers starts with admitting the complexity of their environment and willingness to dedicate time and effort to learn about it and work together. That is how we can pave the way for many more stories with happy endings.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise.

References

Adam, C. and Gaudou, B. (2017) ‘Modelling Human Behaviours in Disasters from Interviews: Application to Melbourne Bushfires’ Journal of Artificial Societies and Social Simulation 20(3), 12. http://jasss.soc.surrey.ac.uk/20/3/12.html. doi: 10.18564/jasss.3395

Badham, J., Barbrook-Johnson, P., Caiado, C. and Castellani, B. (2021) ‘Justified Stories with Agent-Based Modelling for Local COVID-19 Planning’ Journal of Artificial Societies and Social Simulation 24 (1) 8 http://jasss.soc.surrey.ac.uk/24/1/8.html. doi: 10.18564/jasss.4532

Crammond, B. R., & Kishore, V. (2021). The probability of the 6‐week lockdown in Victoria (commencing 9 July 2020) achieving elimination of community transmission of SARS‐CoV‐2. The Medical Journal of Australia, 215(2), 95-95. doi:10.5694/mja2.51146

Thompson, J., McClure, R., Blakely, T., Wilson, N., Baker, M. G., Wijnands, J. S., … & Stevenson, M. (2022a). Modelling SARS‐CoV‐2 disease progression in Australia and New Zealand: an account of an agent‐based approach to support public health decision‐making. Australian and New Zealand Journal of Public Health, 46(3), 292-303. doi:10.1111/1753-6405.13221

Thompson, J., McClure, R., Scott, N., Hellard, M., Abeysuriya, R., Vidanaarachchi, R., … & Sundararajan, V. (2022b). A framework for considering the utility of models when facing tough decisions in public health: a guideline for policy-makers. Health Research Policy and Systems, 20(1), 1-7. doi:10.1186/s12961-022-00902-6

Voinov, A., & Bousquet, F. (2010). Modelling with stakeholders. Environmental modelling & software, 25(11), 1268-1281. doi:10.1016/j.envsoft.2010.03.007

Vennix, J.A.M. (1996). Group Model Building: Facilitating Team Learning Using System Dynamics. Wiley.

Vonnegut, K. (2004). Lecture to Case College. https://www.youtube.com/watch?v=4_RUgnC1lm8


Johansson, E., Nespeca, V., Sirenko, M., van den Hurk, M., Thompson, J., Narasimhan, K., Belfrage, M., Giardini, F. and Melchior, A. (2023) A Tale of Three Pandemic Models: Lessons Learned for Engagement with Policy Makers Before, During, and After a Crisis. Review of Artificial Societies and Social Simulation, 15 May 2023. https://rofasss.org/2023/05/15/threepandemic


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

If you want to be cited, calibrate your agent-based model: A Reply to Chattoe-Brown

By Marijn A. Keijzer

This is a reply to a previous comment (Chattoe-Brown 2022).

The social simulation literature has called on its proponents to enhance the quality and realism of their contributions through systematic validation and calibration (Flache et al., 2017). Model validation typically refers to assessing how well the predictions of an agent-based model (ABM) map onto empirically observed patterns or relationships. Calibration, on the other hand, is the process of enhancing the realism of the model by parametrizing it based on empirical data (Boero & Squazzoni, 2005). We would expect that presenting a validated or calibrated model serves as a signal of model quality, and would thus be a desirable characteristic of a paper describing an ABM.

In a recent contribution to RofASSS, Edmund Chattoe-Brown provocatively argued that model validation does not bear fruit for researchers interested in boosting their citations. In a sample of articles on opinion dynamics published in JASSS, he observed that “the sample clearly divides into non-validated research with more citations and validated research with fewer” (Chattoe-Brown, 2022). Well aware of the bias and limitations of the sample at hand, Chattoe-Brown calls for refutation of his hypothesis. An analysis of the corpus of articles in Web of Science, presented here, could serve that goal.

To test whether model calibration and/or validation affects the citation counts of papers, I compare citation counts across a larger number of original research articles on agent-based models published in the literature. I extracted 11,807 entries from Web of Science by searching for items that contained the phrases “agent-based model”, “agent-based simulation” or “agent-based computational model” in their abstracts.[1] I then labeled all items that mention “validate” in their abstract as validated ABMs and those that mention “calibrate” as calibrated ABMs. This measure is rather crude, of course, as descriptions containing phrases like “we calibrated our model” or “others should calibrate our model” are both labeled as calibrated models. However, if mentioning that future research should calibrate or validate the model is unrelated to citation counts (which I would argue is the case), then this inaccuracy does not introduce bias.
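As a minimal sketch, the labelling step amounts to a crude substring match over abstracts. The record structure and sample abstracts below are invented for illustration; the real corpus was a Web of Science export.

```python
# Crude keyword labelling of abstracts, as described above.
# The "abstract" field and the sample records are hypothetical.

def label_entry(entry):
    """Flag an entry as validated/calibrated when its abstract
    mentions the keyword, regardless of how it is used."""
    abstract = entry["abstract"].lower()
    entry["validated"] = "validate" in abstract    # also matches "validated"
    entry["calibrated"] = "calibrate" in abstract  # also matches "calibrated"
    return entry

records = [
    {"abstract": "We validate our agent-based model against survey data."},
    {"abstract": "An agent-based simulation of opinion dynamics."},
    {"abstract": "Future work should calibrate the model."},  # false positive by design
]

labelled = [label_entry(r) for r in records]
share_validated = sum(r["validated"] for r in labelled) / len(labelled)
print(f"share validated: {share_validated:.2%}")  # 1 of 3 sample abstracts
```

The third record illustrates the false-positive case discussed above: a mere recommendation to calibrate is labelled the same way as an actually calibrated model.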

The shares of entries that mention calibration or validation are somewhat small. Overall, just 5.62% of entries mention validation, 3.21% report a calibrated model and 0.65% fall in both categories. The large sample size, however, will still enable the execution of proper statistical analysis and hypothesis testing.

How are mentions of calibration and validation in the abstract related to citation counts at face value? Bivariate analyses show only minor differences, as revealed in Figure 1. In fact, the distribution of citations for validated and non-validated ABMs (panel A) is remarkably similar. Wilcoxon tests with continuity correction—the nonparametric version of the simple t test—corroborate their similarity (W = 3,749,512, p = 0.555). The differences in citations between calibrated and non-calibrated models appear, albeit still small, more pronounced. Calibrated ABMs are cited slightly more often (panel B), as also supported by a bivariate test (W = 1,910,772, p < 0.001).
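For readers who want to reproduce this kind of comparison: the two-sample “Wilcoxon test with continuity correction” (what R’s wilcox.test computes for two independent samples) corresponds to SciPy’s Mann-Whitney U test. A sketch on synthetic citation counts follows; the actual data and code are on the OSF page.

```python
# Two-sample Wilcoxon/Mann-Whitney comparison of citation counts,
# sketched on synthetic data (the real data are on the OSF page).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
validated = rng.poisson(lam=12, size=600)      # hypothetical citation counts
non_validated = rng.poisson(lam=12, size=600)  # same distribution on purpose

stat, p = mannwhitneyu(validated, non_validated,
                       use_continuity=True, alternative="two-sided")
print(f"W = {stat:.0f}, p = {p:.3f}")
```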


Figure 1. Distributions of number of citations of all the entries in the dataset for validated (panel A) and calibrated (panel B) ABMs and their averages with standard errors over years (panels C and D)

The age of a paper might be a more important determinant of citation counts, as panels C and D of Figure 1 suggest. Clearly, age should matter here, because older papers have had much more opportunity to be cited. In particular, papers younger than 10 years seem not to have matured enough for their citation rates to catch up with older articles. When comparing the citation counts of purely theoretical models with calibrated and validated ones, this covariate should not be missed, because the latter two are typically much younger. In other words, a positive relationship between model calibration/validation and citation counts could be hidden in the bivariate analysis, as model calibration and validation are recent trends in ABM research.

I run a Poisson regression on the number of citations as explained by whether a paper is validated, whether it is calibrated (entered simultaneously), and their interaction. The age of the paper is taken into account, as well as the number of references the paper itself cites (controlling for reciprocity and literature embeddedness, one might say). Finally, the fields in which the papers were published, as registered by Web of Science, are added to account for potential differences between fields that explain both citation counts and conventions about model calibration and validation.

Table 1 presents the results from three models: the main effects of validation and calibration (model 1), their interaction added (model 2), and the full model with control variables (model 3).

Table 1. Poisson regression on the number of citations

                                   # Citations
                           (1)          (2)          (3)
Validated               -0.217***    -0.298***    -0.094***
                         (0.012)      (0.014)      (0.014)
Calibrated               0.171***     0.064***     0.076***
                         (0.014)      (0.016)      (0.016)
Validated x Calibrated                0.575***     0.244***
                                      (0.034)      (0.034)
Age                                                0.154***
                                                  (0.0005)
Cited references                                   0.013***
                                                  (0.0001)
Field included            No           No           Yes
Constant                 2.553***     2.556***     0.337**
                         (0.003)      (0.003)      (0.164)
Observations             11,807       11,807       11,807
AIC                      451,560      451,291      301,639

Note: *p<0.1; **p<0.05; ***p<0.01

The results from the analyses clearly suggest a negative effect of model validation and a positive effect of model calibration on the likelihood of being cited. The hypothesis that was so “badly in need of refutation” (Chattoe-Brown, 2022) will remain unrefuted for now. The effect does turn positive, however, when the abstract makes mention of calibration as well. In both the controlled (model 3) and uncontrolled (model 2) analyses, combining the effects of validation and calibration yields a positive coefficient overall.[2]

The controls in model 3 substantially affect the estimates of the three main factors of interest, while remaining in the expected directions themselves. The age of a paper indeed helps its citation count, and so does the number of papers the item cites itself. The fields, furthermore, also take away from the main effects somewhat, but not to a problematic degree. In an additional analysis, I looked at whether certain fields are more likely to publish calibrated or validated models and found no substantial relationships. Citation counts do differ between fields, however. The papers in our sample are cited more often in, for example, hematology, emergency medicine and thermodynamics. The ABMs in the sample coming from toxicology, dermatology and religion are on the unlucky side of the equation, receiving fewer citations on average. Finally, I also looked at papers published in JASSS specifically, given the interest of Chattoe-Brown and the nature of this outlet. Surprisingly, the same analyses run on the subsample of these papers (N=376) showed a negative relationship between citation counts and model calibration/validation. Does the JASSS readership reveal its taste for artificial societies?

In sum, I find support for the hypothesis of Chattoe-Brown (2022) on the negative relationship between model validation and citation counts for papers presenting ABMs. If you want to be cited, you should not validate your ABM. Calibrated ABMs, on the other hand, are more likely to receive citations. What is more, ABMs that were both calibrated and validated are the most successful papers in the sample. All conclusions were drawn while controlling for the age of the paper, the number of papers it cites itself, and (citation conventions in) the field in which it was published.

While the patterns explored in this and Chattoe-Brown’s recent contribution are interesting, or even puzzling, they should not distract from the goal of moving towards realistic agent-based simulations of social systems. In my opinion, models that combine rigorous theory with strong empirical foundations are instrumental to the creation of meaningful and purposeful agent-based models. Perhaps the results presented here should just be taken as another sign that citation counts are a weak signal of academic merit at best.

Data, code and supplementary analyses

All data and code used for this analysis, as well as the results from the supplementary analyses described in the text, are available here: https://osf.io/x9r7j/

Notes

[1] Note that the hyphen between “agent” and “based” does not affect the retrieved corpus. Both contributions that mention “agent based” and “agent-based” were retrieved.

[2] A small caveat to the analysis of the interaction effect is that the marginal improvement of model 2 upon model 1 is rather small (AIC difference of 269). This is likely (partially) due to the small number of papers that mention both calibration and validation (N=77).

Acknowledgements

Marijn Keijzer acknowledges IAST funding from the French National Research Agency (ANR) under the Investments for the Future (Investissements d’Avenir) program, grant ANR-17-EURE-0010.

References

Boero, R., & Squazzoni, F. (2005). Does empirical embeddedness matter? Methodological issues on agent-based models for analytical social science. Journal of Artificial Societies and Social Simulation, 8(4), 1–31. https://www.jasss.org/8/4/6.html

Chattoe-Brown, E. (2022) If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1st Feb 2022. https://rofasss.org/2022/02/01/citing-od-models

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4). https://doi.org/10.18564/jasss.3521


Keijzer, M. (2022) If you want to be cited, calibrate your agent-based model: Reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation, 9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”

By Frank Dignum

This is a reply to a review in JASSS (Chattoe-Brown 2021) of (Dignum 2021).

Before responding to some of the specific concerns of Edmund I would like to thank him for the thorough review. I am especially happy with his conclusion that the book is solid enough to make it a valuable contribution to scientific progress in modelling crises. That was the main aim of the book and it seems that is achieved. I want to reiterate what we already remarked in the book; we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises. Nor do we claim that our simulations were without limitations. But we do think it is an extensive foundation from which others can start, either picking up some bits and pieces, deviating from it in specific ways or extending it in specific ways.

The concerns that are expressed by Edmund are certainly valid. I agree with some of them, but will nuance others. First of all, there is the concern that we seem to abandon the NetLogo implementation and move to Repast. This fact does not make the ABM itself any less valid! In itself it is also an important finding: it is not possible to scale such a complex model in NetLogo beyond around two thousand agents. This is not just a limitation of our particular implementation, but a more general limitation of the platform. It leads to the important challenge of getting more computer scientists involved in developing platforms for social simulations that both support the modelers adequately and provide efficient and scalable implementations.

That the sheer size of the model and the results makes it difficult to trace back the importance and validity of every factor is completely true. We have tried our best to highlight the most important aspects every time. But this leaves open the question of whether we made the right selection of highlighted aspects. As an illustration of this: we spent two months justifying our simulation results on the effectiveness of track-and-trace apps. We basically concluded that we need much better integrated analysis tools in the simulation platform. NetLogo is geared towards creating one simulation scenario, running the simulation and analyzing the results based on a few parameters. This is no longer sufficient when we have a model with which we can create many scenarios and have many parameters that influence a result. We used R to interpret the flood of data that was produced with every scenario. But R is not the most user-friendly tool, nor is it specifically meant for analyzing data from social simulations.

Let me jump to the third concern of Edmund and link it to the analysis of the results as well. While we tried to justify the results of our simulation on the effectiveness of the track-and-trace app, we compared our simulation with an epidemiologically based model. This is described in chapter 12 of the book. Here we encountered a difference in the assumed number of contacts a person has with other persons per day. One can take the results, as quoted by Edmund as well, of 8 or 13 from empirical work and use them in the model. However, the dispute is not about the number of contacts a person has per day, but about what counts as a contact! For the COVID-19 simulations, standing next to a person in a supermarket queue for five minutes can count as a contact, while such a contact is not a meaningful contact in the cited literature. Thus, we see that what we take as empirically validated numbers might not at all be the right ones for our purpose. We have tried to justify all parameter values and outcomes in the context for which the simulations were created. We have also done quite a few sensitivity analyses, not all of which we reported, just to keep the volume of the book to a reasonable size. Although we think we did a proper job in justifying all results, that does not mean that one cannot have different opinions on the values some parameters should have. It would be very good to check the influence of changes in these parameters on the results. This would also advance scientific insight into the usefulness of complex models like the one we made!

I really think that an ABM crisis response should be institutional. That does not mean that one institution determines the best ABM, but rather that the ABM put forward by that institution is the result of a continuous debate among scientists working on ABMs for that type of crisis. For us, one of the more important outcomes of the ASSOCC project is that we really need much better tools to support the types of simulations that are needed in a crisis situation. However, it is very difficult to develop these tools as a single group. A lot of the effort needed is not publishable and thus not valued in an academic environment. I really think that the efforts that have been put into platforms such as NetLogo and Repast are laudable. They have been made possible by some generous grants and institutional support. We argue that this continuous support is also needed in order to be well equipped for the next crisis. But we do not argue that an institution would by definition have the last word on which is the best ABM. In an ideal case it would accumulate all academic efforts, as is done with climate models, but even more restricted models would still be better than a thousand individuals all claiming to have a usable ABM while governments have to react quickly to a crisis.

The final concern of Edmund is about the empirical scale of our simulations. This is completely true! Given the scale and details of what we can incorporate, we can only simulate some phenomena and certainly not everything around the COVID-19 crisis. We tried to be clear about this limitation. We had discussions about the Unity interface concerning this as well. It is in principle not very difficult to show people walking in the street, taking a car or a bus, etc. However, we decided to show a more abstract representation, just to make clear that our model is not a complete model of a small town functioning in all aspects. We have very carefully chosen which scenarios we can realistically simulate and draw some insights about reality from. Maybe we should also have discussed more explicitly all the scenarios that we did not run, with the reasons why they would be difficult or unrealistic in our ABM. One never likes to discuss all the limitations of one’s labor, but it can definitely be very insightful. I have made up for this a little bit by submitting an article to a special issue on predictions with ABMs, in which I explain in more detail the considerations for using a particular ABM to try to predict some state of affairs. Anyone interested in learning more about this can contact me.

To conclude this response to the review, I again express my gratitude for the good and thorough work done. The concerns that were raised are all very valuable. What I tried to do in this response is to highlight that these concerns should be taken as a call to arms to put effort into social simulation platforms that give better support for creating simulations for a crisis.

References

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation, 24(4). https://www.jasss.org/24/4/reviews/1.html


Dignum, F. (2021) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)