Tag Archives: jasss

Is The Journal of Artificial Societies and Social Simulation Parochial? What Might That Mean? Why Might It Matter?

By Edmund Chattoe-Brown

Introduction

The Journal of Artificial Societies and Social Simulation (hereafter JASSS) occupies a distinctive position amongst journals publishing articles on social simulation and Agent-Based Modelling. Many journals have published a few Agent-Based Models, and some have published quite a few, but it is hard to name any other journal that predominantly does this and has done so consistently over two decades. A Web of Science search on 25.07.22 returned 5540 hits containing the term “agent-based model” anywhere in the text. JASSS does indeed have the most of any single journal, with 268 hits (5% of the total to the nearest integer). The basic search returns about 200 distinct journals, and about half of these have 10 hits or fewer. Since the search results are ordered by hit count, the unlisted journals must have even fewer hits than those listed, i.e. fewer than 7 per journal. This supports the claim that the great majority of journals have very limited engagement with Agent-Based Modelling. Note that the point here is to evidence tendencies effectively, not to claim that this specific search term tells us the precise relative frequency of articles on Agent-Based Modelling in different journals.

This being so, it seems reasonable – and desirable for other practical reasons, such as being entirely open access, online and readily searchable – to use JASSS as a sample (though clearly not necessarily a representative one) of what may be happening in Agent-Based Modelling more generally. This is the case study approach (Yin 2009), in which smaller samples may be practically unavoidable when discussing richer or more complex phenomena, such as the actual structures of arguments, rather than something quantitative like the number of sources cited by each article.

This piece is motivated by the scepticism that some reviewers have displayed about such a case study approach focused on JASSS and the conclusions drawn from it. It is actually quite strange to have the editors and reviewers of a journal argue against its ability to tell us anything useful about wider Agent-Based Modelling research, even as a starting point (particularly since this approach has been used in articles previously published in the journal: see, for example, Meyer et al. 2009 and Hauke et al. 2017). Of course, it is a given that different journals have unique editorial policies, distinct reviewer pools and so on. This may mean, for example, that journals which only irregularly publish Agent-Based Models are actually less typical, because who reviews for them is more arbitrary and there may therefore be less reviewing skill and less consensus about the value of the articles involved. Anecdotally, I have found this to be true in medical journals, where excellent articles rub shoulders with much more problematic ones in a small overall pool. The point of my argument is not to claim that JASSS can really stand in for ABM research as a whole – which it plainly cannot – but that, if the case study approach is to be accepted at all, JASSS is one of the few journals that qualifies for it on empirically justifiable grounds. Conversely, given the potentially distinctive character of journals and the wide spread of Agent-Based Modelling, attempts at representative sampling may be very challenging in resource terms.

Method and Results

Again, using Web of Science (on 04.07.22), I searched for the most highly cited articles containing the string “opinion dynamics”. I am well aware that this will not capture all articles that actually have opinion dynamics as their subject matter, but that is not the intention. The intention is to describe a reproducible and measurable procedure correlated with the importance of articles, so that my results can be checked, criticised and extended. Comparing results based on other search terms would be part of that process. I then took the first ten distinct journals that could be identified from this set of articles in order of citation count. The idea was to see which journals had published the most important articles in the field overall – at least as identified by this particular search term – and then follow up their coverage of opinion dynamics generally. In addition, for each journal, I accessed the top 50 most cited articles and checked how many articles containing the string “opinion dynamics” featured in that top 50. The idea here was to assess the extent to which opinion dynamics articles were important to the impact of a particular journal. Table 1 shows the results of this analysis.
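For readers who wish to replicate or extend this procedure, the sketch below shows how the journal tallies could be computed from a citation-sorted export of the search results. It is a minimal illustration only: the file name wos_opinion_dynamics.csv and its three-column layout (journal, title, citations) are assumptions rather than the actual Web of Science export format, a real export would need proper CSV parsing, and the count of “opinion dynamics” articles in each journal’s own top 50 most cited requires a separate per-journal search that is omitted here.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Tally, per journal, (a) the number of "opinion dynamics" hits and
// (b) the citation count of the journal's most cited hit, then list the
// ten journals with the most influential single article, as in Table 1.
public class JournalTally {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> hits = new HashMap<>();
        Map<String, Integer> topCited = new HashMap<>();
        List<String> lines = Files.readAllLines(Paths.get("wos_opinion_dynamics.csv"));
        for (String line : lines.subList(1, lines.size())) { // skip the header row
            String[] fields = line.split(",");               // assumes no commas inside fields
            String journal = fields[0];
            int citations = Integer.parseInt(fields[2].trim());
            hits.merge(journal, 1, Integer::sum);
            topCited.merge(journal, citations, Integer::max);
        }
        topCited.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(10)
                .forEach(e -> System.out.printf("%s: most cited article %d, total hits %d%n",
                        e.getKey(), e.getValue(), hits.get(e.getKey())));
    }
}
```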

| Journal Title | “opinion dynamics” Articles in the Journal’s Top 50 Most Cited | Citations of Most Highly Cited “opinion dynamics” Article | Articles Containing the String “opinion dynamics” |
|---|---|---|---|
| Reviews of Modern Physics | 0 | 2380 | 1 |
| JASSS | 6 | 1616 | 64 |
| International Journal of Modern Physics C | 4 | 376 | 72 |
| Dynamic Games and Applications | 1 | 338 | 5 |
| Physical Review Letters | 0 | 325 | 5 |
| Global Challenges | 1 | 272 | 1 |
| IEEE Transactions on Automatic Control | 0 | 269 | 38 |
| SIAM Review | 0 | 258 | 2 |
| Central European Journal of Operations Research | 1 | 241 | 1 |
| Physica A: Statistical Mechanics and Its Applications | 0 | 231 | 143 |

Table 1. The Coverage, Commitment and Importance of Different Journals in Regard to “opinion dynamics”: Top Ten by Citation Count of Most Influential Article.

This list attempts to provide two somewhat separate assessments of a journal with regard to “opinion dynamics”. The first is whether it has a substantial body of articles on the topic: Coverage. The second is whether, relative to the citation levels of the journal generally, “opinion dynamics” models are important to it: Commitment. These journals have been selected on a third dimension, their ability to contribute at least one very influential article to the literature as a whole: Importance.

The resulting patterns are interesting in several ways. Firstly, JASSS appears unique in this sample in being a clearly social science journal, rather than a physical science journal or one dealing with instrumental problems like operations research or automatic control. An interesting related question is how many “opinion dynamics” models in a physics journal will have been reviewed by social scientists, or at least by modellers with a social science orientation. This is part of a wider question about whether, for example, physics journals are mainly interested in these models as formal systems rather than as having likely application to real societies. Secondly, 3 journals out of 10 have only a single “opinion dynamics” article – and a further journal has only 2 – which are nonetheless extremely highly cited relative to such articles as a whole. It is unclear whether this “only one but what a one” pattern has any wider significance. It should also be noted that the most highly cited article in JASSS is four times more highly cited than the next most cited. Only 4 of these journals out of 10 could really be said to have a usable sample of such articles for case study analysis. Thirdly, only 2 journals out of 10 have a significant number of articles sufficiently important that they appear in the journal’s top 50 most cited, and 5 journals have no “opinion dynamics” articles in their top 50 at all. This makes the point that a journal can have good coverage of the topic, and contain at least one highly cited article, without “opinion dynamics” necessarily being a commitment of the journal.

Thus it seems unusual for a journal simultaneously to contribute at least one influential article to the field as a whole, to have several articles that are amongst its own most cited and to have a non-trivial number of articles overall. Only one other journal in the top 10 meets all three criteria (International Journal of Modern Physics C). This result is corroborated in Table 2, which carries out the same analysis for all additional journals containing at least one highly cited “opinion dynamics” article (with an arbitrary cut-off of at least 100 citations for that article). There prove to be fourteen such journals in addition to the ten above.

| Journal Title | “opinion dynamics” Articles in the Journal’s Top 50 Most Cited | Citations of Most Highly Cited “opinion dynamics” Article | Articles Containing the String “opinion dynamics” |
|---|---|---|---|
| Mathematics of Operations Research | 1 | 215 | 2 |
| Information Sciences | 0 | 186 | 14 |
| Physica D: Nonlinear Phenomena | 0 | 182 | 4 |
| Journal of Complex Networks | 1 | 177 | 5 |
| Annual Reviews in Control | 2 | 165 | 4 |
| Information Fusion | 0 | 154 | 11 |
| IEEE Transactions on Control of Network Systems | 3 | 151 | 12 |
| Automatica | 0 | 141 | 32 |
| Public Opinion Quarterly | 0 | 132 | 5 |
| Physical Review E | 0 | 129 | 74 |
| SIAM Journal on Control and Optimization | 0 | 127 | 13 |
| Europhysics Letters | 0 | 116 | 3 |
| Knowledge-Based Systems | 0 | 112 | 5 |
| Scientific Reports | 0 | 111 | 26 |

Table 2. The Coverage, Commitment and Importance of Different Journals in Regard to “opinion dynamics”: All Remaining Distinct Journals whose Most Highly Cited “opinion dynamics” Article Receives at Least 100 Citations.

Table 2 confirms the dominance of physical science journals and those solving instrumental problems, as opposed to those evidently dealing with the social sciences (a few terms, like complex networks, are ambiguous in this regard, however). Further, it confirms the scarcity of journals that simultaneously contribute at least one influential article to the wider field, have a sensibly sized sample of articles on this topic – so that provisional but nonetheless empirical hypotheses might be derived from a case study – and have “opinion dynamics” articles in their top 50 most cited as a sign of the importance of the topic to the journal and its readers. To some extent, however, the latter confirmation is an unavoidable artefact of the sampling strategy. As the most cited article becomes less highly cited, the chance that it will appear in the top 50 most cited for a particular journal will almost certainly fall, unless the journal is very new or generally not highly cited.

As a third independent check, I again used Web of Science to identify all journals which had – somewhat arbitrarily – at least 30 articles on “opinion dynamics”, giving some sense of their contribution. Only two journals (see Table 3) not already occurring in the two tables above were identified. Generally, this analysis considers only journal articles and not conference proceedings or book chapter serials, whose peer review status is less clear and less comparable.

| Journal Title | “opinion dynamics” Articles in the Journal’s Top 50 Most Cited | Citations of Most Highly Cited “opinion dynamics” Article | Articles Containing the String “opinion dynamics” |
|---|---|---|---|
| Advances in Complex Systems | 5 | 54 | 42 |
| Plos One | 0 | 53 | 32 |

Table 3. The Coverage, Commitment and Importance of Different Journals: All Journals with at Least 30 “opinion dynamics” hits not already listed in Tables 1 and 2.

This cross-check shows that while the additional journals do have samples of articles large enough to form the basis for a case study, they either have not yet contributed a really influential article to the wider field (their most cited articles have less than half the citations of those in the journals which qualify for Tables 1 and 2), do not have a high commitment to opinion dynamics (in terms of impact within the journal and among its readers), or both.

Before concluding this analysis, it is worth briefly reflecting on what these three criteria jointly tell us – though other criteria could also be used in further research. By sampling on highly cited articles, we focus on journals that have managed to go beyond their core readership and influence the field as a whole. There is a danger that journals which have never done this are merely “talking to themselves” and may therefore form a less effective basis for a case study speaking to the field as a whole. By attending to the number of articles in the journal’s top 50, we get a sense of whether the topic is central (or only peripheral) to that journal and its readership and, again, journals where the topic is central stand a chance of being better case studies than those where it is peripheral. The criterion of having enough articles is simply a practical one for conducting a meaningful case study. Researchers using different methods may disagree about how many instances you need to draw useful conclusions, but there is general agreement that it is more than one!

Analysis and Conclusions

The present article was motivated by an attempt to evaluate the claim that JASSS may be parochial and therefore not constitute a suitable basis for provisional hypotheses generated by case study analysis of its articles. Although the argument presented here is clearly rough and ready – and could be improved on by subsequent researchers – it does not appear to support this claim. JASSS actually seems to be one of very few journals – arguably the only social science one – that simultaneously has made at least one really influential contribution to the wider field of opinion dynamics, has a large enough number of articles on the topic for plausible generalisation and has quite a few such articles in its top 50, which shows the importance of the topic to the journal and its wider readership. Unless one wishes to reject case study analysis altogether, there are in fact very few other journals on which it can effectively be done for this topic.

But actually, my main conclusion is a wider reflection on peer reviewing, sampling and scientific progress, prompted by reviewer resistance to the case study approach. There were 1386 articles with the search term “opinion dynamics” in Web of Science as of 25.07.22. It is clearly not realistic for one article – or even one book – to analyse all that content, particularly qualitatively. This being so, we have to consider what is practical and adequate to generate hypotheses suitable for publication and further development of research along these lines. Case studies of single journals are not the only strategy, but they do have a recognised academic tradition in methodology (Brown 2008). We could sample randomly from the population of articles, but I have never yet seen a qualitative analysis based on such sampling and it is not clear whether it would be any better received by potential reviewers. (In particular, with many journals each having only a few examples of Agent-Based Models, realistically low sampling rates would leave many journals unrepresented altogether, which would be a problem if they had distinctive approaches.) Most journals – including JASSS – have word limits, and this restricts how much you can report. Qualitative analysis is more drawn out than quantitative analysis, which limits this research style further in terms of practical sample sizes: both reading whole articles for analysis and writing up the resulting conclusions take more time and word count.

As long as one does not claim that a qualitative analysis from JASSS can stand for all Agent-Based Modelling – but presents it merely as a properly grounded hypothesis for further investigation – and shows one’s working properly to support that further investigation, it isn’t really clear why that shouldn’t be sufficient for publication, particularly as I have now shown that JASSS isn’t notably parochial along several potentially relevant dimensions. If a reviewer merely conjectures that your results won’t generalise, isn’t the burden of proof then on them to do the corresponding analysis and publish it? Otherwise the danger is that we are setting conjecture against actual evidence – however imperfect – and this runs the risk of slowing scientific progress by favouring research compatible with traditionally approved perspectives.

It might therefore be useful to revisit the everyday idea of burden of proof in assessing the arguments of reviewers. What does it take, in terms of evidence and argument (rather than simply power), for a comment by a reviewer to scientifically require an answer? It is a commonplace that a disproved hypothesis is more valuable to science than a mere conjecture or something that cannot be proven one way or another. One reason for this is that scientific procedure illustrates methodological possibility as well as generating actual results. A sample from JASSS may not stand for all research, but it shows how a conclusion might ultimately be reached for all research if the resources were available and the administrative constraints of academic publishing could be overcome.

As I have argued previously (Chattoe-Brown 2022), and as has now been pleasingly illustrated (Keijzer 2022), this situation may create an important and distinctive role for RofASSS. It may be valuable to get hypotheses, particularly ones that potentially go against the prevailing wisdom, “out there” so they can subsequently be tested more rigorously, rather than having to wait until the framer of the hypothesis can meet what may be a counsel of perfection from peer reviewers. Another issue with reviewing is a tendency to say what will not do rather than what will do. This rather puts the author at the mercy of reviewers during the revision process. RofASSS can also be used to hive off “contextual” analyses – like this one regarding what it might mean for a journal to be parochial – so that they can be developed in outline for the general benefit of the Agent-Based Modelling community, rather than having to add length to specific articles depending on the tastes of particular reviewers.

Finally, as should be obvious, I have only suggested that JASSS is not parochial in regard to articles involving the string “opinion dynamics”. However, I have also illustrated how this kind of analysis could be done systematically for different topics to justify the claim that a particular journal can serve as a reasonable basis for a case study.

Acknowledgements

This analysis was funded by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1) funded by ESRC via ORA Round 5.

References

Brown, Patricia Anne (2008) ‘A Review of the Literature on Case Study Research’, Canadian Journal for New Scholars in Education/Revue Canadienne des Jeunes Chercheures et Chercheurs en Éducation, 1(1), July, pp. 1-13, https://journalhosting.ucalgary.ca/index.php/cjnse/article/view/30395.

Chattoe-Brown, E. (2022) ‘If You Want to Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly in Need of Refutation’, Review of Artificial Societies and Social Simulation, 1st Feb 2022. https://rofasss.org/2022/02/01/citing-od-models

Hauke, Jonas, Lorscheid, Iris and Meyer, Matthias (2017) ‘Recent Development of Social Simulation as Reflected in JASSS Between 2008 and 2014: A Citation and Co-Citation Analysis’, Journal of Artificial Societies and Social Simulation, 20(1), 5. https://www.jasss.org/20/1/5.html. doi:10.18564/jasss.3238

Keijzer, M. (2022) ‘If You Want to be Cited, Calibrate Your Agent-Based Model: Reply to Chattoe-Brown’, Review of Artificial Societies and Social Simulation, 9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown

Meyer, Matthias, Lorscheid, Iris and Troitzsch, Klaus G. (2009) ‘The Development of Social Simulation as Reflected in the First Ten Years of JASSS: A Citation and Co-Citation Analysis’, Journal of Artificial Societies and Social Simulation, 12(4), 12. https://www.jasss.org/12/4/12.html.

Yin, R. K. (2009) Case Study Research: Design and Methods, fourth edition (Thousand Oaks, CA: Sage).


Chattoe-Brown, E. (2022) Is The Journal of Artificial Societies and Social Simulation Parochial? What Might That Mean? Why Might It Matter? Review of Artificial Societies and Social Simulation, 10th Sept 2022. https://rofasss.org/2022/09/10/is-the-journal-of-artificial-societies-and-social-simulation-parochial-what-might-that-mean-why-might-it-matter/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Reply to Frank Dignum

By Edmund Chattoe-Brown

This is a reply to Frank Dignum’s response to Edmund Chattoe-Brown’s review of Frank’s book.

As my academic career continues, I have become more and more interested in the way that people justify their modelling choices. For example, almost every Agent-Based Modeller makes approving noises about validation (in the sense of comparing real and simulated data), but only a handful actually try to do it (Chattoe-Brown 2020). Thus I think two specific statements that Frank makes in his response should be considered carefully:

  1. “… we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises.” Firstly, negative claims (“This is not a banana”) are not generally helpful in argument. Secondly, readers want to know (or should want to know) what is being claimed and, importantly, how they would decide if it is true “objectively”. Given how many models sprang up under COVID, it is clear that what is described here cannot be the only way to do it, but the question is how do we know you did it “better”? This was also my point about institutionalisation. For me, the big lesson from COVID was how much the automatic response of the ABM community seems to be to go in all directions and build yet more models in a tearing hurry, rather than to synthesise them, challenge them or test them empirically. I foresee a problem both with this response and with our possible unwillingness to be self-aware about it. Governments will not want a million “interesting” models to choose from but one which they have externally checkable reasons to trust, and that involves us changing our mindset (to be more like climate modellers, for example: Bithell & Edmonds 2021). For example, colleagues and I developed a comparison methodology that allowed for the practical difficulties of direct replication (Chattoe-Brown et al. 2021).
  2. The second quotation, which amplifies this point, is: “But we do think it is an extensive foundation from which others can start, either picking up some bits and pieces, deviating from it in specific ways or extending it in specific ways.” Again, here one has to ask the right question for progress in modelling. On what scientific grounds should people do this? On what grounds should someone reuse this model rather than start their own? Why isn’t the Dignum et al. model built on another “market leader” to set a good example? (My point about programming languages was purely practical, not scientific. Frank is right that the model is no less valid because the programming language was changed, but a version that is now unsupported seems less useful as a basis for the kind of further development advocated here.)

I am not totally sure I have understood Frank’s point about data, so I don’t want to press it, but my concern was that, generally, the book did not seem to “tap into” relevant empirical research (and this is an instance of the wider problem that models mostly talk about other models). It is true that parameter values can be adjusted arbitrarily in sensitivity analysis, but that does not get us any closer to empirically justified parameter values (which would then allow us to attempt validation by the “generative methodology”). Surely it is better to build a model that says something about the data that exists (however imperfect or approximate) than to rely on future data collection or educated guesses. I don’t really have the space to enumerate the times the book said “we did this for simplicity”, “we assumed that” and so on, but the cumulative effect is quite noticeable. Again, we need to be aware of the models which use real data in whatever aspects and “take forward” those inputs so they become modelling standards. This has to be a collective and not an individualistic enterprise.

References

Bithell, M. and Edmonds, B. (2021) The Systematic Comparison of Agent-Based Policy Models – It’s time we got our act together!. Review of Artificial Societies and Social Simulation, 11th May 2021. https://rofasss.org/2021/05/11/SystComp/

Chattoe-Brown, E. (2020) A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation. Review of Artificial Societies and Social Simulation, 12th June 2020. https://rofasss.org/2020/06/12/abm-validation-bib/

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation, 24(4). https://www.jasss.org/24/4/reviews/1.html

Chattoe-Brown, E., Gilbert, N., Robertson, D. A., & Watts, C. J. (2021). Reproduction as a Means of Evaluating Policy Models: A Case Study of a COVID-19 Simulation. medRxiv 2021.01.29.21250743; DOI: https://doi.org/10.1101/2021.01.29.21250743

Dignum, F. (2021) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8


Chattoe-Brown, E. (2021) Reply to Frank Dignum. Review of Artificial Societies and Social Simulation, 10th November 2021. https://rofasss.org/2021/11/10/reply-to-dignum/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”

By Frank Dignum

This is a reply to a review in JASSS (Chattoe-Brown 2021) of Dignum (2021).

Before responding to some of the specific concerns of Edmund, I would like to thank him for the thorough review. I am especially happy with his conclusion that the book is solid enough to make it a valuable contribution to scientific progress in modelling crises. That was the main aim of the book and it seems it has been achieved. I want to reiterate what we already remarked in the book: we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises. Nor do we claim that our simulations were without limitations. But we do think it is an extensive foundation from which others can start, either picking up some bits and pieces, deviating from it in specific ways or extending it in specific ways.

The concerns that are expressed by Edmund are certainly valid. I agree with some of them, but will nuance some others. First of all, there is the concern about the fact that we seem to abandon the NetLogo implementation and move to Repast. This fact does not make the ABM itself any less valid! In itself it is also an important finding. It is not possible to scale such a complex model in NetLogo beyond around two thousand agents. This is not just a limitation of our particular implementation, but a more general limitation of the platform. It leads to the important challenge of getting more computer scientists involved in developing platforms for social simulations that both support the modelers adequately and provide efficient and scalable implementations.

That the sheer size of the model and the results makes it difficult to trace back the importance and validity of every factor in the results is completely true. We have tried our best to highlight the most important aspects every time. But this leaves questions as to whether we made the right selection of highlighted aspects. As an illustration of this, we spent two months justifying our results of the simulations of the effectiveness of the track and tracing apps. We basically concluded that we need much better integrated analysis tools in the simulation platform. NetLogo is geared towards creating one simulation scenario, running the simulation and analyzing the results based on a few parameters. This is no longer sufficient when we have a model with which we can create many scenarios and have many parameters that influence a result. We used R to interpret the flood of data that was produced with every scenario. But R is not really the most user-friendly tool and is also not specifically meant for analyzing the data from social simulations.

Let me jump to the third concern of Edmund and link it to the analysis of the results as well. While we tried to justify the results of our simulation on the effectiveness of the track and tracing app, we compared our simulation with an epidemiologically based model. This is described in chapter 12 of the book. Here we encountered a difference in the assumed number of contacts per day a person has with other persons. One can take the results, as quoted by Edmund as well, of 8 or 13 from empirical work and use them in the model. However, the dispute is not about the number of contacts a person has per day, but about what counts as a contact! For the COVID-19 simulations, standing next to a person in the queue in a supermarket for five minutes can count as a contact, while such a contact is not a meaningful contact in the cited literature. Thus, we see that what we take as empirically validated numbers might not at all be the right ones for our purpose. We have tried to justify all the values of parameters and outcomes in the context for which the simulations were created. We have also done quite some sensitivity analyses, which we did not all report on, just to keep the volume of the book to a reasonable size. Although we think we did a proper job in justifying all results, that does not mean that one cannot hold a different opinion on the values that some parameters should have. It would be very good to check the influence of changes in these parameters on the results. This would also advance scientific insight into the usefulness of complex models like the one we made!

I really think that an ABM crisis response should be institutional. That does not mean that one institution determines the best ABM, but rather that the ABM that is put forward by that institution is the result of a continuous debate among scientists working on ABMs for that type of crisis. For us, one of the more important outcomes of the ASSOCC project is that we really need much better tools to support the types of simulations that are needed in a crisis situation. However, it is very difficult to develop these tools as a single group. A lot of the effort needed is not publishable and thus not valued in an academic environment. I really think that the efforts that have been put into platforms such as NetLogo and Repast are laudable. They have been made possible by some generous grants and institutional support. We argue that this continuous support is also needed in order to be well equipped for the next crisis. But we do not argue that an institution would by definition have the last word on which is the best ABM. In an ideal case it would accumulate all academic efforts, as is done in the climate models, but even more restricted models would still be better than just having a thousand individuals all claiming to have a usable ABM while governments have to react quickly to a crisis.

The final concern of Edmund is about the empirical scale of our simulations. This is completely true! Given the scale and details of what we can incorporate, we can only simulate some phenomena and certainly not everything around the COVID-19 crisis. We tried to be clear about this limitation. We had discussions about the Unity interface concerning this as well. It is in principle not very difficult to show people walking in the street, taking a car or a bus, etc. However, we decided to show a more abstract representation just to make clear that our model is not a complete model of a small town functioning in all aspects. We have very carefully chosen which scenarios we can realistically simulate and give some insights into reality from. Maybe we should also have discussed more explicitly all the scenarios that we did not run, with the reasons why they would be difficult or unrealistic in our ABM. One never likes to discuss all the limitations of one’s labor, but it definitely can be very insightful. I have made up for this a little bit by submitting an article to a special issue on predictions with ABM, in which I explain in more detail what the considerations should be for using a particular ABM to try to predict some state of affairs. Anyone interested to learn more about this can contact me.

To conclude this response to the review, I again express my gratitude for the good and thorough work done. The concerns that were raised are all very valuable to consider. What I tried to do in this response is to highlight that these concerns should be taken as a call to arms to put effort into social simulation platforms that give better support for creating simulations for a crisis.

References

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation, 24(4). https://www.jasss.org/24/4/reviews/1.html


Dignum, F. (2021) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Should the family size be used in COVID-19 vaccine prioritization strategy to prevent variants diffusion? A first investigation using a basic ABM

By Gianfranco Giulioni

Department of Philosophical, Pedagogical and Economic-Quantitative Sciences, University of Chieti-Pescara, Italy

(A contribution to the: JASSS-Covid19-Thread)

At the time of writing, a few countries have made significant progress in vaccinating their populations while many others are still taking their first steps.

Despite the importance of COVID-19’s adverse effects on society, there seems to be too little debate on the best option for progressing the vaccination process after front-line healthcare personnel have been immunized.

The strategies adopted in the front-runner countries prioritize people by health fragility and age. The effectiveness of this strategy is supported, for example, by Bubar et al. (2021), who provide results based on a detailed age-stratified Susceptible, Exposed, Infectious, Recovered (SEIR) model.

During the COVID infection outbreak, the importance of families in COVID diffusion was stressed by experts and the media. This observation motivates the present effort, which investigates whether family size should have a role in the vaccine prioritization strategy.

This document describes an ABM developed with the intent of analyzing this question. The model is basic and has the essential features needed to investigate the issue.

As highlighted by Squazzoni et al. (2020), a careful investigation of pandemics requires the cooperation of many scientists from different disciplines. To ease this cooperation, and in the interests of transparency (Barton et al. 2020), the code is made publicly available (https://github.com/gfgprojects/abseir_family) to allow further developments and accurate parameter calibration by those who might be interested.

The following part of the document sketches the functioning of the model and provides some considerations on the effects of family size on vaccination strategy.

Brief Model Description

The ABSEIR-family model code is written in Java, taking advantage of the Repast Simphony modeling system (https://repast.github.io/).

Figure 1 gives an overview of the current development state of the model core classes.

Briefly, the code handles the relevant events of a pandemic:

  • the appearance of the first case,
  • the infection diffusion by contacts,
  • the introduction of measures for diffusion limitation such as quarantine,
  • the activation and implementation of the immunization process.

The distinguishing feature of the model is that individuals are grouped in families. This grouping allows considering two different diffusion speeds: fast among family members and slower when contacts involve two individuals from different families.

Figure 1: relationships between the core classes of the ABSEIR-family model and their variables and methods.

It is perhaps worth describing the evolution of an individual’s state to sketch the functioning of the model.

An individual’s dynamics are guided by a variable named infectionAge. In the beginning, all individuals have this variable at zero. At each time step, the program increases the infectionAge of all individuals having a non-zero value of this variable.

When an individual has contact with an infectious individual, s/he can get the infection or not. If infected, the individual enters the latency period, i.e. her/his infectionAge is set to 1 and the variable starts moving ahead with time, but s/he is not yet infectious. Individuals whose infectionAge is greater than the latency period length (ll) become infectious.

At each time step, an infectious individual meets all her/his family members and mof randomly chosen non-family members. S/he passes on the infection with probability pif to family members and pof to non-family members. The infection can be passed on only if the contacted individual’s infectionAge equals zero and s/he is not in quarantine.

The infectious phase ends when the infection is discovered (quarantine) or when the individual recovers, i.e. when the infectionAge is greater than the latency period length plus the infection length parameter (li).

At the present stage of development, the code does not handle adverse post-infection evolution of the virus: all the infected individuals in this model recover. The infectionAge is set to a negative value at recovery because recovereds stay immune for a while (lr). Similarly, vaccination sets the individual’s infectionAge to a (large) negative value (lv).
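To make these state dynamics concrete, here is a minimal Java sketch of the update rule just described. The variable names (infectionAge, ll, li, lr, lv, pif, pof) follow the text; the class structure, method names and the illustrative values for lr, lv and pof are my assumptions, and the actual ABSEIR-family classes on GitHub will differ in detail.

```java
import java.util.List;
import java.util.Random;

// Minimal sketch of an individual's state dynamics as described above:
// infectionAge 0 = susceptible, 1..ll = latent, ll+1..ll+li = infectious,
// negative = temporarily immune (counting up towards 0).
class Individual {
    static final int LL = 3;        // latency length (ll)
    static final int LI = 5;        // infectious period length (li)
    static final int LR = 60;       // immunity length after recovery (lr), assumed value
    static final int LV = 180;      // immunity length after vaccination (lv), assumed value
    static final double PIF = 0.1;  // probability of infecting a family member
    static final double POF = 0.02; // probability of infecting a non-family member

    int infectionAge = 0;
    boolean quarantined = false;
    List<Individual> family;        // the other members of this individual's family

    boolean infectious() {
        return infectionAge > LL && infectionAge <= LL + LI && !quarantined;
    }

    void step(List<Individual> nonFamilyContacts, Random rng) {
        if (infectionAge != 0) infectionAge++;           // age the infection or immunity
        if (infectionAge > LL + LI) infectionAge = -LR;  // recover: immune for lr steps
        if (infectious()) {
            for (Individual m : family) m.expose(PIF, rng);            // fast in-family spread
            for (Individual o : nonFamilyContacts) o.expose(POF, rng); // mof external contacts
        }
    }

    void expose(double p, Random rng) {
        if (infectionAge == 0 && !quarantined && rng.nextDouble() < p)
            infectionAge = 1;                            // enter the latency period
    }

    void vaccinate() {
        if (infectionAge == 0) infectionAge = -LV;       // immune for lv steps
    }
}
```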

At the present state of the pandemic’s evolution, it is perhaps useful to use the model to get insights into how family size could affect the effectiveness of the vaccination process. This will be attempted hereafter.

Highlighting the relevance of family size by an ad-hoc example

The relevance of family size in vaccination strategy can be shown using the following ad-hoc example.

Suppose there are two covid-free villages (say villages A and B) whose health authorities are about to start vaccinations to prevent the disease from spreading.

The villages are identical in all other respects except for the family size distribution. Each village has 50 inhabitants, but village A has 10 families with five members each, while village B has two five-member families and 40 singletons. Five vaccine doses arrive each day in each village.

Some additional extreme assumptions are made to make differences straightforward.

First, healthy family members are infected for sure by a member who contracts the virus. Second, each individual has the same number of contacts (say n) outside the family, and the probability of passing on the virus in external contacts is lower than 1. Symptoms take several days to show up.

Now, each health authority is about to start the vaccination process and has to decide how to employ the available vaccines.

Intuition suggests that village B’s health authority should immunize the large families first. Indeed, if case zero arrives at the end of the second vaccination day, the spread of the disease among the population should be limited, because the virus can then be passed on by external contacts only, and the probability of transmitting the virus in external contacts is lower than within the family.

But should this strategy also be used by village A’s health authority?

To answer this question, we compare the family-based vaccination strategy with a random-based vaccination strategy. Under random-based vaccination in village A, we expect one member of each family to be immunized by the end of the second vaccination day. Under family-based vaccination, two whole families are immunized by the end of the second vaccination day. Now, suppose one of the non-immunized citizens gets the virus at the end of day two. It is easy to verify that there will be one more infected individual under the family-based strategy (all five members of the family) than under the random-based strategy (4 members, because one of them was immunized before). Furthermore, this implies that there will be n additional dangerous external contacts under the family-based strategy.

These observations lead us to conclude that a random vaccination strategy will slow down the infection dynamics in village A while speeding up infections in village B, and the opposite is true for the family-based immunization strategy.
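The village B conclusion can be checked with a small Monte Carlo exercise. The sketch below is an illustration written for this note, not part of the ABSEIR-family code; it assumes, as in the example, that in-family transmission is certain, and it counts only the in-family cases generated by case zero.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Village B: two five-member families (individuals 0-4 and 5-9) plus 40
// singletons. Ten doses are given at random; case zero then strikes a
// random unvaccinated person and infects all unvaccinated family members.
public class VillageB {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int trials = 100_000;
        double totalCases = 0;
        for (int t = 0; t < trials; t++) {
            List<Integer> all = new ArrayList<>();
            for (int i = 0; i < 50; i++) all.add(i);
            Collections.shuffle(all, rng);
            Set<Integer> vaccinated = new HashSet<>(all.subList(0, 10));
            int caseZero;
            do { caseZero = rng.nextInt(50); } while (vaccinated.contains(caseZero));
            if (caseZero < 10) {                  // member of a large family
                int first = caseZero < 5 ? 0 : 5;
                for (int i = first; i < first + 5; i++)
                    if (!vaccinated.contains(i)) totalCases++;
            } else {
                totalCases++;                     // a singleton: case zero only
            }
        }
        System.out.printf("Random strategy, mean in-family cases: %.2f%n", totalCases / trials);
        System.out.println("Family-based strategy: 1 (both large families fully immunized).");
    }
}
```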

Some simulation exercises

In this part of the document, the model described above is used to compare the family-based and random-based vaccination strategies further, against the appearance of a new case (or variant), in a situation similar to that described in the example but with a more realistic setting.

As one can easily imagine, the family size distribution and COVID transmission risk in families are crucial to our simulation exercises. It is therefore important to gather real-world information for these phenomena. Fortunately, recent scientific contributions can help.

Several authors point out that a Poisson distribution is a good statistical model of the family size distribution. This distribution is suitable because it is characterized by a single parameter, its average, but it has the drawback of giving positive probability to the value zero. Recently, Jarosz (2021) confirmed the goodness of the Poisson distribution for modeling family size and showed that shifting it by one unit is a valid way to solve the zero family size problem.

Furthermore, average family sizes data can be easily found using, for example, the OECD family database (http://www.oecd.org/social/family/database.htm).

The current version of the database (updated on 06-12-2016) presents data for 2015 with some exceptions. It shows that the average size of families in OECD countries is 2.46, ranging from Mexico (3.93) to Sweden (1.8).
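A minimal sketch of how family sizes could be initialized from such a one-unit-shifted Poisson distribution is given below: sizes are drawn as 1 + Poisson(afs − 1), so the mean equals afs and a size of zero is impossible. This is illustrative only; as noted later, the simulations actually load fixed distributions (figure 2) rather than sampling at run time, and the Poisson sampler here is the simple Knuth algorithm.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Draw family sizes from a shifted Poisson until the population reaches
// roughly ni = 20000 individuals, for a given average family size afs.
public class FamilySizes {
    // Knuth's algorithm for Poisson-distributed random integers.
    static int poisson(double lambda, Random rng) {
        double threshold = Math.exp(-lambda), p = 1.0;
        int k = 0;
        do { k++; p *= rng.nextDouble(); } while (p > threshold);
        return k - 1;
    }

    public static void main(String[] args) {
        double afs = 2.5;                 // average family size (afs)
        Random rng = new Random(1);
        List<Integer> sizes = new ArrayList<>();
        int individuals = 0;
        while (individuals < 20000) {
            int size = 1 + poisson(afs - 1.0, rng); // shifted: size zero is impossible
            sizes.add(size);
            individuals += size;
        }
        System.out.printf("families %d, individuals %d, mean size %.2f%n",
                sizes.size(), individuals, (double) individuals / sizes.size());
    }
}
```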

The result in Metlay et al. (2021) guides the choice of the within-family infection parameter. They provide evidence of an overall household infection risk of 10.1%.

The simulation exercises consist of sensitivity analyses of the parameters with respect to the benchmark parameter set reported below.

The simulation is initialized by loading the family size distribution. Two alternative distributions are used, each tuned to obtain a system with a total number of individuals close to 20000. The two distributions are characterized by different average family sizes (afs) and are shown in figure 2.

Figure 2: two family size distributions used to initialize the simulation. The figures by the dots give the frequency of the corresponding size. Black squares relate to the distribution with an average of 2.5; red circles relate to the distribution with an average of 3.5.

The description of the vaccination strategy gives an opportunity to introduce the other relevant parameters. The immunization center is endowed with nv doses of vaccine at each time step, starting from time tv. At time t0, the state of one individual is changed from susceptible to infected. This subject (case zero) is taken from a family having three susceptibles among its members.

Case zero undergoes the same process as all other following infected individuals described above.

The relevant parameters of the simulations are reported in table 1.

| var | description | values | reference |
|---|---|---|---|
| ni | number of individuals | ≅20000 | |
| afs | average family size | 2.5; 3.5 | OECD |
| nv | number of vaccine doses available at each time step | 50; 100; 150 | |
| tv | vaccination starting time | 1 | |
| t0 | case zero appearance time | 10 | |
| ll | length of latency | 3 | Bubar et al. 2021 |
| li | length of infectious period | 5 | Bubar et al. 2021 |
| pif | probability of infecting a family member | 0.1 | Metlay et al. 2021 |
| pof | probability of infecting a non-family individual | 0.01; 0.02; 0.03 | |
| mof | number of non-family contacts of an infectious individual | 10 | |

Table 1: relevant parameters of the model.

We are now going to discuss the results of our simulation exercises. We focus particularly on the number of people infected up to a given point in time.

Due to the presence of random elements, each run has a different trajectory. We limit these effects as much as possible to allow ceteris paribus comparisons. For example, we keep the family size distribution equal across runs by loading the distributions displayed in figure 2 instead of using the run-time random number generator. Again, we set the number of non-family contacts (mof) equal for all agents, although the code could set it randomly at each time step. Despite these reductions of randomness, significant differences in the dynamics remain within the same parametrization because of randomness in the network of contacts.

To allow comparisons among parametrizations despite these differing evolutions, we use the cross-section distributions of the total number of infected at the end of the infection process (i.e. time 200).

Figure 3 reports the empirical cumulative distribution functions (ecdfs) for several parametrizations. To make the figure easy to read, we arrange the charts as in a plane, with average family size (afs) on the abscissa and the number of available vaccines (nv) on the ordinate. From above, we know that two values of afs (2.5 and 3.5) and three values of nv (50, 100 and 150) are considered. Therefore figure 3 is made up of 6 charts.

Each chart reports the ecdfs corresponding to the three different pof levels given in table 1. In particular, circles denote ecdfs for pof = 0.01, squares for pof = 0.02 and triangles for pof = 0.03. In the end, choosing a parameter triplet (afs, nv, pof) identifies two ecdfs: the red one is for the random-based, and the black one for the family-based, vaccination strategy. The family-based vaccination strategy prioritizes families with the highest number of members not yet infected.
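For clarity, each curve in figure 3 can be built as in the short sketch below: collect the total number of infected at time 200 from the 100 runs of one parametrization, sort the values, and print the cumulative proportion of runs at or below each value. The array here is a placeholder for the simulation output.

```java
import java.util.Arrays;

// Build the empirical cumulative distribution function (ecdf) of the
// total number of infected at time 200 over the runs of one parametrization.
public class Ecdf {
    public static void main(String[] args) {
        int[] totalInfectedAt200 = new int[100]; // one entry per random seed;
        // ... to be filled from the simulation runs ...
        int[] sorted = totalInfectedAt200.clone();
        Arrays.sort(sorted);
        for (int i = 0; i < sorted.length; i++) {
            double cumulative = (i + 1) / (double) sorted.length; // P(X <= sorted[i])
            System.out.printf("%d\t%.2f%n", sorted[i], cumulative);
        }
    }
}
```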

Figure 3 shows mixed results: the random-based vaccination strategy outperforms the family-based one (the red line is above the black one) for some parameter combinations, while the reverse holds for others. In particular, the random-based strategy tends to dominate the family-based one in the case of larger families (afs = 3.5) at low and high vaccination levels (nv = 50 and 150). The opposite is true for smaller families at the same vaccination levels. The intermediate level of vaccination provides exceptions.

Figure 3: empirical cumulative distribution functions for several parametrizations. Each ecdf is built by taking the number of infected people at period 200 in 100 runs with different random seeds for each parametrization.

It is perhaps useful to highlight how, in the model, the family-based vaccination strategy stops the diffusion of a new wave or variant with significant probability for the smaller average family size at low and high vaccination levels (bottom-left and top-left charts) and for the larger average family size at the middle level of vaccination (middle-right chart).

A concluding note

At present, the model is very simple and can be improved in several directions. The most useful would probably be the inclusion of family-specific information. Setting up the model with additional information on each family member’s age or health state would allow the “universal mixing assumption” (Watts et al. 2020) currently in the model to be overcome. Furthermore, additional vaccination strategy prioritizations based on multiple criteria (such as vaccinating the families of the most fragile or elderly) could be compared.

Initializing the model with census data of a local community could give a chance to analyze a more realistic setting in the wake of Pescarmona et al. (2020) and be more useful and understandable to (local) policy makers (Edmonds, 2020).

Developing the model to provide estimations of hospitalization and mortality is another needed step towards a more sound comparison of vaccination strategies.

Vaccinating by families could balance direct protection (vaccinating the highest-risk individuals) and indirect protection, i.e., limiting the probability that the virus reaches the most fragile by vaccinating people with many contacts. It could also have positive economic effects, relaunching, for example, family tourism. However, it cannot be implemented at the risk of worsening the pandemic.

The present text aims only at posing a question. Further assessments following Squazzoni et al.’s (2020) recommendations are needed.

References

Barton, C.M. et al. (2020) Call for transparency of COVID-19 models. Science, 368(6490), 482-483. doi:10.1126/science.abb8637

Bubar, K.M. et al. (2021) Model-informed COVID-19 vaccine prioritization strategies by age and serostatus. Science 371, 916–921. doi:10.1126/science.abe6959

Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/

Jarosz, B. (2021) Poisson Distribution: A Model for Estimating Households by Household Size. Population Research and Policy Review, 40, 149–162. doi:10.1007/s11113-020-09575-x

Metlay, J.P., Haas, J.S., Soltoff, A.E. and Armstrong, K.A. (2021) Household Transmission of SARS-CoV-2. JAMA Netw Open, 4(2):e210304. doi:10.1001/jamanetworkopen.2021.0304

Pescarmona, G., Terna, P., Acquadro, A., Pescarmona, P., Russo, G., and Terna, S. (2020) How Can ABM Models Become Part of the Policy-Making Process in Times of Emergencies – The S.I.S.A.R. Epidemic Model. Review of Artificial Societies and Social Simulation, 20th Oct 2020. https://rofasss.org/2020/10/20/sisar/

Watts, C.J., Gilbert, N., Robertson, D., Droy, L.T., Ladley, D and Chattoe-Brown, E. (2020) The role of population scale in compartmental models of COVID-19 transmission. Review of Artificial Societies and Social Simulation, 14th August 2020. https://rofasss.org/2020/08/14/role-population-scale/

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Giulioni, G. (2021) Should the family size be used in COVID-19 vaccine prioritization strategy to prevent variants diffusion? A first investigation using a basic ABM. Review of Artificial Societies and Social Simulation, 15th April 2021. https://rofasss.org/2021/04/15/famsize/


 

The Policy Context of Covid19 Agent-Based Modelling

By Edmund Chattoe-Brown

(A contribution to the: JASSS-Covid19-Thread)

In the recent discussions about the role of ABM and COVID, there seems to be an emphasis on the purely technical dimensions of modelling. This obviously involves us “playing to our strengths”, but unfortunately it may reduce the effectiveness of the policy contributions we could make. Here are three contextual aspects of policy offered as a contrast/corrective.

What is “Good” Policy?

Obviously from a modelling perspective good policy involves achieving stated goals. So a model that suggests a lower death rate (or less taxing of critical care facilities) under one intervention rather than another is a potential argument for that intervention. (Though of course how forceful the argument is depends on the quality of the model.) But the problem is that policy is predominantly a political and not a technical process (related arguments are made by Edmonds 2020). The actual goals by which a policy is evaluated may not be limited to the obvious technical ones (even if that is what we hear most about in the public sphere) and, most problematically, there may be goals which policy makers are unwilling to disclose. Since we do not know what these goals are, we cannot tell whether their ends are legitimate (having to negotiate privately with the powerful to achieve anything) or less so (getting re-elected as an end in itself).

Of course, by its nature (being based on both power and secrecy), this problem may be unfixable, but even awareness of it may change our modelling perspective in useful ways. Firstly, when academic advice is accused of irrelevance, the academics can only ever be partly to blame. You can only design good policy to the extent that the policy maker is willing to tell you the full evaluation function (to the extent that they know it, of course). Obviously, if policy is being measured by things you can’t know about, your advice is at risk of being of limited value. Secondly, with this in mind, we may be able to gain some insight into the hidden agenda of policy by looking at what kinds of suggestions tend to be accepted and rejected. Thirdly, once we recognise that there may be “unknown unknowns”, we can start to conjecture intelligently about what these could be and take some account of them in our modelling strategies. For example, how many epidemic models consider the financial costs of interventions even approximately? Is the idea that we can and will afford whatever it takes to reduce deaths a blind spot of the “medical model”?

When and How to Intervene

There used to be an (actually rather odd) saying: “You can’t get a baby in a month by making nine women pregnant”. There has been a huge upsurge in interest regarding modelling and its relationship to policy since the start of the COVID crisis (of which this theme is just one example), but realising the value of this interest currently faces significant practical problems. Data collection is even harder than usual (as is scholarship in general), there is a limit to how fast good research can ever be done, peer review takes time and so on. The question here is whether any amount of rushing around at the present moment can compensate for activities neglected when scholarship was easier and had more time (an argument also supported by Bithell 2018). The classic example is the muttering in the ABM community about the Ferguson model being many thousands of lines of undocumented C code. Now we are in a crisis, even making the model available was a big ask, let alone making it easier to read so that people might “heckle” it. But what stopped it being available, documented, externally validated and so on before COVID? What do we need to do so that next time there is a pandemic crisis, which there surely will be, “we” (the modelling community very broadly defined) are able to offer the government a “ready” model that has the best features of various modelling techniques, evidence of unfudgeable quality against data, relevant policy scenarios and so on? (Specifically, how will ABM make sure it deserves to play a fit part in this effort?) Apart from the models themselves, what infrastructures, modelling practices, publishing requirements and so on do we need to set up and get working well while we have the time? In practice, given the challenges of making effective contributions right now (and the proliferation of research that has been made available without time for peer review may be actively harmful), this perspective may be the most important thing we can realistically carry into the “post lockdown” world.

What Happens Afterwards?

ABM has taken such a long time to “get to” policy based on data that looking further than the giving of such advice simply seems to have been beyond us. But since policy is what actually happens, we have a serious problem with counterfactuals. If the government decides to “flatten the curve” rather than seek “herd immunity” then we know how the policy implemented relates to the model “findings” (for good or ill) but not how the policy that was not implemented does. Perhaps the outturn of the policy that looked worse in the model would actually have been better had it been implemented?

Unfortunately (this is not a typo), we are about to have an unprecedentedly large social data set of comparative experiments in the nature and timing of epidemiological interventions, but ABM needs to be ready and willing to engage with this data. I think that ABM probably has a unique contribution to make in “endogenising” the effects of policy implementation and compliance (rather than seeing these, from a “model fitting” perspective, as structural changes to parameter values) but, to make this work, we need to show much more interest in data than we have to date.

In 1971, Dutton and Starbuck, in a worryingly neglected article (cited only once in JASSS since 1998, and even then not in respect of model empirics), reported that 81% of the models they surveyed up to 1969 could not achieve even qualitative measurement in both calibration and validation (with only 4% achieving quantitative measurement in both). As a very rough comparison (but still the best available), Angus and Hassani-Mahmooei (2015) showed that just 13% of articles published in JASSS between 2010 and 2012 displayed “results elements” both from the simulation and using empirical material (though the reader cannot tell whether these are qualitative or quantitative elements, or whether their joint presence involves comparison as ABM methodology would indicate). It would be hard to make the case that the situation with respect to ABM and data has improved significantly in four decades, and it is at least possible that it has got worse!

For the purposes of policy making (in the light of the comments above), what matters of course is not whether the ABM community believes that models without data continue to make a useful contribution but whether policy makers do.

References

Angus, S. D. and Hassani-Mahmooei, B. (2015) “Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012, Journal of Artificial Societies and Social Simulation, 18(4), 16. doi:10.18564/jasss.2952

Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb

Dutton, J. M. and Starbuck, W. H. (1971) Computer Simulation Models of Human Behavior: A History of an Intellectual Technology. IEEE Transactions on Systems, Man, and Cybernetics, SMC-1(2), 128–171. doi:10.1109/tsmc.1971.4308269

Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/


Chattoe-Brown, E. (2020) The Policy Context of Covid19 Agent-Based Modelling. Review of Artificial Societies and Social Simulation, 4th May 2020. https://rofasss.org/2020/05/04/policy-context/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

What more is needed for Democratically Accountable Modelling?

By Bruce Edmonds

(A contribution to the: JASSS-Covid19-Thread)

In the context of the Covid19 outbreak, the paper by Squazzoni et al. (2020) argued for the importance of making complex simulation models open (a call reiterated in Barton et al. 2020) and for relevant data to be made available to modellers. These are important steps but, I argue, more is needed.

The Central Dilemma

The crux of the dilemma is as follows. Complex and urgent situations (such as the Covid19 pandemic) are beyond the human mind to encompass – there are just too many possible interactions and complexities. For this reason one needs complex models, to leverage some understanding of the situation as a guide for what to do. We cannot directly understand the situation, but we can understand some of what a complex model tells us about it. The difficulty is that such models are, themselves, complex and difficult to understand. It is easy to deceive oneself using such a model. Professional modellers only just manage to get some understanding of such models (and then, usually, only with help and critique from many other modellers, and having worked on them for some time: Edmonds 2020) – politicians and the public have no chance of doing so. Given this situation, any decision-makers or policy actors are in an invidious position: whether to trust what the expert modellers say if it contradicts their own judgement. They will be criticised either way if, in hindsight, that decision appears to have been wrong. Even if the advice supports their judgement there is the danger of giving false confidence.

What options does such a policy maker have? In authoritarian or secretive states there is no problem (for the policy makers) – they can listen to who they like (hiring or firing advisers until they get advice they are satisfied with), and then either claim credit if it turned out to be right or blame the advisers if it was not. However, such decisions are very often not value-free technocratic decisions, but ones that involve complex trade-offs that affect people’s lives. In these cases the democratic process is important for getting good (or at least accountable) decisions. However, democratic debate and scientific rigour often do not mix well [note 1].

A Cautionary Tale

As discussed in Aodha & Edmonds (2017), scientific modelling can make things worse, as in the case of the North Atlantic Cod Fisheries Collapse. In this case, the modellers became enmeshed within the standards and wishes of those managing the situation and ended up confirming their wishful thinking. Technocratising the decision about how much it was safe to catch had the effect of narrowing the debate down to particular measurement and modelling processes (which turned out to be gravely mistaken). In doing so the modellers contributed to the collapse of the industry, with severe social and ecological consequences.

What to do?

How to best interface between scientific and policy processes is not clear, however some directions are becoming apparent.

  • That the process of developing and giving advice to policy actors should become more transparent, including who is giving advice and on what basis. In particular, any reservations or caveats that the experts add should be open to scrutiny so the line between advice (by the experts) and decision-making (by the politicians) is clearer.
  • That such experts are careful not to over-state or hype their own results, for example by implying that their model can predict (or forecast) the future of complex situations and so anticipate the effects of policy before implementation (de Matos Fernandes and Keijzer 2020). Often a reliable assessment of results only occurs after a period of academic scrutiny and debate.
  • Policy actors need to learn a little bit about modelling, in particular when and how modelling can be reliably used. This is discussed in (Government Office for Science 2018, Calder et al. 2018) which also includes a very useful checklist for policy actors who deal with modellers.
  • That the public learn some maturity about the uncertainties in scientific debate and conclusions. Preliminary results and critiques tend to be jumped on too early to support one side within a polarised debate, or models are rejected simply on the grounds that they are not 100% certain. We need to collectively develop ways of facing and living with uncertainty.
  • That the decision-making process is kept as open to input as possible. The modelling (and its limitations) should not be used as an excuse to limit the voices that are heard, or to narrow the debate to a purely technical one, excluding values (Aodha & Edmonds 2017).
  • That public funding bodies and journals should insist on researchers making their full code and documentation available to others for scrutiny, checking and further development (readers can help by signing the Open Modelling Foundation’s open letter and the campaign for Democratically Accountable Modelling’s manifesto).

Some Relevant Resources

  • CoMSeS.net — a collection of resources for computational model-based science, including a platform for publicly sharing simulation model code and documentation and forums for discussion of relevant issues (including one for covid19 models)
  • The Open Modelling Foundation — an international open science community that works to enable the next generation of modelling of human and natural systems, including its standards and methodology.
  • The European Social Simulation Association — which is planning to launch some initiatives to encourage better modelling standards and facilitate access to data.
  • The Campaign for Democratic Modelling — which campaigns concerning the issues described in this article.

Notes

Note 1: As an example of this, see accounts of the relationship between the UK scientific advisory committees and the Government in the Financial Times and BuzzFeed.

References

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822. (see also http://cfpm.org/discussionpapers/236)

Barton et al. (2020) Call for transparency of COVID-19 models. Science, Vol. 368(6490), 482-483. doi:10.1126/science.abb8637

Calder, M., Craig, C., Culley, D., de Cani, R., Donnelly, C.A., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N., Hargrove, C., Hinds, D., Lane, D.C., Mitchell, D., Pavey, G., Robertson, D., Rosewell, B., Sherwin, S., Walport, M. & Wilson, A. (2018) Computational modelling for decision-making: where, why, what, who and how. Royal Society Open Science.

Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/

de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Government Office for Science (2018) Computational Modelling: Technological Futures. https://www.gov.uk/government/publications/computational-modelling-blackett-review

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Understanding the current COVID-19 epidemic: one question, one model

By the CoVprehension Collective

(A contribution to the: JASSS-Covid19-Thread)

On the evening of 16th March 2020, the French president, Emmanuel Macron, announced the start of a national lockdown, for a period of 15 days. It would be effective from noon the next day (17th March). On the 18th March 2020 at 01:11 pm, the first email circulated in the MicMac team, who had been working on the micro-macro modelling of the spread of a disease in a transportation network for a few years. This email was the start of CoVprehension. After about a week of intense activity, the website was launched, with three questions answered. A month later, there were about fifteen questions on the website, and the group was composed of nearly thirty members from French research institutions, covering a varied pool of disciplines, all contributing as volunteers from their confined residences.

CoVprehension in principles

This rapid development stems from a very singular context. It is tricky to analyse given that the COVID-19 crisis is still developing. However, we can highlight a few fundamental principles guiding the project.

The first principle is undeniably a principle of action: to become actors in the situation ourselves first, but also to extend this invitation to readers of the website, allowing them to run the simulations and to change their parameters, and more broadly giving them suggestions on how to link their actions to this global phenomenon which is hard to comprehend. This empowerment also touches upon principles of social justice and, longer term, democracy in the face of this health crisis. By accompanying the process of social awareness, we aim to guide the audience towards a free and informed consent (cf. code of public health) in order to confront the disease. Our first principle is spelled out on the CoVprehension website in the form of a list of objectives that the CoVprehension collective set themselves:

  • Comprehension (the propagation of the virus, the actions put in place)
  • Objectification (giving a more concrete shape to this event which is bigger than us and can be overwhelming)
  • Visualisation (showing the mechanisms at play)
  • Identification (the essential principles and actions to put in place)
  • Do something (overcoming fears and anxieties to become actors in the epidemic)

The second founding principle is that of an interdisciplinary scientific collective formed on a voluntary basis. CoVprehension is self-organised and rests on three pillars: volunteering, collaborative work and the will to be useful during the crisis by offering a space for information, reflection and interaction with a large audience.

The third principle is agility and reactivity. The main idea of the project is to answer questions that people ask, with short posts based on a model or data analysis. This can only be done if the delay between question and answer remains short, which is a real challenge given the complexity of the subject, the rate at which scientific literature has been produced since the beginning of the crisis, and the large number of unknowns and uncertainties which characterise it.

The fourth principle, finally, is the autonomy of the groups which form to answer the questions. This allows for a multiplicity of perspectives and points of view, sometimes divergent. This necessity draws on the acknowledgement by the European simulation community that a lack of pluralism is even more harmful to public decision-making support than a lack of transparency.

A collaborative organisation and an interactive website

The four principles have led us, quite naturally, to favour an organisation which exploits short and frequent feedback loops and relies on suitable tools. The questions asked online through a Framasoft form are forwarded to all CoVprehension members, while a moderator is in charge of replying to them quickly and personally. Each question is integrated into a Trello management board, which allows each member of the collective to pick the questions they want to contribute to and to follow their progression until publication. The collaboration and debate on each of the questions happen on the VoIP application Discord. Model prototypes are mostly developed on the NetLogo platform (with some JavaScript exceptions). Finally, the whole project and website are hosted on GitHub.

The website itself (https://covprehension.org/en) is freely accessible online. Besides the posts answering questions, it contains a simulator to rerun and reproduce the simulations showcased in the posts, a page with scientific resources on the COVID-19 epidemic, a page presenting the project members and a link to the form allowing anyone to ask the collective a question.

On the 28th April 2020, the collective counted 29 members (including 10 women): medical doctors, researchers, engineers and specialists in the fields of computer science, geography, epidemiology, mathematics, economics, data analysis, medicine, architecture and digital media production. The professional statuses of the team members vary (from PhD student to full professor, from intern to engineer, from lecturer to freelancer) while their skills complement each other (although a majority of them are complex systems modellers). The collective effort enables CoVprehension to scale up on information collection, sharing and updating, a process also fuelled by the debates that take place when small teams first take on a question. Such scaling up would otherwise only be possible in large epidemiology laboratories with massive funding. To increase visibility, the content of the website, initially all in French, is being progressively translated into English as new questions are published.

Simple simulation models

When a question requires a model, especially so for the first questions, our choice has been to build simple models (cf. Question 0). Indeed, the objective of CoVprehension models is not to predict. It is rather to describe, to explain and to illustrate some aspects of the COVID-19 epidemic and its consequences for the population. KISS models (“Keep It Simple, Stupid!”; see Edmonds & Moss 2004 on the opposition between simple and “descriptive” models) seem better suited to our project. They can unveil broad tendencies and help develop intuitions about potential strategies to deal with the crisis, which can then also be shared with a broad audience.

By choosing a KISS posture, we implicitly reject KIDS postures in such crisis circumstances. Indeed, if the conditions and processes modelled were better informed and known, we could simulate a precise dynamic and generate a series of predictions and forecasts. This is what N. Ferguson’s team did, for instance, with a model initially developed for the H5N1 flu in Asia (Ferguson et al., 2005). This model was used heavily to inform public decision-making in the first days of the epidemic in the United Kingdom. Building and calibrating such models takes an awfully long time (Ferguson’s project dates back to 2005) and requires teams and recurring funding which are almost impossible to obtain nowadays for most teams. At the moment, we think that the uncertainty is too great, and that the crisis and the questions that people have do not always necessitate the modelling of complex processes. A large part of the space of social questions raised can be answered without describing the mechanisms in so much detail. It is possible that this situation will change as we get information from other scientific disciplines. For now, demonstrating that even simple models are very sensitive to many elements which remain uncertain shows that the scientific discourse could gain by remaining humble: the website reveals how little we know about the future consequences of the epidemic and of the political decisions made to tackle it.
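
To give a concrete feel for what such a KISS model can look like, here is a minimal sketch in Python (CoVprehension’s own prototypes are mostly in NetLogo; all rules and parameter values below are invented for illustration, not taken from the website’s models). A few dozen lines are enough to exhibit the kind of logically expected effect discussed below, for instance that reducing the density of interactions flattens the infection peak.

```python
import random

# A deliberately KISS epidemic sketch: homogeneous agents moving on a
# grid, no demographics, no networks. All parameters are illustrative.

def peak_infections(density, size=50, p_infect=0.25, infectious_days=14, steps=200):
    """Agents wander on a size x size grid; infection can pass between
    agents sharing a cell. Returns the epidemic peak (max simultaneous cases)."""
    n = int(density * size * size)
    agents = [{"x": random.randrange(size), "y": random.randrange(size), "sick": 0}
              for _ in range(n)]
    agents[0]["sick"] = 1  # a single initial case
    peak = 1
    for _ in range(steps):
        cells = {}
        for a in agents:
            a["x"] = (a["x"] + random.choice((-1, 0, 1))) % size
            a["y"] = (a["y"] + random.choice((-1, 0, 1))) % size
            cells.setdefault((a["x"], a["y"]), []).append(a)
        for group in cells.values():
            if any(0 < a["sick"] <= infectious_days for a in group):
                for a in group:
                    if a["sick"] == 0 and random.random() < p_infect:
                        a["sick"] = 1
        for a in agents:
            if a["sick"]:
                a["sick"] += 1  # recovered once sick > infectious_days
        peak = max(peak, sum(0 < a["sick"] <= infectious_days for a in agents))
    return peak

# Lockdown-like reductions in density flatten the peak:
for density in (0.30, 0.15, 0.05):
    print(f"density {density}: peak {peak_infections(density)}")
```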

Feedback on the questions received and answered

At the end of April, twenty-seven questions had been asked of the CoVprehension collective through the online form. Seven of them are not really questions (they are rather remarks and comments from people supporting the initiative). Some questions happen to have been asked by colleagues and relatives. The intended outreach has not been fully realised, since the website seems to reach people who are already capable of looking for information on the internet. This was to be expected given the circumstances. Everyone who has done some scientific outreach knows how hard it is to reach populations who have not been made aware of, or interested in, scientific facts in the first place. Some successful initiatives (like “les petits débrouillards” or “la main à la pâte” in France) spread scientific knowledge related to recent publications in collaboration with researchers, but they are much better equipped for that (since, unlike us, they do not rely mostly on institutional portals). This large selection bias in our audience (almost impossible to remove, unless we create some specific buzz… which we would then have to handle in terms of new question influx, which is not possible at the moment given the size of the collective and its organisation) means that our website has been protected from trolling. However, we can expect that it might be used within educational programmes, for example, where STEM teachers could have students use the various simulators in a question-and-answer type of game.

Figure 1 shows that the majority of questions are taken on by small interdisciplinary teams of two or three members. The most frequent collaborations are between geographers and computer scientists. They are often joined by epidemiologists and mathematicians, and more recently by economists. Most topics require the team to build and analyse a simulation model in order to answer the question. The timing of team formation reflects the arrival of new team members in the early days of the project, leading to a large number of questions being tackled simultaneously. Since April, the rhythm has slowed, reflecting the increasing complexity of questions, models and answers, but also the marginal “cost” of this investment with respect to the other projects and responsibilities of the researchers involved.


Figure 1. Visualisation of the questions tackled by CoVprehension.

Initially, the website prioritised questions on simulation and aggregation effects specifically connected with models of diffusion. For instance, the first questions aimed essentially at showing the most tautological results: with simple interaction rules, we illustrated logically expected effects. These results are nevertheless interesting because, while they are trivial to simulation practitioners, they also serve to convince lay readers that they are able to follow the logic:

  • Reducing the density of interactions reduces the spread of the virus, and therefore the lockdown may well alter the infection curve (cf. Question 2 and Question 3).
  • By simply adding a variable for the number of hospital beds, we can visualise the impact of lockdown on hospital congestion (cf. Question 7).

More elaborate questions have also been tackled (helping to rationalise the debates):

  • Some alternative policies have been highlighted (the Swedish case: Question 13; the deconfinement: Question 9);
  • Some indicators with contradicting impacts have been discussed, which shows the complexity of political decisions and leads readers to question the relevance of some of these indicators (cf. Question 6);
  • The hypotheses (behavioural ones in particular) have been largely discussed, which highlights the way in which the model deviates from what it represents in a simplified way (cf. Question 15).

More than half of the questions asked could not be answered through modelling. In the first phase of the project, we personally replied to these questions and directed the person towards robust scientific websites or articles where their question could be better answered. The current evolution of the project is more fundamental: new researchers from complementary disciplines have shown interest in the work done so far and are now integrated into the team (including, for instance, two medical doctors operating in COVID-19 centres). This will broaden the scope of questions tackled by the team from now on.

Our work fits into a type of education in critical thinking about formal models, one that has long been recognised as necessary to a technical democracy (Stengers, 2017). At this point, the website can be considered both as a result in itself and as a pilot that could serve as a model for further initiatives.

Conclusion

Feedback on the CoVprehension project has mostly been positive, but it is not exempt from limits and weaknesses. Firstly, the necessity of a prompt response has been detrimental to our capacity to fully explore different models, to evaluate their robustness and to look for unexpected results. Model validation is unglamorous, slow and hard to communicate. It is nevertheless crucial when assessing the credibility to be attached to models and results. We are now trying to explore our models in parallel. Secondly, the website may suggest a homogeneity of perspectives and a lack of debate regarding how questions are to be answered. These debates do take place during the assessment of questions but so far remain hidden from readers. They show indirectly in the way some themes appear in different answers, treated from different angles by different teams (for example the lockdown, treated in Questions 6, 7, 9 and 14). We are considering the possibility of publishing alternative answers to a given question in order to show this possible divergence. Finally, the project is facing a significant challenge: that of continuing its existence alongside its members’ other activities, with the number of members increasing. The efforts in management, research, editing, publishing and translation have to be maintained while the transaction costs go up as the size and diversity of the collective increase, as the debates become more and more specific and happen on different platforms… and while new questions keep arriving!

References

Edmonds, B., & Moss, S. (2004). From KISS to KIDS–an ‘anti-simplistic’ modelling approach. In International workshop on multi-agent systems and agent-based simulation (pp. 130-144). Springer, Berlin, Heidelberg. doi:10.1007/978-3-540-32243-6_11

Ferguson, N. M., Cummings, D. A., Cauchemez, S., Fraser, C., Riley, S., Meeyai, A. & Burke, D. S. (2005). Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature, 437(7056), 209-214. doi:10.1038/nature04017

Stengers I. (2017). Civiliser la modernité ? Whitehead et les ruminations du sens commun, Dijon, Les presses du réel. https://www.lespressesdureel.com/EN/ouvrage.php?id=3497


the CoVprehension Collective (2020) Understanding the current COVID-19 epidemic: one question, one model. Review of Artificial Societies and Social Simulation, 30th April 2020. https://rofasss.org/2020/04/30/covprehension/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

What can and cannot be feasibly modelled of the Covid-19 Pandemic

By Nick Gotts

(A contribution to the: JASSS-Covid19-Thread)

The place of modelling in informing policy has been highlighted by the Covid-19 pandemic. In the UK, a specific individual-based epidemiological model – that developed by Neil Ferguson of Imperial College London – has been credited with the government’s U-turn from pursuing a policy of building up “herd immunity” (allowing the SARS-CoV-2 virus to spread through the population in order to avoid a possible “second wave” next winter, while trying to limit the speed of spread so as to avoid overwhelming medical facilities, and to shield the most vulnerable) to a “lockdown” imposed in order to minimise the number of people infected. Ferguson’s model reportedly indicated several hundred thousand deaths if the original policy were followed, and this was judged unacceptable.

I do not doubt that the reversal of policy was correct – indeed, that the original policy should never have been considered – one prominent epidemiologist said he thought the report of it was “satire” when he first heard it (Hanage 2020). As Hanage says: “Vulnerable people should not be exposed to Covid-19 right now in the service of a hypothetical future”. But it has also been reported (Reynolds 2020) that Ferguson’s model is a rapid modification of one he built to study possible policy responses to a hypothetical influenza pandemic (Ferguson et al. 2006); and that (Ferguson himself says) this model consists of “thousands of lines of undocumented C”. That major policy decisions should be made on such a basis is both wrong in itself, and threatens to bring scientific modelling into disrepute – indeed, I have already seen the justified questioning of the UK government’s reliance on modelling used by climate change denialists in their ceaseless quest to attack climate science.

What can social simulation contribute in the Covid-19 crisis? I suggest that attempts to model the pandemic as a whole, or even in individual countries, are fundamentally misplaced at this stage: too little is known about the behaviour of the virus, and governments need to take decisions on a timescale that simply does not allow for responsible modelling practice. Where social simulation might be of immediate use is in relation to the local application of policies already decided on. To give one example, supermarkets in the UK (and I assume, elsewhere) are now limiting the number of shoppers in their stores at any one time, in an effort to apply the guidelines on maintaining physical distance between individuals from different households. But how many people should be permitted in a given store? Experience from traffic models suggests there may well be a critical point at which it rather suddenly becomes impossible to maintain distance as the number of shoppers increases – but where does it lie for a particular store? Could the goods on sale be rearranged in ways that allow larger numbers – for example, by distributing items in high demand across two or more aisles? Supermarkets collect a lot of information about what is bought, and which items tend to be bought together – could they shorten individual shoppers’ time in the store by improving their signage? (Under normal circumstances, of course, they are likely to want to retain shoppers as long as possible, and send them down as many aisles as possible, to encourage impulse buys.)

Agents in such a model could be assigned a list of desired purchases, speed of movement and of collecting items from shelves, and constraints on how close they come to other shoppers – probably with some individual variation. I would be interested to learn if any modelling teams have approached supermarket chains (or vice versa) with a proposal for such a model, which should be readily adaptable to different stores. Other possibilities include models of how police should be distributed over an area to best ensure they will see (and be seen by) individuals or groups disregarding constraints on gathering in groups, and of the “contagiousness” of such behaviour – which, unlike actual Covid-19 infection events, is readily observable. Social simulators, in summary, should look for things they can reasonably hope to do quickly and in conjunction with organisations that have or can readily collect the required data, not try to do what is way beyond what is possible in the time available.
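As a sketch of what such a model might look like (a toy Python version under invented assumptions about the floor plan, walking speed and the 2 m rule; a real study would of course use the store’s actual layout and basket data), one can already look for the suspected critical occupancy by sweeping the number of shoppers permitted:

```python
import math
import random

# Toy sketch of the store-occupancy question: layout, speeds and the
# 2 m distancing rule are invented assumptions for illustration only.

def distancing_failure_rate(n_shoppers, width=30.0, depth=20.0, steps=600):
    """Shoppers wander a width x depth (metres) store collecting items;
    returns the fraction of one-second steps in which at least one pair
    of shoppers is closer than 2 metres."""
    def new_shopper():
        return {"x": random.uniform(0, width), "y": random.uniform(0, depth),
                "items": random.randint(5, 25)}  # list of desired purchases
    shoppers = [new_shopper() for _ in range(n_shoppers)]
    violations = 0
    for _ in range(steps):
        for s in shoppers:
            # roughly 1 m per step, clamped to the store walls
            s["x"] = min(width, max(0.0, s["x"] + random.uniform(-1, 1)))
            s["y"] = min(depth, max(0.0, s["y"] + random.uniform(-1, 1)))
            if random.random() < 0.05:  # occasionally collect an item
                s["items"] -= 1
            if s["items"] <= 0:  # finished: leaves, a queued shopper enters
                s["x"], s["y"] = 0.0, 0.0
                s["items"] = random.randint(5, 25)
        if any(math.hypot(a["x"] - b["x"], a["y"] - b["y"]) < 2.0
               for i, a in enumerate(shoppers) for b in shoppers[i + 1:]):
            violations += 1
    return violations / steps

# Sweep the permitted occupancy to look for the suspected critical point.
for n in (5, 10, 20, 40, 80):
    print(f"{n} shoppers: distancing fails in {distancing_failure_rate(n):.0%} of steps")
```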

References

Ferguson, N. M., Cummings, D. A., Fraser, C., Cajka, J. C., Cooley, P. C., & Burke, D. S. (2006). Strategies for mitigating an influenza pandemic. Nature, 442(7101), 448-452. doi:10.1038/nature04795

Hanage, W. (2020) I’m an epidemiologist. When I heard about Britain’s ‘herd immunity’ coronavirus plan, I thought it was satire. The Guardian, 2020-03-15. https://www.theguardian.com/commentisfree/2020/mar/15/epidemiologist-britain-herd-immunity-coronavirus-covid-19

Reynolds, C. (2020) Big Tech Fights Back: From Pandemic Simulation Code, to Immune Response. Computer Business Review, 2020-03-15. https://www.cbronline.com/news/pandemic-simulation-code


Gotts, N. (2020) What can and cannot be feasibly modelled of the Covid-19 Pandemic. Review of Artificial Societies and Social Simulation, 29th April 2020. https://rofasss.org/2020/04/29/feasibility/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

The Danger of too much Compassion – how modellers can easily deceive themselves

By Andreas Tolk

(A contribution to the: JASSS-Covid19-Thread)

Shermer (2017) observed that in cases where moral and epistemological considerations are deeply intertwined, it is human nature to cherry-pick the results and data that support one’s current world view. In other words, we tend to look for data justifying our moral convictions. The same is an inherent challenge for simulations: we tend to favour our underlying assumptions and biases – often even unconsciously – when we implement our simulation systems. If others then use such a simulation system in support of predictive analysis, we are in danger of philosophical regress: a series of statements in which a logical procedure is continually reapplied to its own result without approaching a useful conclusion. As stated in an earlier paper of mine (Tolk 2017):

The danger of the simulationist’s regress is that such predictions are made by the theory, and then the implementation of the theory in form of the simulation system is used to conduct a simulation experiment that is then used as supporting evidence. This, however, is exactly the regress we wanted to avoid: we test a hypothesis by implementing it as a simulation, and then use the simulated data in lieu of empirical data as supporting evidence justifying the propositions: we create a series of statements – the theory, the simulation, and the resulting simulated data – in which a logical procedure is continually reapplied to its own result….

In particular in cases where moral and epistemological considerations are deeply intertwined, it is human nature to cherry-pick the results and data that support the current world view (Shermer 2017). Simulationists are not immune to this, and as they can implement their beliefs into a complex simulation system that now can be used by others to gain quasi-empirical numerical insight into the behavior of the described complex system, their implemented world view can easily be confused with a surrogate for real world experiments.

I am afraid that we may have fallen into such a fallacy in some of our efforts to use simulation to better understand the Covid-19 crisis and what we can do. This is surely a moral problem, as our recommendations are ultimately about human lives! We assumed that the recommendations of the medical community for social distancing and other non-pharmaceutical interventions (NPIs) are the best we can do, as they save many lives. So we built our models to clearly demonstrate the benefits of social distancing and other NPIs, which leads to the danger of regress: we assume that NPIs are the best action, so we write a simulation to show that NPIs are the best action, and then we use these simulations to prove that NPIs are the best action. But can we actually use empirical data to support these assumptions? Looking closely at the data, the correlation between success – measured as flattening the curves – and the number and strictness of the NPIs is not always observable. So we may have missed something, as our model-based predictions are not supported as we hoped, which is a problem: did we just collect the wrong data and should we use something else to validate the models, or are the models insufficient to explain the data? And how do we ensure that our passion doesn’t interfere with our scientific objectivity?

One way to address this issue is diversity of opinion, implemented as a set of orchestrated models: using a multitude of models instead of just one. In another comment, the idea of using exploratory analysis to support decision making under deep uncertainty is mentioned. I highly recommend having a look at Decision Making Under Deep Uncertainty: From Theory to Practice (Marchau, Walker, Bloemen & Popper 2019). I am optimistic that if we are inclusive of a diversity of ideas – even if we don’t like them – and allow for computational evaluation of ALL options using exploratory analysis, we may find a way to better support the community.
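
To illustrate the spirit of this suggestion, here is a deliberately toy sketch (in Python; both rival model structures and every number are invented purely for illustration): evaluate ALL candidate policies across an ensemble of competing models and draws from the uncertain parameter space, and judge them by robustness rather than by a single favoured model’s point estimate.

```python
import itertools
import random
import statistics

# Sweep every candidate policy across rival model structures and draws
# from a deeply uncertain parameter space; report robustness (worst case
# and spread), not one model's point estimate. All numbers are invented.

POLICIES = ("no_npi", "distancing", "strict_lockdown")

def model_a(policy, r0):  # rival structural assumption: strong compliance
    return r0 * {"no_npi": 1.0, "distancing": 0.6, "strict_lockdown": 0.3}[policy]

def model_b(policy, r0):  # rival structural assumption: weak compliance
    return r0 * {"no_npi": 1.0, "distancing": 0.8, "strict_lockdown": 0.5}[policy]

outcomes = {p: [] for p in POLICIES}
for model, policy in itertools.product((model_a, model_b), POLICIES):
    for _ in range(1000):
        r0 = random.uniform(1.5, 4.0)  # deep uncertainty about transmissibility
        outcomes[policy].append(model(policy, r0))  # effective reproduction number

for policy in POLICIES:
    xs = outcomes[policy]
    print(f"{policy}: mean Re {statistics.mean(xs):.2f}, worst case {max(xs):.2f}")
```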

References

Marchau, V. A., Walker, W. E., Bloemen, P. J., & Popper, S. W. (2019). Decision Making Under Deep Uncertainty: From Theory to Practice. Springer. doi:10.1007/978-3-030-05252-2

Tolk, A. (2017) Bias ex silico: Observations on simulationist’s regress. In ANSS ’17: Proceedings of the 50th Annual Simulation Symposium, Society for Computer Simulation International, Article 15, pp. 1–9. https://dl.acm.org/citation.cfm?id=3106403

Shermer, M. (2017) How to Convince Someone When Facts Fail – Why worldview threats undermine evidence. Scientific American, 316, 1, 69 (January 2017). doi:10.1038/scientificamerican0117-69


Tolk, A. (2020) The Danger of too much Compassion - how modellers can easily deceive themselves. Review of Artificial Societies and Social Simulation, 28th April 2020. https://rofasss.org/2020/04/28/self-deception/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Designing social simulation to (seriously) support decision-making: COMOKIT, an agent-based modelling toolkit to analyse and compare the impacts of public health interventions against COVID-19

By Alexis Drogoul1, Patrick Taillandier2, Benoit Gaudou1,3, Marc Choisy4,8, Kevin Chapuis1,5,  Quang Nghi Huynh 1,6, Ngoc Doanh Nguyen1,7, Damien Philippon10, Arthur Brugière1, and Pierre Larmande8

1 UMI 209, UMMISCO, IRD, Sorbonne Université, Bondy, France. 2 UR 875, MIAT, INRAE, Toulouse University, Castanet Tolosan, France. 3 UMR 5505, IRIT, Université Toulouse 1 Capitole, Toulouse, France. 4 UMR 5290, MIVEGEC, IRD/CNRS/Univ. Montpellier, Montpellier, France. 5 UMR 228, ESPACE-DEV, IRD, Montpellier, France. 6 CICT, Can Tho University, Can Tho, Vietnam. 7 MSLab / WARM, Thuyloi University, Hanoi, Vietnam. 8 UMR 232, DIADE, IRD, Univ. Montpellier, Montpellier, France. 9 OUCRU, Centre for Tropical Medicine, Ho Chi Minh City, Viet Nam. 10 WHO Collaborating Centre for Infectious Disease Epidemiology and Control, School of Public Health, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong Special Administrative Region, China.

(A contribution to the: JASSS-Covid19-Thread)

In less than four months after its emergence in China, the COVID-19 pandemic has spread worldwide. In response to this health crisis, unprecedented in modern history, researchers have mobilised to produce knowledge and models in order to inform and support public decision-making, sometimes in real time (Adam 2020). However, the social modelling community faces two challenges in this endeavour: the first is its capacity to provide robust scientific knowledge and to translate it into evidence on concrete cases (and not only general principles) within a short time range; the second is to do so knowing (and anticipating the fact) that this evidence may have concrete social, economic or clinical impacts in the “real” world.

These two challenges require the design of realistic models that provide what B. Edmonds, in response to Squazzoni et al. (2020), calls the “empirical grounding and validation needed to reliably support policy making” (Edmonds 2020); in other words, spatially explicit, demographically realistic, data-driven models that can be fed with both quantitative and qualitative (behavioural) data, and that can easily be experimented with in huge numbers of scenarios so as to provide statistically sound results and evidence.

It is difficult to deny these requirements, but it is easier said than done. What we have witnessed instead, these last four months, is an explosion of agent-based toy models representing, ad nauseam, the spread of the virus or similar dynamics within artificial populations without space, without behaviours, without friend or family relations, without social networks, without even remotely realistic activities or mobility schemes; in short, populations of artificial agents devoid of everything that makes a human population slightly different from a mixture of homogeneous particles. How we, as a community, can claim to inform policy makers, in such a critical context, with such abstract and simplistic constructions is difficult to justify. Are public health decision makers really that interested, these days, in models that help them understand the general principles, the inner mechanisms or the hidden dynamics of this crisis? Or would they feel better supported if we could answer their questions on which interventions, at which places, at which spatial and temporal scales and on which populations, would have the best impact on the pandemic?

We tend to forget, however, that agent-based modelling (ABM), among other benefits, does not oppose these two objectives when building a model. And from the outset of the crisis, many of us were quick to advocate a modelling approach that would:

  • Be as close as possible to public decision making by being able to answer concrete, practical questions;
  • Be based on a detailed and realistic representation of space, as the spread of the epidemic is spatial and public health policies are also predominantly spatial (containment, social distancing, reduction of mobility, etc.);
  • Rely on spatial and social data that can be collected easily and, above all, quickly, and not be too dependent on the availability of large datasets (which may not be open or shared, depending on the country of intervention);
  • Make it possible to represent as faithfully as possible the complexity of the social and ecological environments in which the pandemic is spreading;
  • Be generic, flexible and applicable to any case study, but also trustworthy, as it relies on inner mechanisms that can be isolated and validated separately;
  • Be open and modular enough to support the cooperation of researchers across different disciplines while relying on rigorous scientific and computational principles;
  • Offer easy access to large-scale experimentation and statistical validation by facilitating the exploration of its parameters.

This approach is currently being implemented by an interdisciplinary group of modellers, all signatories of this response, who have started to design and implement, on the GAMA platform, a generic model called COMOKIT, around which they now wish to gather as many modellers and researchers in epidemiology and the social sciences as possible. Being generic here means that COMOKIT is portable to almost any case study imaginable, from small towns to provinces or even countries, the only real limit to its application being the available RAM and computing power[1].

COMOKIT is an integrated model that, in its simplest incarnation, dynamically combines five sub-models:

  1. a sub-model of the individual clinical dynamics and epidemiological status of agents,
  2. a sub-model of agent-to-agent direct transmission of the infection,
  3. a sub-model of environmental transmission through the built environment,
  4. a sub-model of policy design and implementation,
  5. an agenda-based model of people’s activities at a one-hour time step.

It allows, of course, the representation of heterogeneity in individual characteristics (sex, age, household), agendas (depending on social structures, available services or age categories), social relationships and behaviours (e.g. respect of regulations). A rough sketch of this modular composition is given below.
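
COMOKIT itself is implemented in GAML on the GAMA platform, so the Python sketch below is only a rough illustration of the kind of modular composition the five sub-models describe; all class names, rules and numbers are invented, and the environmental-transmission sub-model (3) is omitted for brevity.

```python
import random

# Each sub-model is a separate object, so it can be replaced or
# validated in isolation, then composed at a one-hour time step.

class Person:
    def __init__(self):
        self.state = "S"          # S(usceptible), I(nfected) or R(ecovered)
        self.days_infected = 0

class Clinical:                   # 1. individual clinical dynamics
    def update(self, p):
        if p.state == "I":
            p.days_infected += 1
            if p.days_infected > 14:
                p.state = "R"

class DirectTransmission:         # 2. agent-to-agent transmission
    def update(self, group):
        if any(p.state == "I" for p in group):
            for p in group:
                if p.state == "S" and random.random() < 0.02:
                    p.state = "I"

class Policy:                     # 4. policy design and implementation
    def __init__(self, lockdown):
        self.lockdown = lockdown
    def allows(self, activity):
        return not (self.lockdown and activity == "work")

class Agenda:                     # 5. hourly agenda of activities
    def activity_at(self, hour):
        return "work" if 8 <= hour < 17 else "home"

people = [Person() for _ in range(500)]
people[0].state = "I"
clinical, contact = Clinical(), DirectTransmission()
policy, agenda = Policy(lockdown=False), Agenda()

for day in range(120):
    for hour in range(24):
        if agenda.activity_at(hour) == "work" and policy.allows("work"):
            contact.update(random.sample(people, 20))  # workplace mixing
    for p in people:
        clinical.update(p)
```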

COMOKIT has been designed to be modular enough to allow modellers and users to represent different strategies and study their impacts in multiple scenarios. Using the experimental features provided by the underlying GAMA platform (Taillandier et al. 2019), like advanced visualisation, multi-simulation, batch experiments and easy large-scale explorations of parameter spaces on HPC infrastructures, it is particularly easy and effective to compare the outcomes of these strategies. Modularity is also key to facilitating its adoption by other modellers and users: COMOKIT is a basis that can very easily be extended (to new policies, people’s activities, actors, spatial features, etc.). For instance, more detailed socio-psychological models, like the ones described in ASSOCC (Ghorbani et al. 2020), could be interesting to test within realistic models. In that respect, COMOKIT is both a framework (for deriving new concrete models) and a model (that can be instantiated by itself on arbitrary datasets).

Finally, COMOKIT has been designed to be incrementally expandable: because of the urgency usually associated with its use, it can be instantiated on new case studies in a matter of minutes, by generating the built environment of an area and its synthetic population from a simple geolocalised boundary and reasonable defaults (which can of course be parametrised or even, in the case of population generation, driven by a plugin called Gen* (Chapuis et al. 2018)). When more detailed data becomes available (about the population, people’s occupations, economic activities, public health policies, …), the same model can be fed with it in order to refine its initial outcomes.


Figure 1. A screenshot of the experiments’ UI in COMOKIT: six scenarios of partial confinement are compared with respect to the number of cases during and after a 3-month period. Son Loi case study, 9,988 inhabitants from the 2019 Vietnamese census.

Up to now, COMOKIT has been implemented and evaluated on two cases of city confinement in Vietnam (i.e. Son Loi (Thanh et al. 2020) and Thua Duc). In these cases, which have served as testbeds to verify the correctness of the individual sub-models and their interactions, we have compared the impacts of a number of social-distancing strategies (e.g. with a ratio of the population allowed to move outside, for various durations, over various geographical extents, by activities, and so on), and other non-pharmaceutical interventions such as advising the population to wear masks, or closing schools and public places. These studies have shown in particular that the process of ending an intervention has as much impact as the process of starting it, notably with respect to avoiding a second epidemic wave.

We need you: social scientists, epidemiologists, modellers, computer scientists, web designers…

As the epidemic moves to countries with more limited health infrastructure and economic space, it becomes critical to devise, test and compare original public interventions that are adapted to these constraints, for instance interventions that would be more geographically and socially targeted than a lockdown of the whole population. COMOKIT, which has been used since the beginning of April 2020 within the Rapid Response Team of the Steering Committee against COVID-19 of the Ministry of Health in Vietnam, can become an invaluable help in this endeavour. However, it must become even more realistic, reliable and robust than it is at present, so that decision-makers can build a relationship of trust with this new tool and, hopefully, with agent-based modelling in general.

All the documentation (with a complete ODD description and UML diagrams), commented source code (of the models and utilities), as well as five example datasets, are made available on the project’s webpage and GitHub repository to be shared, reused and adapted to other case studies. We strongly encourage anyone interested to try COMOKIT, apply it to their own case studies, improve it by adding new policies, activities, agents or scenarios, and share their studies, proposals and results. Any help will be appreciated to show that we can collectively contribute, as a community, to the fight against this pandemic (and maybe the next ones): analysing the sub-models, documenting them, proposing access to data, fixing bugs, adding new sub-models, testing their integration, proposing HPC infrastructures to run large-scale experiments, everything can be helpful!

Notes

[1] To give a very rough idea, it takes approximately 15 minutes and 800 MB of RAM on one core of a laptop to simulate 6 months of a town of 10,000 inhabitants, at a 1-hour step, while displaying a 3D view and charts.

References

Adam, D. (2020). Special report: The simulations driving the world’s response to COVID-19. Nature. doi:10.1038/d41586-020-01003-6

Chapuis, K., Taillandier, P., Renaud, M., & Drogoul, A. (2018). Gen*: a generic toolkit to generate spatially explicit synthetic populations. International Journal of Geographical Information Science, 32(6), 1194-1210. doi:10.1080/13658816.2018.1440563

Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/

Ghorbani, A., Lorig, F., de Bruin, B., Davidsson, P., Dignum, F., Dignum, V., van der Hurk, M., Jensen, M., Kammler, C., Kreulen, K., Ludescher, L. G., Melchior, A., Mellema, R., Păstrăv, C., Vanhée, L. and Verhagen, H. (2020) The ASSOCC Simulation Model: A Response to the Community Call for the COVID-19 Pandemic. Review of Artificial Societies and Social Simulation, 25th April 2020. https://rofasss.org/2020/04/25/the-assocc-simulation-model/

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Taillandier, P., Gaudou, B., Grignard, A., Huynh, Q. N., Marilleau, N., Caillou, P., Philippon, D. and Drogoul, A. (2019) Building, Composing and Experimenting Complex Spatial Models with the GAMA Platform. GeoInformatica, 23, 299-322. doi:10.1007/s10707-018-00339-6

Thanh, H. N., Van, T. N., Thu, H. N. T., Van, B. N., Thanh, B. D., Thu, H. P. T., … & Nguyen, T. A. (2020). Outbreak investigation for COVID-19 in northern Vietnam. The Lancet Infectious Diseases. doi:10.1016/S1473-3099(20)30159-6


Drogoul, A., Taillandier, P., Gaudou, B., Choisy, M., Chapuis, K., Huynh, N. Q. , Nguyen, N. D., Philippon, D., Brugière, A., and Larmande, P. (2020) Designing social simulation to (seriously) support decision-making: COMOKIT, an agent-based modelling toolkit to analyze and compare the impacts of public health interventions against COVID-19 . Review of Artificial Societies and Social Simulation, 27th April 2020. https://rofasss.org/2020/04/27/comokit/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)