
Why we are failing at connecting opinion dynamics to the empirical world

By Dino Carpentras

ETH Zürich – Department of Humanities, Social and Political Sciences (GESS)

The big mystery

Opinion dynamics (OD) is a field dedicated to studying the dynamic evolution of opinions, and it is currently facing some extremely cryptic mysteries. Since 2009 there have been multiple calls for OD models to be strongly grounded in empirical data (Castellano et al., 2009; Valori et al., 2012; Flache et al., 2017; Dong et al., 2018), yet the number of articles moving in this direction is still extremely limited. This is especially puzzling when compared with the growth in the number of publications in the field (see Figure 1). Another surprising issue, which extends beyond OD, is that validated models are not cited as often as we would expect them to be (Chattoe-Brown, 2022; Keijzer, 2022).

Some may argue that this could be explained by a general lack of people interested in the empirical side of opinion dynamics. However, the world seems in desperate need of empirically grounded OD models that could help us shape policies on topics such as vaccination and climate change. Thus, it is very surprising to see that almost nobody is interested in meeting such a big and pressing demand.

In this short piece, I will share my experience both as a writer and as a reviewer of empirical OD papers, as well as what I have gathered from discussions with other researchers in similar roles. This will help us understand much better what is going on in the world of empirical OD and, more generally, in the empirical parts of agent-based modelling (ABM) related to psychological phenomena.

Figure 1. Publications containing the term “opinion dynamics” in abstract or title (total: 2,527). Obtained from dimensions.ai

Theoretical versus empirical OD

The main issue I have noticed with works in empirical OD is that these papers do not conform to the standard framework of ABM papers. Indeed, in “classical” ABM we usually try to address research questions like:

  1. Can we develop a toy model to show how variables X and Y are linked?
  2. Can we explain some macroscopic phenomenon as the result of agents’ interaction?
  3. What happens to the outputs of a popular model if we add a new variable?

However, empirical papers do not fit into this framework. Indeed, empirical ABM papers ask questions such as:

  1. How accurate are the predictions made by a certain model when compared with data?
  2. How close is the micro-dynamic to the experimental data?
  3. How can we refine previous models to improve their predicting ability?

Unfortunately, many reviewers do not view the latter questions as genuine research inquiries, and end up pushing authors to modify their papers so that they address the first set of questions instead.

For instance, my empirical works often receive the critique that “the research question is not clear”, even though the question was explicitly stated in the main text, the abstract and even the title (see, for example, “Deriving An Opinion Dynamics Model From Experimental Data”, Carpentras et al. 2022). Similarly, a reviewer once acknowledged that the experiment presented in the paper was an interesting addition, but requested that I demonstrate why it was useful. Notice that, in this case too, the paper was about developing a model from the dynamical behaviour observed in an experiment; the experiment was therefore not an “add-on” but the core of the paper. I have also reviewed empirical OD papers where other reviewers asked the authors to showcase how their model informs us about the world in a novel way.

As we will see in a moment, this approach does not just make authors’ lives harder; it also generates a cascade of consequences for the entire field of opinion dynamics. But to understand our world better, let us first move to a fictitious scenario.

A quick tale of the natural selection of researchers

Let us now imagine a hypothetical world where people have almost no knowledge of the principles of physics. However, to keep the thought experiment simple, let us also suppose they have already developed the peer-review process. Of course, this fictitious scenario is far from realistic, but it should still help us understand what is going on with empirical OD.

In this world, a scientist named Alice writes a paper suggesting that there is an upward force when objects enter water. She also shows that many objects can float on water, thereby “validating” her model. The community is excited about this new paper, which took Alice six months to write.

Now consider another scientist named Bob. Inspired by Alice’s paper, Bob spends six months conducting a series of experiments demonstrating that when an object is submerged in water, it experiences an upward force proportional to its submerged volume. This pushes knowledge forward: Bob does not just claim that this force exists, he shows that it has a clear quantitative relationship to the volume of the object.

However, when reviewers read Bob’s work, they are unimpressed. They question the novelty of his research and fail to see the specific research question he is attempting to address. After all, Alice already showed that this force exists, so what is new in this paper? One of the reviewers suggests that Bob should show how his study may impact our understanding of the world.

As a result, Bob spends an additional six months demonstrating that he could, in principle, design a floating object made of metal (i.e. a ship). He also describes the advantages for society if such an object were invented. Unfortunately, one of the reviewers is extremely skeptical, as metal is known to be extremely heavy and should not float in water, and requests additional proof.

After multiple revisions, Bob’s work is eventually published. However, the publication process takes significantly longer than Alice’s, and the final version of the paper addresses a variety of points: empirical validation, the feasibility of constructing a metal boat, and evidence to support this claim. Consequently, the paper becomes densely technical, making it challenging for most people to read and understand.

In the end, Bob is left with a single paper which is hardly readable (and therefore hardly citable), while Alice has meanwhile published many other easier-to-read papers with a much bigger impact.

Solving the mystery of empirical opinion dynamics

The previous sections helped us understand the following points: (1) validation and empirical grounding are often not seen as legitimate research goals by many members of the ABM community; (2) this makes publishing this kind of research a bigger struggle; (3) reviewers often try to push such papers towards the more classical research questions, possibly resulting in a monster-paper that tries to address multiple points all at once; and (4) this also lowers readability, and so impact.

So, to sum up: empirical OD gives you the privilege of working much more to obtain much less. This, combined with the “natural selection” of “publish or perish”, explains the scarcity of publications in this field, as authors need either to adapt to the more standard ABM formulas or to “perish.” I also personally know an ex-researcher who tried to publish empirical OD until he got fed up and left the field.

Some clarifications

Let me make clear that this is a bit of a simplification and that it is, of course, definitely possible to publish empirical work in opinion dynamics without “perishing.” However, choosing this path instead of the traditional ABM approach makes everything much harder. It is a little like running while carrying extra weight: you may still win the race, but the weight strongly decreases the probability of that happening.

I also want to say that while here I am offering an explanation of the puzzles I presented, I do not claim that this is the only possible explanation. Indeed, I am sure that what I am offering here is only part of the full story.

Finally, I want to clarify that I do not believe anyone in the system has bad intentions. Indeed, I think reviewers are acting in good faith when suggesting that empirically-oriented papers take a more classical approach. However, even with good intentions, we are creating a lot of unnecessary obstacles for an entire research field.

Trying to solve the problem

To address this issue, I have previously suggested dividing ABM research into theoretical and empirically oriented streams (Carpentras, 2021). This division could help us develop better standards both for toy models and for empirical ABMs.

To give a practical example, my empirical ABM works usually receive long and detailed comments about the model’s properties and almost none about the experiment or the data analysis. Am I that good at these last two steps? Or do ABM reviewers focus very little on the empirical side of empirical ABMs? While the first explanation would be flattering, I am afraid the second better depicts reality.

With this in mind, together with other members of the community, we have created a special interest group for Experimental ABM (see http://www.essa.eu.org/sig/sig-experimental-abm/). However, for this to succeed, we really need people to recognize the distinction between these two fields: to acknowledge that empirically-oriented research questions are valid in their own right, and to stop pushing papers towards the more classical approach.

I really believe empirical OD will rise, but how this will happen is still to be decided. Will it come at the cost of many researchers facing bigger struggles, or will we develop a more fertile environment? Or will some researchers create an entirely new niche outside the ABM community? The choice is up to us!

References

Carpentras, D., Maher, P. J., O’Reilly, C., & Quayle, M. (2022). Deriving An Opinion Dynamics Model From Experimental Data. Journal of Artificial Societies and Social Simulation, 25(4), 4. https://www.jasss.org/25/4/4.html

Carpentras, D. (2021) Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/

Castellano, C., Fortunato, S., & Loreto, V. (2009). Statistical physics of social dynamics. Reviews of Modern Physics, 81(2), 591. DOI: 10.1103/RevModPhys.81.591

Chattoe-Brown, E. (2022). If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1 Feb 2022. https://rofasss.org/2022/02/01/citing-od-models/

Dong, Y., Zhan, M., Kou, G., Ding, Z., & Liang, H. (2018). A survey on the fusion process in opinion dynamics. Information Fusion, 43, 57-65. DOI: 10.1016/j.inffus.2017.11.009

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4). https://www.jasss.org/20/4/2.html

Keijzer, M. (2022). If you want to be cited, calibrate your agent-based model: a reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation, 9th March 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown

Valori, L., Picciolo, F., Allansdottir, A., & Garlaschelli, D. (2012). Reconciling long-term cultural diversity and short-term collective social behavior. Proceedings of the National Academy of Sciences, 109(4), 1068-1073. DOI: 10.1073/pnas.1109514109


Carpentras, D. (2023) Why we are failing at connecting opinion dynamics to the empirical world. Review of Artificial Societies and Social Simulation, 8 Mar 2023. https://rofasss.org/2023/03/08/od-emprics


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

The inevitable “layering” of models to extend the reach of our understanding

By Bruce Edmonds

“Just as physical tools and machines extend our physical abilities, models extend our mental abilities, enabling us to understand and control systems beyond our direct intellectual reach” (Calder et al. 2018)

Motivation

There is a modelling norm that one should be able to completely understand one’s own model. Whilst acknowledging there is a trade-off between a model’s representational adequacy and its simplicity of formulation, this tradition assumes there will be a “sweet spot” where the model is just tractable but also good enough to be usefully informative about the target of modelling – in the words attributed to Einstein, “Everything should be made as simple as possible, but no simpler” [1]. But what do we do about all the phenomena where, to get an adequate model [2], one has to settle for a complex one (where by “complex” I mean a model that we do not completely understand)? Despite the tradition in Physics to the contrary, it would be an incredibly strong assumption that there are no such phenomena, i.e. that an adequate simple model is always possible (Edmonds 2013).

There are three options in these difficult cases.

  • Do not model the phenomena at all until we can find an adequate model we can fully understand. Given the complexity of much around us, this would mean not modelling these phenomena for the foreseeable future, and maybe never.
  • Accept inadequate simpler models and simply hope that these are somehow approximately right [3]. This option would allow us to get answers but with no idea whether they were at all reliable. There are many cases of overly simplistic models leading policy astray (Adoha & Edmonds 2017; Thompson 2022), so this is dangerous if such models influence decisions with real consequences.
  • Use models that are good for our purpose but that we only partially understand. This is the option examined in this paper.

When the purpose is empirical the last option is equivalent to preferring empirical grounding over model simplicity (Edmonds & Moss 2005).

Partially Understood Models

In practice this argument has already been won – we do not completely understand many of the computer simulations that we use and rely on. For example, due to the chaotic nature of the dynamics of the weather, forecasting models are run multiple times with slightly randomised inputs and the “ensemble” of forecasts inspected to get an idea of the range of different outcomes that could result (some of which might be qualitatively different from the others) [4]. Working out the outcomes in each case requires the computational tracking of a huge number of entities in a way that is far beyond what the human mind can do [5]. In fact, the whole of “Complexity Science” can be seen as different ways to get some understanding of systems for which there is no analytic solution [6].
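
To make the ensemble idea concrete, here is a minimal sketch in Python – not a real forecasting system: a toy chaotic map stands in for the weather model, and all names and numbers are illustrative assumptions. The point is that slightly randomised inputs yield a spread of outcomes that no single run reveals.

```python
import numpy as np

def toy_weather(x0: float, steps: int = 50, r: float = 3.9) -> float:
    """A toy chaotic system (the logistic map), standing in for a weather
    model whose outcome is highly sensitive to its initial conditions."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

rng = np.random.default_rng(2023)
best_guess = 0.5  # the single "best" measurement of the initial state

# Ensemble: re-run the same model with slightly randomised inputs.
members = best_guess + rng.normal(scale=1e-4, size=100)
forecasts = np.array([toy_weather(m) for m in members])

# Inspect the spread of outcomes rather than trusting any single run.
print(f"single run:     {toy_weather(best_guess):.3f}")
print(f"ensemble mean:  {forecasts.mean():.3f} (sd {forecasts.std():.3f})")
print(f"ensemble range: [{forecasts.min():.3f}, {forecasts.max():.3f}]")
```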

Of course, this raises the question of what is meant by “understand” a model, for this is not something that is formally defined. This could involve many things, including the following.

  1. That the micro-level – the individual calculations or actions done by the model each time step – is understood. This is equivalent to understanding each line of the computer code.
  2. That some of the macro-level outcomes that result from the computation of the whole model are understood in terms of partial theories or “rules of thumb”.
  3. That all the relevant macro-level outcomes can be determined to a high degree of accuracy without simulating the model (e.g. by a mathematical model).

Clearly, level (1) is necessary for most modelling purposes in order to know the model is behaving as intended. The specification of this micro-level is usually how such models are made, so if this differs from what was intended then this would be a bug. Thus this level would be expected of most models [7]. However, this does not necessarily mean understanding at the finest level of detail possible – for example, we usually do not bother about how random number generators work, but simply rely on their operation; in this case, though, we have a very good level (3) understanding of these sub-routines.

At the other extreme, a level (3) understanding is quite rare outside the realm of physics. In a sense, having this level of understanding makes the model redundant, so it would probably be absent for most working models (those used regularly) [8]. As discussed above, there will be many kinds of phenomena for which this level of understanding is not feasible.

Clearly, what many modellers find useful is a combination of levels (1) & (2) – that is, the detailed, micro-level steps that the model takes are well understood and the outcomes understood well enough for the intended task. For example, when using a model to establish a complex explanation [9] (of some observed pattern in data using certain mechanisms or structures), one might understand the implementation of the candidate mechanisms and verify that the outcomes fit the target pattern for a range of parameters, but not completely understand the detail of the causation involved. There might well be some understanding, for example of how robust this is to minor variations in the initial conditions or in the working of the mechanisms involved (e.g. by adding some noise to the processes). A complete understanding might not be accessible, but this does not stop an explanation being established (although a better understanding is an obvious goal for future research, or an avenue for critiques of the explanation).

Of course, any lack of a complete, formal understanding leaves some room for error. The argument here does not deride the desirability of formal understanding, but is against prioritising it over model adequacy. Also, the lack of a formal level (3) understanding of a model does not mean we cannot take more pragmatic routes to checking it. For example: performing a series of well-designed simulation experiments that attempt to refute the stated conclusions, systematically comparing to other models, doing a thorough sensitivity analysis and independently reproducing models can all help ensure reliability. These can be compared with engineering methods – one may not have a proof that a certain bridge design is solid over all possible dynamics, but practical measures and partial modelling can ensure that any risk is so low as to be negligible. If we had to wait until bridge designs were proven beyond doubt, we would simply have to do without them.
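
As a hedged illustration of one such pragmatic check, the sketch below runs a one-at-a-time sensitivity analysis over a placeholder model function (`run_model` is invented for illustration; in a real study it would execute the full simulation and return the summary statistic on which the stated conclusion rests).

```python
import numpy as np

def run_model(params: dict, seed: int) -> float:
    """Placeholder for a full simulation run: in practice this would
    execute the ABM and return the outcome the conclusion rests on."""
    rng = np.random.default_rng(seed)
    return params["a"] * rng.normal(1.0, params["noise"]) + params["b"]

def one_at_a_time(base: dict, name: str, values, replicates: int = 100):
    """Vary one parameter, keep the rest at baseline, replicate runs with
    common seeds, and report how the mean outcome responds."""
    for v in values:
        params = {**base, name: v}
        outcomes = [run_model(params, seed) for seed in range(replicates)]
        print(f"{name} = {v}: mean outcome = {np.mean(outcomes):.3f}")

base = {"a": 1.0, "b": 0.5, "noise": 0.2}
# If the conclusion survives across this range, it is (a little) more robust;
# if it flips, we have refuted its generality.
one_at_a_time(base, "noise", [0.1, 0.2, 0.4, 0.8])
```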

Layering Models to Leverage some Understanding

As a modeller, if I do not understand something my instinct is to model it. This instinct does not change if what I do not understand is, itself, a model. The result is a model of the original model – a meta-model. This is, in fact, common practice. I may select certain statistics summarising the outcomes and put these on a graph; I might analyse the networks that have emerged during model runs; I may use maths to approximate or capture some aspect of the dynamics; I might cluster and visualise the outcomes using Machine Learning techniques; I might make a simpler version of the original and compare them. All of these might give me insights into the behaviour of the original model. Many of these are so normal we do not think of this as meta-modelling. Indeed, empirically-based models are already, in a sense, meta-models, since the data that they represent are themselves a kind of descriptive model of reality (gained via measurement processes).
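
One common form this takes is fitting a cheap surrogate to the input–output behaviour of the original model. Below is a minimal sketch under strong simplifying assumptions – a hypothetical one-parameter model and a polynomial surrogate – not a recipe for any particular model mentioned here.

```python
import numpy as np

def original_model(p: float, rng) -> float:
    """Placeholder for an expensive, partially-understood simulation,
    mapping one parameter to a noisy summary statistic."""
    return np.tanh(2.0 * p) + rng.normal(scale=0.05)

rng = np.random.default_rng(1)
params = np.linspace(-2.0, 2.0, 40)
outcomes = np.array([original_model(p, rng) for p in params])

# The meta-model: a cheap polynomial fitted to the original's behaviour.
surrogate = np.poly1d(np.polyfit(params, outcomes, deg=3))

# Each layer should be checked against the layer "below": here, the
# worst-case discrepancy between meta-model and model on the sweep.
gap = np.max(np.abs(surrogate(params) - outcomes))
print(f"max |surrogate - model| over the sweep: {gap:.3f}")
```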

This meta-modelling strategy can be iterated to produce meta-meta-models etc. resulting in “layers” of models, with each layer modelling some aspect of the one “below” until one reaches the data and then what the data measures. Each layer should be able to be compared and checked with the layer “below”, and analysed by the layer “above”.

An extended example of such layering was built during the SCID (Social Complexity of Immigration and Diversity) project [10] and is illustrated in Figure 1. In this, a complicated simulation (Model 1) was built to incorporate some available data and what was known concerning the social and behavioural processes that lead people to bother to vote (or not). This simulation was used as a counter-example to show how assumptions about the chaining effect of interventions might be misplaced (Fieldhouse et al. 2016). A much simpler simulation (Model 2) was then built by theoretical physicists, so that it produced the same selected outcomes over time and across a range of parameter values. This allowed us to show that some of the features in the original (such as dynamic networks) were essential to get the observed dynamics (Lafuerza et al. 2016a). This simpler model was in turn modelled by an even simpler model (Model 3) that was amenable to an analytic model (Model 4), which allowed us to obtain some results concerning the origin of a region of bistability in the dynamics (Lafuerza et al. 2016b).

Figure 1. The layering of models that were developed in part of the SCID project

Although there are dangers in such layering – each layer could introduce a new weakness – there are also methodological advantages, including the following. (A) Each model in the chain (except Model 4) is compared and checked against both the layer below and the layer above. Such multiple model comparisons are excellent for revealing hidden assumptions and unanticipated effects. (B) Whilst previously what might have happened was a “heroic” leap of abstraction from evidence and understanding straight to Model 3 or 4, here abstraction happens over a series of more modest steps, each of which is more amenable to checking and analysis. When you stage abstraction, the introduced assumptions are more obvious and easier to analyse.

One can imagine such “layering” developing in many directions to leverage useful (but indirect) understanding, for example the following.

  • Using an AI algorithm to learn patterns in some data (e.g. medical data for disease diagnosis) but then modelling its working to obtain some human-accessible understanding of how it is doing it.
  • Using a machine learning model to automatically identify the different “phase spaces” in model results where qualitatively different model behaviour is exhibited, so one can then try to simplify the model within each phase (a minimal sketch of this idea follows this list).
  • Automatically identifying the processes and structures that are common to a given set of models to facilitate the construction of a more general, ‘umbrella’ model that approximates all the outcomes that would have resulted from the set, but within a narrower range of conditions.
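
A minimal sketch of the second idea in this list, under deliberately toy assumptions (a hypothetical one-parameter model with two regimes, and off-the-shelf k-means from scikit-learn as the learner):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def toy_model(param: float) -> float:
    """Placeholder model with two qualitative regimes: outcomes near 0
    below a threshold and near 1 above it (a crude 'phase transition')."""
    return float(param > 0.5) + rng.normal(scale=0.02)

# Sweep the parameter and record (parameter, outcome) pairs.
params = np.linspace(0.0, 1.0, 200)
features = np.column_stack([params, [toy_model(p) for p in params]])

# Let the clustering algorithm propose the qualitatively distinct regions;
# each region can then be targeted with a simpler, phase-specific model.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for k in range(2):
    region = params[labels == k]
    print(f"phase {k}: parameter range ~ [{region.min():.2f}, {region.max():.2f}]")
```

Within each identified phase, a simpler approximation can then be fitted and checked back against the original model.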

As the quote at the top implies, we are used to settling for partial control of what machines do because it allows us to extend our physical abilities in useful ways. Each time we make their control more indirect, we need to check that this is safe and adequate for purpose. In the cars we drive there are ever more layers of electronic control between us and the physical reality the car moves through, and we adjust to them – we are currently adjusting to more self-driving abilities. Of course, the testing and monitoring of these systems is very important, but that will not stop the introduction of layers that will make cars safer and more pleasant to drive.

The same is true of our modelling, which we will need to apply in ever more layers in order to leverage useful understanding which would not be accessible otherwise. Yes, we will need to use practical methods to test their fitness for purpose and reliability, and this might include the complete verification of some components (where this is feasible), but we cannot constrain ourselves to only models we completely understand.

Concluding Discussion

If the above seems obvious, then why am I bothering to write this? I think for a few reasons. Firstly, to counter the presumption that understanding one’s model must have priority over all other considerations (such as empirical adequacy) – sometimes we must accept and use partially understood models. Secondly, to point out that such layering has benefits as well as difficulties – especially if it can stage abstraction into more verifiable steps and thus avoid huge leaps to simple but empirically-isolated models. Thirdly, because such layering will become increasingly common and necessary.

In order to extend our mental reach further, we will need to develop increasingly complicated and layered modelling. To do this we will need to accept that our understanding is leveraged via partially understood models, but also to develop the practical methods to ensure their adequacy for purpose.

Notes

[1] These are a compressed version of his actual words during a 1933 lecture, which were: “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” (Robinson 2018)
[2] Adequate for whatever our purpose for it is (Edmonds et al. 2019).
[3] The weasel words I once heard from a mathematician excusing an analytic model he knew to be simplistic were that, although he knew it was wrong, it was useful for “capturing core dynamics” (though how he knew that these were not completely wrong eludes me).
[4] For an introduction to this approach read the European Centre for Medium-Range Weather Forecasts’ fact sheet on “Ensemble weather forecasting” at: https://www.ecmwf.int/en/about/media-centre/focus/2017/fact-sheet-ensemble-weather-forecasting
[5] In principle, a person could do all the calculations involved in a forecast but only with the aid of exterior tools such as pencil and paper to keep track of it all so it is arguable whether the person doing the individual calculations has an “understanding” of the complete picture. Lewis Fry Richardson, who pioneered the idea of numerical forecasting of weather in the 1920s, did a 1-day forecast by hand to illustrate his method (Lynch 2008), but this does not change the argument.
[6] An analytic solution is when one can obtain a closed-form equation that characterises all the outcomes by manipulating the mathematical symbols in a proof. If one has to numerically calculate outcomes for different initial conditions and parameters this is a computational solution.
[7] For purely predictive models, whose purpose is only to anticipate an unknown value to a useful level of accuracy, this is not strictly necessary. For example, how some AI/machine learning models work may not be clear at the micro-level, but as long as the model works (successfully predicts) this does not matter – even if its predictive ability is due to a bug.
[8] Models may still be useful in this case, for example to check the assumptions made in the matching mathematical or other understanding.
[9] For more on this use see (Edmonds et al. 2019).
[10] For more about this project see http://cfpm.org/scid

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, 2019-2023, grant number ES/S015159/1 and was supported as part of the EPSRC-funded “SCID” project 2010-2016, grant number EP/H02171X/1.

References

Calder, M., Craig, C., Culley, D., de Cani, R., Donnelly, C.A., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N. Hargrove, C., Hinds, D., Lane, D.C., Mitchell, D., Pavey, G., Robertson, D., Rosewell, B., Sherwin, S., Walport, M. and Wilson, A. (2018) Computational modelling for decision-making: where, why, what, who and how. Royal Society Open Science, DOI:10.1098/rsos.172096.

Edmonds, B. (2013) Complexity and Context-dependency. Foundations of Science, 18(4):745-755. DOI:10.1007/s10699-012-9303-x

Edmonds, B. and Moss, S. (2005) From KISS to KIDS – an ‘anti-simplistic’ modelling approach. In P. Davidsson et al. (Eds.): Multi Agent Based Simulation 2004. Springer, Lecture Notes in Artificial Intelligence, 3415:130–144. DOI:10.1007/978-3-540-32243-6_11

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H. & Squazzoni, F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. DOI:10.18564/jasss.3993

Fieldhouse, E., Lessard-Phillips, L. & Edmonds, B. (2016) Cascade or echo chamber? A complex agent-based simulation of voter turnout. Party Politics, 22(2):241-256. DOI:10.1177/1354068815605671

Lafuerza, LF, Dyson, L, Edmonds, B & McKane, AJ (2016a) Simplification and analysis of a model of social interaction in voting, European Physical Journal B, 89:159. DOI:10.1140/epjb/e2016-70062-2

Lafuerza L.F., Dyson L., Edmonds B., & McKane A.J. (2016b) Staged Models for Interdisciplinary Research. PLoS ONE, 11(6): e0157261. DOI:10.1371/journal.pone.0157261

Lynch, P. (2008). The origins of computer weather prediction and climate modeling. Journal of Computational Physics, 227(7), 3431-3444. DOI:10.1016/j.jcp.2007.02.034

Robinson, A. (2018) Did Einstein really say that? Nature, 557, 30. DOI:10.1038/d41586-018-05004-4

Thompson, E. (2022) Escape from Model Land. Basic Books. ISBN-13: 9781529364873


Edmonds, B. (2023) The inevitable “layering” of models to extend the reach of our understanding. Review of Artificial Societies and Social Simulation, 9 Feb 2023. https://rofasss.org/2023/02/09/layering


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Socio-Cognitive Systems – a position statement

By Frank Dignum1, Bruce Edmonds2 and Dino Carpentras3

1Department of Computing Science, Faculty of Science and Technology, Umeå University, frank.dignum@umu.se
2Centre for Policy Modelling, Manchester Metropolitan University, bruce@edmonds.name
3Department of Psychology, University of Limerick, dino.carpentras@gmail.com

In this position paper we argue for the creation of a new ‘field’: Socio-Cognitive Systems. The point of doing this is to highlight the importance of a multi-levelled approach to understanding those phenomena where the cognitive and the social are inextricably intertwined – understanding them together.

What goes on ‘in the head’ and what goes on ‘in society’ are complex questions. Each of these deserves serious study on its own – motivating whole fields to answer them. However, it is becoming increasingly clear that these two questions are deeply related. Humans are fundamentally social beings, and it is likely that many features of their cognition have evolved because they enable them to live within groups (Herrmann et al. 2007). Whilst some of these social features can be studied separately (e.g. in a laboratory), others only become fully manifest within society at large. On the other hand, it is also clear that how society ‘happens’ is complicated and subtle, and that these processes are shaped by the nature of our cognition. In other words, what people ‘think’ matters for understanding how society ‘is’, and vice versa. For many reasons, both of these questions are difficult to answer. As a result of these difficulties, many compromises are necessary in order to make progress on them, but each compromise also implies some limitations. The two main types of compromise consist of limiting the analysis to only one of the two (i.e. either cognition or society) [1]. To take but a few examples:

  1. Neuro-scientists study what happens between systems of neurones to understand how the brain does things and this is so complex that even relatively small ensembles of neurones are at the limits of scientific understanding.
  2. Psychologists see what can be understood of cognition from the outside, usually in the laboratory so that some of the many dimensions can be controlled and isolated. However, what can be reproduced in a laboratory is a limited part of behaviour that might be displayed in a natural social context.
  3. Economists limit themselves to the study of the (largely monetary) exchange of services/things that could occur under assumptions of individual rationality, which is a model of thinking not based upon empirical data at the individual level. Indeed it is known to contradict a lot of the data and may only be a good approximation for average behaviour under very special circumstances.
  4. Ethnomethodologists will enter a social context and describe in detail the social and individual experience there, but not generalise beyond that and not delve into the cognition of those they observe.
  5. Other social scientists will take a broader view, look at a variety of social evidence, and theorise about aspects of that part of society. They (almost always) do not take individual cognition into account and do not seek to integrate the social and the cognitive levels.

Each of these, in different ways, separates the internal mechanisms of thought from the wider mechanisms of society, or limits its focus to a very specific topic. This is understandable; what each is studying is enough to keep them occupied for many lifetimes. However, this means that each of these has developed their own terms, issues, approaches and techniques, which make relating results between fields difficult (as Kuhn, 1962, pointed out).

Figure 1: Schematic representation of the relationship between the individual and society. Individuals’ cognition is shaped by society; at the same time, society is shaped by individuals’ beliefs and behaviour.

This separation of the cognitive and the social may get in the way of understanding many things that we observe. Some phenomena seem to involve a combination of these aspects in a fundamental way – the individual (and its cognition) being part of society as well as society being part of the individual. Some examples of this are as follows (but please note that this is far from an exhaustive list).

  • Norms. A social norm is a constraint or obligation upon action imposed by society (or perceived as such). One may well be mistaken about a norm (e.g. whether it is ok to casually talk to others at a bus stop), thus it is also a belief – often not told to one explicitly but something one needs to infer from observation. However, for a social norm to hold it also needs to be an observable convention. Decisions to violate social norms require that the norm is an explicit (referable) object in the cognitive model. But the violation also has social consequences: if people react negatively to violations, the norm can be reinforced; but if violations are ignored, the norm might disappear. How new norms come about, or how old ones fade away, is a complex set of interlocking cognitive and social processes. Thus social norms are a phenomenon that essentially involves both the social and the cognitive (Conte et al. 2013).
  • Joint construction of social reality. Many of the constraints on our behaviour come from our perception of social reality. However, we also create this social reality and constantly update it. For example, we can invent a new procedure to select a person as head of department, or exit a treaty, and thus have different ways of behaving after this change. However, these changes are not unconstrained in themselves. Sometimes the time is “ripe for change”, while at other times resistance is too big for any change to take place (even though a majority of the people involved would like to change). Thus what is socially real for us depends on what people individually believe is real, but this depends in complex ways on what other people believe and on their status. Probably even more important: the “strength” of a social structure depends on the use people make of it. E.g. a head of department becomes important if all decisions in the department are deferred to the head, even though this might not be required by the university or by law.
  • Identity. Our (social) identity determines the way other people perceive us (e.g. a sports person, a nerd, a family man) and therefore creates expectations about our behaviour. We can create our identities ourselves and cultivate them but, at the same time, when we have a social identity, we try to live up to it. Thus, it will partially determine our goals and reactions, and even our feeling of self-esteem when we live up to our identity or fail to do so. As individuals we (at least sometimes) have a choice as to our desired identity but, in practice, this can only be realised with the consent of society. As a runner I might feel the need to run at least three times a week in order for other people to recognize me as a runner. At the same time, a person known as a runner might be excused from a meeting if training for an important event, thus reinforcing the importance of the “runner” identity.
  • Social practices. The concept already indicates that social practices are about the way people habitually interact and, through this interaction, shape social structures. Practices like shaking hands when greeting do not always have to be efficient, but they are extremely socially important. For example, different groups, countries and cultures have different practices when greeting, and performing according to the practice shows whether you are part of the in-group or the out-group. However, practices can also change with circumstances and people, as happened, for example, to the practice of shaking hands during the COVID-19 pandemic. Thus they are flexible and adapt to the context. They serve as flexible mechanisms for efficiently fitting interactions into groups, connecting persons and group behaviour.

As a result, this division between the cognitive and the social gets in the way not only of theoretical studies, but also of practical applications such as policy making. For example, interventions aimed at encouraging vaccination (such as compulsory vaccination) may reinforce the (social) identity of the vaccine-hesitant. However, this risk and its possible consequences for society cannot be properly understood without a clear grasp of the dynamic evolution of social identity.

Computational models and systems provide a way of trying to understand the cognitive and the social together. For computational modellers, there is no particular reason to confine themselves to only the cognitive or only the social, because agent-based systems can include both within a single framework. In addition, the computational system is a dynamic model that can represent the interactions of the individuals that connect the cognitive models and the social models. Thus the fact that computational models have a natural way to represent actions as an integral and defining part of the socio-cognitive system is of prime importance. Given that the actions are an integral part of the model, it is well suited to model the dynamics of socio-cognitive systems and track changes at both the social and the cognitive level. Therefore, within such systems we can study how cognitive processes may act to produce social phenomena whilst, at the same time, studying how social realities shape the cognitive processes. Carley and Newell (1994) discuss what is necessary at the agent level for sociality; Hofstede et al. (2021) talk about how to understand sociality using computational models (including theories of individual action) – we want to understand both together. Thus, we can model the social embeddedness that Granovetter (1985) talked about – going beyond over- or under-socialised representations of human behaviour. It is not that computational models are innately suitable for modelling either the cognitive or the social, but that they can be appropriately structured (e.g. sets of interacting parts bridging micro-, meso- and macro-levels) and include arbitrary levels of complexity. Lots of models that represent the social have entities that stand for the cognitive, but do not explicitly represent much of that detail – similarly, much cognitive modelling implies the social in terms of the stimuli and responses of an individual that would be directed to other social entities, but where these other entities are not explicitly represented or are simplified away.
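
As a deliberately tiny sketch of what holding both levels in a single framework can look like computationally (this is an invented illustration, not any of the models discussed here): each agent below has a private belief (the cognitive level), its behaviour feeds an observable convention, and that convention in turn reshapes beliefs (the social level).

```python
import random

class Agent:
    """Minimal socio-cognitive agent: a private belief (cognitive level)
    coupled to an observed convention (social level)."""
    def __init__(self, belief: float):
        self.belief = belief            # cognitive state in [0, 1]

    def act(self) -> int:
        return int(self.belief > 0.5)   # behaviour visible to others

    def update(self, actions: list) -> None:
        # Society shapes cognition: beliefs drift towards the convention
        # the agent observes around it.
        convention = sum(actions) / len(actions)
        self.belief += 0.1 * (convention - self.belief)

random.seed(0)
agents = [Agent(random.random()) for _ in range(50)]
for _ in range(100):
    actions = [a.act() for a in agents]   # the macro level emerges...
    for a in agents:
        a.update(actions)                 # ...and feeds back into minds

print(f"share following the convention: {sum(a.act() for a in agents)/50:.2f}")
```

Even this caricature exhibits the two-way traffic discussed above: the macro-level convention only exists through the agents’ acts, yet it ends up steering their cognition.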

Socio-Cognitive Systems (SCS) are: those models and systems where both cognitive and social complexity are represented with a meaningful level of processual detail.

A good example of an application where this proved of the greatest importance was in simulations for the COVID-19 crisis. The spread of the coronavirus at the macro level could be given by an epidemiological model, but the actual spreading depended crucially on the human behaviour that resulted from individuals’ cognitive models of the situation. In Dignum (2021) it was shown how the socio-cognitive system approach was fundamental to obtaining better insights into the effectiveness of a range of COVID-19 restrictions.

Formality here is important. Computational systems are formal in the sense that they can be unambiguously passed around (i.e., unlike language, they are not differently re-interpreted by each individual) and operate according to their own precisely specified and explicit rules. This means that the same system can be examined and experimented on by a wider community of researchers. Sometimes, even when researchers from different fields find it difficult to talk to one another, they can fruitfully cooperate via a computational model (e.g. Lafuerza et al. 2016). Other kinds of formal systems (e.g. logic, maths) are geared towards models that describe an entire system from a bird’s-eye view. Although there are some exceptions, like fibred logics (Gabbay 1996), these are too abstract to be of good use in modelling practical situations. The lack of modularity has been addressed in context logics (Ghidini & Giunchiglia 2001), but the contexts used in that setting are not suitable for generating a more general societal model. The result is that most typical mathematical models use a number of agents which is either one, two or infinite (Miller and Page 2007), while important social phenomena happen with a “medium-sized” population. What all these formalisms miss is a natural way of specifying the dynamics of the system that is modelled, while having ways to modularly describe individuals and the society resulting from their interactions. Thus, although much of what is represented in Socio-Cognitive Systems is not computational, the lingua franca for talking about them is.

The ‘double complexity’ of combining the cognitive and the social in the same system will bring its own methodological challenges. Such complexity will mean that many socio-cognitive systems will be, themselves, hard to understand or analyse. In the COVID-19 simulations described in (Dignum 2021), a large part of the work consisted of analysing, combining and representing the results in ways that were understandable. As an example, for one scenario 79 pages of graphs were produced showing different relations between potentially relevant variables. New tools and approaches will need to be developed to deal with this. We only have some hints of these, but it seems likely that secondary stages of analysis – understanding the models – will be necessary, resulting in a staged approach to abstraction (Lafuerza et al. 2016). In other words, we will need to model the socio-cognitive systems, maybe in terms of further (but simpler) socio-cognitive systems, but maybe also with a variety of other tools. We do not have a settled view on this further analysis, but it could include: machine learning, mathematics, logic, network analysis, statistics, and even qualitative approaches such as discourse analysis.

An interesting input for the methodology of designing and analysing socio-cognitive systems is anthropology, and specifically ethnographic methods. Again, for the COVID-19 simulations, the first layer of the simulation was constructed based on “normal day life patterns”. Different types of persons were distinguished, each with their own pattern of living. These patterns interlock and form a fabric of social interactions that overall should satisfy most of the needs of the agents. Thus we calibrated the simulation based on the stories of types of people and their behaviours. Note that doing the same just based on available behavioural data would not account for the underlying needs and motives of that behaviour, and would not be a good basis for simulating changes. The stories that we used looked very similar to the type of reports ethnographers produce about certain communities. Thus further investigating this connection seems worthwhile.

For representing the output of complex socio-cognitive systems we can also use the analogue of stories. Basically, different stories show the underlying (assumed) causal relations between the phenomena that are observed. E.g. seeing an increase in people having lunch with friends can be explained by the fact that a curfew prevents people from having dinner with their friends, while they still have a need to socialize; thus the alternative of going for lunch is chosen more often. One can see that the explaining story uses both social and cognitive elements to describe the results. Although in the COVID-19 simulations we created a number of these stories, they were all created by hand after (sometimes weeks of) careful analysis of the results. Thus, for this kind of approach to be viable, new tools are required.

Although human society is the archetypal socio-cognitive system, it is not the only one. Both social animals and some artificial systems also come under this category. These may be very different from the human, and in the case of artificial systems completely different. Thus, Socio-Cognitive Systems is not limited to the discussion of observable phenomena, but can include constructed or evolved computational systems, and artificial societies. Examination of these (either theoretically or experimentally) opens up the possibility of finding either contrasts or commonalities between such systems – beyond what happens to exist in the natural world. However, we expect that ideas and theories that were conceived with human socio-cognitive systems in mind might often be an accessible starting point for understanding these other possibilities.

In a way, Socio-Cognitive Systems bring together two different threads in the work of Herbert Simon. Firstly, as in Simon (1948) it seeks to take seriously the complexity of human social behaviour without reducing this to overly simplistic theories of individual behaviour. Secondly, it adopts the approach of explicitly modelling the cognitive in computational models (Newell & Simon 1972). Simon did not bring these together in his lifetime, perhaps due to the limitations and difficulty of deploying the computational tools to do so. Instead, he tried to develop alternative mathematical models of aspects of thought (Simon 1957). However, those models were limited by being mathematical rather than computational.

To conclude, a field of Socio-Cognitive Systems would consider the cognitive and the social in an integrated fashion – understanding them together. We suggest that computational representation or implementation might be necessary to provide concrete reference between the various disciplines that are needed to understand them. We want to encourage research that considers the cognitive and the social in a truly integrated fashion. If labelling a new field does this, it will have achieved its purpose. However, there is the possibility that completely new classes of theory and complexity may be out there to be discovered – phenomena that are denied to us if the cognitive and the social are not taken together – a new world of socio-cognitive systems.

Notes

[1] Some economic models claim to bridge between individual behaviour and macro outcomes, but this is traditionally notional. Many economists admit that their primary cognitive models (varieties of economic rationality) are not valid for individuals but describe what people do on average – i.e. they are macro-level models. In other economic models whole populations are formalised using a single representative agent. Recently, some agent-based economic models have been emerging, but these are often constrained to agree with traditional models.

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, grant number ES/S015159/1.

References

Carley, K., & Newell, A. (1994). The nature of the social agent. Journal of mathematical sociology, 19(4): 221-262. DOI: 10.1080/0022250X.1994.9990145

Conte R., Andrighetto G. and Campennì M. (eds) (2013) Minding Norms – Mechanisms and dynamics of social order in agent societies. Oxford University Press, Oxford.

Dignum, F. (ed.) (2021) Social Simulation for a Crisis; Results and Lessons from Simulating the COVID-19 Crisis. Springer.

Herrmann E., Call J, Hernández-Lloreda MV, Hare B, Tomasello M (2007) Humans have evolved specialized skills of social cognition: The cultural intelligence hypothesis. Science 317(5843): 1360-1366. DOI: 10.1126/science.1146282

Hofstede, G.J, Frantz, C., Hoey, J., Scholz, G. and Schröder, T. (2021) Artificial Sociality Manifesto. Review of Artificial Societies and Social Simulation, 8th Apr 2021. https://rofasss.org/2021/04/08/artsocmanif/

Gabbay, D. M. (1996). Fibred Semantics and the Weaving of Logics Part 1: Modal and Intuitionistic Logics. The Journal of Symbolic Logic, 61(4), 1057–1120.

Ghidini, C., & Giunchiglia, F. (2001). Local models semantics, or contextual reasoning = locality + compatibility. Artificial Intelligence, 127(2), 221-259. DOI: 10.1016/S0004-3702(01)00064-9

Granovetter, M. (1985) Economic action and social structure: The problem of embeddedness. American Journal of Sociology 91(3): 481-510. DOI: 10.1086/228311

Kuhn, T.S. (1962) The structure of scientific revolutions. University of Chicago Press, Chicago

Lafuerza L.F., Dyson L., Edmonds B., McKane A.J. (2016) Staged Models for Interdisciplinary Research. PLoS ONE 11(6): e0157261, DOI: 10.1371/journal.pone.0157261

Miller, J. H., & Page, S. E. (2007). Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press.

Newell A, Simon H.A. (1972) Human problem solving. Prentice Hall, Englewood Cliffs, NJ

Simon, H.A. (1948) Administrative behaviour: A study of the decision making processes in administrative organisation. Macmillan, New York

Simon, H.A. (1957) Models of Man: Social and rational. John Wiley, New York


Dignum, F., Edmonds, B. and Carpentras, D. (2022) Socio-Cognitive Systems – A Position Statement. Review of Artificial Societies and Social Simulation, 2nd Apr 2022. https://rofasss.org/2022/04/02/scs


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Reply to Frank Dignum

By Edmund Chattoe-Brown

This is a reply to Frank Dignum’s reply (about Edmund Chattoe-Brown’s review of Frank’s book)

As my academic career continues, I have become more and more interested in the way that people justify their modelling choices, for example, almost every Agent-Based Modeller makes approving noises about validation (in the sense of comparing real and simulated data) but only a handful actually try to do it (Chattoe-Brown 2020). Thus I think two specific statements that Frank makes in his response should be considered carefully:

  1. “… we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises.” Firstly, negative claims (“This is not a banana”) are not generally helpful in argument. Secondly, readers want to know (or should want to know) what is being claimed and, importantly, how they would decide if it is true “objectively”. Given how many models sprang up under COVID it is clear that what is described here cannot be the only way to do it, but the question is: how do we know you did it “better”? This was also my point about institutionalisation. For me, the big lesson from COVID was how much the automatic response of the ABM community seems to be to go in all directions and build yet more models in a tearing hurry rather than synthesise them, challenge them or test them empirically. I foresee a problem both with this response and with our possible unwillingness to be self-aware about it. Governments will not want a million “interesting” models to choose from but one where they have externally checkable reasons to trust it, and that involves us changing our mindset (to be more like climate modellers, for example, Bithell & Edmonds 2021). For example, colleagues and I developed a comparison methodology that allowed for the practical difficulties of direct replication (Chattoe-Brown et al. 2021).
  2. The second quotation which amplifies this point is: “But we do think it is an extensive foundation from which others can start, either picking up some bits and pieces, deviating from it in specific ways or extending it in specific ways.” Again, here one has to ask the right question for progress in modelling. On what scientific grounds should people do this? On what grounds should someone reuse this model rather than start their own? Why isn’t the Dignum et al. model built on another “market leader” to set a good example? (My point about programming languages was purely practical not scientific. Frank is right that the model is no less valid because the programming language was changed but a version that is now unsupported seems less useful as a basis for the kind of further development advocated here.)

I am not totally sure I have understood Frank’s point about data so I don’t want to press it, but my concern was that, generally, the book did not seem to “tap into” relevant empirical research (and this is a wider problem: models mostly talk about other models). It is true that parameter values can be adjusted arbitrarily in sensitivity analysis, but that does not get us any closer to empirically justified parameter values (which would then allow us to attempt validation by the “generative methodology”). Surely it is better to build a model that says something about the data that exists (however imperfect or approximate) than to rely on future data collection or educated guesses. I don’t really have the space to enumerate the times the book said “we did this for simplicity”, “we assumed that”, etc., but the cumulative effect is quite noticeable. Again, we need to be aware of the models which use real data in whatever aspects and “take forward” those inputs so they become modelling standards. This has to be a collective and not an individualistic enterprise.

References

Bithell, M. and Edmonds, B. (2021) The Systematic Comparison of Agent-Based Policy Models – It’s time we got our act together! Review of Artificial Societies and Social Simulation, 11th May 2021. https://rofasss.org/2021/05/11/SystComp/

Chattoe-Brown, E. (2020) A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation. Review of Artificial Societies and Social Simulation, 12th June 2020. https://rofasss.org/2020/06/12/abm-validation-bib/

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation, 24(4). https://www.jasss.org/24/4/reviews/1.html

Chattoe-Brown, E., Gilbert, N., Robertson, D. A., & Watts, C. J. (2021). Reproduction as a Means of Evaluating Policy Models: A Case Study of a COVID-19 Simulation. medRxiv 2021.01.29.21250743; DOI: https://doi.org/10.1101/2021.01.29.21250743

Dignum, F. (2021) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8


Chattoe-Brown, E. (2021) Reply to Frank Dignum. Review of Artificial Societies and Social Simulation, 10th November 2021. https://rofasss.org/2021/11/10/reply-to-dignum/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”

By Frank Dignum

This is a reply to a review in JASSS (Chattoe-Brown 2021) of (Dignum 2021).

Before responding to some of Edmund’s specific concerns, I would like to thank him for the thorough review. I am especially happy with his conclusion that the book is solid enough to make it a valuable contribution to scientific progress in modelling crises. That was the main aim of the book, and it seems it has been achieved. I want to reiterate what we already remarked in the book: we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises. Nor do we claim that our simulations were without limitations. But we do think it is an extensive foundation from which others can start, either picking up some bits and pieces, deviating from it in specific ways, or extending it in specific ways.

The concerns that are expressed by Edmund are certainly valid. I agree with some of them, but will nuance some others. First of all, take the concern about the fact that we seem to abandon the NetLogo implementation and move to Repast. This fact does not make the ABM itself any less valid! In itself it is also an important finding. It is not possible to scale such a complex model in NetLogo beyond around two thousand agents. This is not just a limitation of our particular implementation, but a more general limitation of the platform. It leads to the important challenge of getting more computer scientists involved to develop platforms for social simulations that both support the modelers adequately and provide efficient and scalable implementations.

That the sheer size of the model and the results makes it difficult to trace back the importance and validity of every factor on the results is completely true. We have tried our best to highlight the most important aspects every time. But this leaves questions as to whether we made the right selection of highlighted aspects. As an illustration, we spent two months justifying our results on the simulated effectiveness of the track-and-tracing apps. We basically concluded that we need much better integrated analysis tools in the simulation platform. NetLogo is geared towards creating one simulation scenario, running the simulation and analyzing the results based on a few parameters. This is no longer sufficient when we have a model with which we can create many scenarios and have many parameters that influence a result. We now use R to interpret the flood of data that is produced with every scenario. But R is not really the most user-friendly tool, and it is not specifically meant for analyzing the data from social simulations.

Let me jump to the third concern of Edmund and link it to the analysis of the results as well. While we tried to justify the results of our simulation on the effectiveness of the track-and-tracing app, we compared our simulation with an epidemiologically based model. This is described in chapter 12 of the book. Here we encountered a difference in the assumed number of contacts per day a person has with other persons. One can take the results, as quoted by Edmund as well, of 8 or 13 from empirical work and use them in the model. However, the dispute is not about the number of contacts a person has per day, but about what counts as a contact! For the COVID-19 simulations, standing next to a person in the queue in a supermarket for five minutes can count as a contact, while such a contact is not a meaningful contact in the cited literature. Thus, we see that what we take as empirically validated numbers might not at all be the right ones for our purpose. We have tried to justify all the values of parameters and outcomes in the context for which the simulations were created. We have also done quite some sensitivity analyses, which we did not all report on, just to keep the volume of the book to a reasonable size. Although we think we did a proper job in justifying all results, that does not mean that one cannot have different opinions on the values that some parameters should have. It would be very good to check the influence of changes in these parameters on the results. This would also advance scientific insight into the usefulness of complex models like the one we made!

I really think that an ABM crisis response should be institutional. That does not mean that one institution determines the best ABM, but rather that the ABM that is put forward by that institution is the result of a continuous debate among scientists working on ABMs for that type of crisis. For us, one of the more important outcomes of the ASSOCC project is that we really need much better tools to support the types of simulations that are needed in a crisis situation. However, it is very difficult to develop these tools as a single group. A lot of the effort needed is not publishable and thus not valued in an academic environment. I really think that the efforts that have been put into platforms such as NetLogo and Repast are laudable. They have been made possible by some generous grants and institutional support. We argue that this continuous support is also needed in order to be well equipped for the next crisis. But we do not argue that an institution would by definition have the last word on which is the best ABM. In an ideal case it would accumulate all academic efforts, as is done in the climate models, but even more restricted models would still be better than a thousand individuals all claiming to have a usable ABM while governments have to react quickly to a crisis.

The final concern of Edmund is about the empirical scale of our simulations. This is completely true! Given the scale and details of what we can incorporate, we can only simulate some phenomena and certainly not everything around the COVID-19 crisis. We tried to be clear about this limitation. We had discussions about the Unity interface concerning this as well. It is in principle not very difficult to show people walking in the street, taking a car or a bus, etc. However, we decided to show a more abstract representation just to make clear that our model is not a complete model of a small town functioning in all aspects. We have very carefully chosen which scenarios we can realistically simulate and give some insights into reality from. Maybe we should also have discussed more explicitly all the scenarios that we did not run, with the reasons why they would be difficult or unrealistic in our ABM. One never likes to discuss all the limitations of one’s labor, but it definitely can be very insightful. I have made up for this a little bit by submitting an article to a special issue on predictions with ABM, in which I explain in more detail what the considerations should be when using a particular ABM to try to predict some state of affairs. Anyone interested in learning more about this can contact me.

To conclude this response to the review, I again express my gratitude for the good and thorough work done. The concerns that were raised are all very valuable to consider. What I tried to do in this response is to highlight that these concerns should be taken as a call to arms to put effort into social simulation platforms that give better support for creating simulations for a crisis.

References

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation, 24(4). https://www.jasss.org/24/4/reviews/1.html


Dignum, F. (2021) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

The Systematic Comparison of Agent-Based Policy Models – It’s time we got our act together!

By Mike Bithell and Bruce Edmonds

Model Intercomparison

The recent Covid crisis has led to a surge of new model development and a renewed interest in the use of models as policy tools. While this is in some senses welcome, the sudden appearance of many new models presents a problem in terms of their assessment, the appropriateness of their application and the reconciling of any differences in outcome. Even if they appear similar, their underlying assumptions may differ, their initial data might not be the same, policy options may be applied in different ways, stochastic effects may be explored to a varying extent, and model outputs may be presented in any number of different forms. As a result, it can be unclear which aspects of variation in output between models result from mechanistic, parameter or data differences. Any comparison between models is made tricky by differences in experimental design and selection of output measures.

If we wish to do better, we suggest that a more formal approach to making comparisons between models would be helpful. However, it appears that this is not commonly undertaken in most fields in a systematic and persistent way, except in the field of climate change and closely related fields such as pollution transport or economic impact modelling (although efforts are underway to extend such systematic comparison to ecosystem models – Wei et al., 2014; Tittensor et al., 2018). Examining the way in which this is done for climate models may therefore prove instructive.

Model Intercomparison Projects (MIP) in the Climate Community

Formal intercomparison of atmospheric models goes back at least to 1989 (Gates et al., 1999) with the first atmospheric model inter-comparison project (AMIP), initiated by the World Climate Research Programme. By 1999 this had contributions from all significant atmospheric modelling groups, providing standardised time-series of over 30 model variables for one particular historical decade of simulation, with a standard experimental setup. Comparisons of model mean values with available data helped to reveal overall model strengths and weaknesses: no single model was best at simulating all aspects of the atmosphere, with accuracy varying greatly between simulations. The model outputs also formed a reference base for further inter-comparison experiments, including targets for model improvement and reduction of systematic errors, as well as a starting point for improved experimental design, software and data management standards and protocols for communication and model intercomparison. This led to AMIP II and, subsequently, to a series of Climate Model Intercomparison Projects (CMIP) beginning with CMIP I in 1996. The latest iteration (CMIP 6) is a collection of 23 separate model intercomparison experiments covering atmosphere, ocean, land surface, geo-engineering, and the paleoclimate. This collection is aimed at the upcoming 2021 IPCC process (AR6). Participating projects go through an endorsement process for inclusion (a process agreed with modelling groups), based on 10 criteria designed to ensure some degree of coherence between the various models – a further 18 MIPs are also listed as currently active (https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip6). Groups contribute to a central set of common experiments covering the period 1850 to the near-present. An overview of the whole process can be found in Eyring et al. (2016).

The current structure includes a set of three overarching questions covering the dynamics of the earth system, model systematic biases and understanding possible future change under uncertainty. Individual MIPs may build on this to address one or more of a set of 7 “grand science challenges” associated with the climate. Modelling groups agree to provide outputs in a standard form, obtained from a specified set of experiments under the same design, and to provide standardised documentation to go with their models. Originally (up to CMIP 5), outputs were then added to a central public repository for further analysis; however, the output grew so large under CMIP 6 that the data is now held dispersed across repositories maintained by separate groups.

Other Examples

Two further, more recent examples of collective model development may also be helpful to consider.

Firstly, an informal network collating models across more than 50 research groups has already been generated as a result of the COVID crisis – the Covid Forecast Hub (https://covid19forecasthub.org). This is run by a small number of research groups collaborating with the US Centers for Disease Control and is strongly focussed on the epidemiology. Participants are encouraged to submit weekly forecasts, and these are integrated into a data repository and can be visualized on the website – viewers can look at forward projections, along with associated confidence intervals and model evaluation scores, including those for an ensemble of all models. The focus on forecasts in this case arises out of the strong policy drivers for the current crisis, but the main point is that it is possible to immediately view measures of model performance and to compare the different model types: one clear message that rapidly becomes apparent is that many of the forward projections have 95% (and at some times, even 50%) confidence intervals for incident deaths that more than span the full range of the past historic data. The benefit of comparing many different models in this case is apparent, as many of the historic single-model projections diverge strongly from the data (and the models most in error are not consistently the same ones over time), although the ensemble mean tends to be better.

As a second example, one could consider the Psychological Science Accelerator (PSA: Moshontz et al. 2018, https://psysciacc.org/). This is a collaborative network set up with the aim of addressing the “replication crisis” in psychology: many previously published results in psychology have proved problematic to replicate as a result of small or non-representative sampling or use of experimental designs that do not generalize well or have not been used consistently either within or across studies. The PSA seeks to ensure accumulation of reliable and generalizable evidence in psychological science, based on principles of inclusion, decentralization, openness, transparency and rigour. The existence of this network has, for example, enabled the reinvestigation of previous experiments but with much larger and less nationally biased samples (e.g. Jones et al. 2021).

The Benefits of the Intercomparison Exercises and Collaborative Model Building

More specifically, long-term intercomparison projects help to do the following.

  • Build on past effort. Rather than modellers re-inventing the wheel (or building a new framework) with each new model project, libraries of well-tested and documented models, with data archives, including code and experimental design, would allow researchers to more efficiently work on new problems, building on previous coding effort.
  • Aid replication. Focussed long-term intercomparison projects centred on model results with consistent standardised data formats would allow new versions of code to be quickly tested against historical archives to check whether expected results could be recovered and where differences might arise, particularly if different modelling languages were being used.
  • Help to formalize. While informal code archives can help to illustrate the methods or theoretical foundations of a model, intercomparison projects help to understand which kinds of formal model might be good for particular applications, and which can be expected to produce helpful results for given desired output measures.
  • Build credibility. A continuously updated set of model implementations and assessment of their areas of competence and lack thereof (as compared with available datasets) would help to demonstrate the usefulness (or otherwise) of ABM as a way to represent social systems.
  • Influence Policy (where appropriate). Formal international policy organisations such as the IPCC or the more recently formed IPBES are effective partly through an underpinning of well tested and consistently updated models. As yet it is difficult to see whether such a body would be appropriate or effective for social systems, as we lack the background of demonstrable accumulated and well tested model results.

Lessons for ABM?

What might we be able to learn from the above, if we attempted to use a similar process to compare ABM policy models?

In the first place, the projects started small and grew over time: it would not be necessary, for example, to cover all possible ABM applications at the outset. On the other hand, the latest CMIP iterations include a wide range of different types of model covering many different aspects of the earth system, so that the breadth of possible model types need not be seen as a barrier.

Secondly, the climate inter-comparison project has been persistent for some 30 years – over this time many models have come and gone, but the history of inter-comparisons allows for an overview of how well these models have performed over time – data from the original AMIP I models is still available on request, supporting assessments concerning long-term model improvement.

Thirdly, although climate models are complex – implementing a variety of different mechanisms in different ways – they can still be compared by use of standardised outputs, and at least some (although not necessarily all) have been capable of direct comparison with empirical data.

Finally, an agreed experimental design and public archive for documentation and output that is stable over time is needed; this needs to be done via a collective agreement among the modelling groups involved so as to ensure a long-term buy-in from the community as a whole, so that there is a consistent basis for long-term model development, building on past experience.

The need for aligning or reproducing ABMs has long been recognised within the community (Axtell et al. 1996; Edmonds & Hales 2003), but this has been on a one-to-one basis for verifying the specification of models against their implementation, although Hales et al. (2003) discuss a range of possibilities. However, this is far from a situation where many different models of basically the same phenomena are systematically compared – that would be a larger-scale collaboration lasting over a longer time span.

The community has already established a standardised form of documentation in the ODD protocol. Sharing of model code is also becoming routine, and can be easily achieved through CoMSES, GitHub or similar. The sharing of data in a long-term archive may require more investigation. As a starting project, COVID-19 provides an ideal opportunity for setting up such a model inter-comparison project – multiple groups already have running examples, and a shared set of outputs and experiments should be straightforward to agree on. This would potentially form a basis for forward-looking experiments designed to assist with possible future pandemic problems, and a basis on which to build further features into the existing disease-focussed modelling, such as the effects of economic, social and psychological issues.

Additional Challenges for ABMs of Social Phenomena

Nobody supposes that modelling social phenomena is going to have the same set of challenges that climate change models face. Some of the differences include:

  • The availability of good data. Social science is bedevilled by a paucity of the right kind of data. Although an increasing amount of relevant data is being produced, there are commercial, ethical and data protection barriers to accessing it and the data rarely concerns the same set of actors or events.
  • The understanding of micro-level behaviour. Whilst the micro-level understanding of our atmosphere is very well established, that of the behaviour of the most important actors (humans) is not. However, it may be that better data might partially substitute for a generic behavioural model of decision-making.
  • Agreement upon the goals of modelling. Although there will always be considerable variation in terms of what is wanted from a model of any particular social phenomenon, a common core of agreed objectives would help focus any comparison and give confidence via ensembles of projections. Although the MIPs and the Covid Forecast Hub are focussed on prediction, empirical explanation may be more important in other areas.
  • The available resources. ABM projects tend to be add-ons to larger endeavours and based around short-term grant funding. The funding for big ABM projects is yet to be established, not having the equivalent of weather forecasting to piggy-back on.
  • Persistence of modelling teams/projects. ABM tends to be quite short-term with each project developing a new model for a new project. This has made it hard to keep good modelling teams together.
  • Deep uncertainty. Whilst the set of possible factors and processes involved in a climate change model is well established, which social mechanisms need to be involved in any model of a particular social phenomenon is unknown. For this reason, there is deep disagreement about the assumptions to be made in such models, as well as sharp divergence in outcome due to changes brought about by a particular mechanism not included in a model. Whilst uncertainty in known mechanisms can be quantified, assessing the impact of such deep uncertainty is much harder.
  • The sensitivity of the political context. Even in the case of Climate Change, where the assumptions made are relatively well understood and arrived at on objective bases, the modelling exercise and its outcomes can be politically contested. In other areas, where the representation of people’s behaviour might be key to model outcomes, this will need even more care (Aodha & Edmonds 2017).

However, some of these problems were solved in the case of Climate Change as a result of the CMIP exercises and the reports they ultimately resulted in. Over time the development of the models also allowed for a broadening and updating of modelling goals, starting from a relatively narrow initial set of experiments. Ensuring the persistence of individual modelling teams is easier in the context of an internationally recognised comparison project, because resources may be easier to obtain and there is a consistent central focus. The modelling projects became longer-term as individual researchers could establish a career doing just climate change modelling and the importance of the work was increasingly recognised. An ABM model-comparison project might help solve some of these problems as the importance of its work is established.

Towards an Initial Proposal

The topic chosen for this project should be something where (a) there is enough public interest to justify the effort, and (b) a number of models with a similar purpose in mind are being developed. At the current stage, this suggests dynamic models of COVID spread, but there are other possibilities, including transport models (where people go and who they meet) or criminological models (where and when crimes happen).

Whichever ensemble of models is focussed upon, these models should be compared on a core of standards, with the same (a sketch of what such a shared specification might record follows the list):

  • Start and end dates (but not necessarily the same temporal granularity).
  • Set of regions or cases covered.
  • Population data (though possibly enhanced with extra data and maybe scaled population sizes).
  • Initial conditions in terms of the population.
  • Core of agreed output measures (but maybe others as well).
  • Checks of agreement against a core set of cases, with agreed data sets.
  • Standard reporting format (though with a discussion section for further/other observations).
  • Documentation standards, with code that is open access.
  • Minimum number of runs with different random seeds.
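To make the idea of an agreed core concrete, here is one possible shape for such a shared experiment specification, sketched in Java. This is a hedged illustration only, not a proposed standard: the ComparisonSpec name, its fields and the example values are all invented for this post.

    import java.time.LocalDate;
    import java.util.List;

    // A hedged sketch (one possible shape, not an agreed standard) of what a shared
    // experiment specification for an ABM intercomparison might record. The record
    // name, fields and example values below are all invented for this illustration.
    public record ComparisonSpec(
            LocalDate start,             // common start date
            LocalDate end,               // common end date
            List<String> regions,        // agreed set of regions or cases
            String populationDataset,    // identifier of the shared population data
            List<String> outputMeasures, // core set of agreed output measures
            int minRuns                  // minimum number of runs with distinct seeds
    ) {
        public static void main(String[] args) {
            ComparisonSpec spec = new ComparisonSpec(
                    LocalDate.of(2020, 3, 1), LocalDate.of(2021, 3, 1),
                    List.of("Region A", "Region B"),
                    "shared-census-extract-v1", // illustrative dataset identifier
                    List.of("daily incident cases", "daily incident deaths"),
                    30);
            System.out.println(spec);
        }
    }

The point of writing it down this way is only that an agreed core is small enough to be specified explicitly and checked mechanically; the actual content would be negotiated collectively by the participating teams.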

Any modeller/team that had a suitable model and was willing to adhere to the rules would be welcome to participate (commercial, government or academic), and these teams would collectively decide the rules and development, and write any reports on the comparisons. Other interested stakeholder groups could be involved, including professional/academic associations, NGOs and government departments, but in a consultative role providing wider critique – it is important that the terms and reports from the exercise be independent of any particular interest or authority.

Conclusion

We call upon those who think ABMs have the potential to usefully inform policy decisions to work together, in order that the transparency and rigour of our modelling matches our ambition. Whilst model comparison exercises of the kind described are important for any simulation work, particular care needs to be taken when the outcomes can affect people’s lives.

References

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822. (A version is at http://cfpm.org/discussionpapers/236)

Axtell, R., Axelrod, R., Epstein, J. M., & Cohen, M. D. (1996). Aligning simulation models: A case study and results. Computational & Mathematical Organization Theory, 1(2), 123-141. https://link.springer.com/article/10.1007%2FBF01299065

Edmonds, B., & Hales, D. (2003). Replication, replication and replication: Some hard lessons from model alignment. Journal of Artificial Societies and Social Simulation, 6(4), 11. http://jasss.soc.surrey.ac.uk/6/4/11.html

Eyring, V., Bony, S., Meehl, G. A., Senior, C. A., Stevens, B., Stouffer, R. J., & Taylor, K. E. (2016). Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geoscientific Model Development, 9(5), 1937–1958. https://doi.org/10.5194/gmd-9-1937-2016

Gates, W. L., Boyle, J. S., Covey, C., Dease, C. G., Doutriaux, C. M., Drach, R. S., Fiorino, M., Gleckler, P. J., Hnilo, J. J., Marlais, S. M., Phillips, T. J., Potter, G. L., Santer, B. D., Sperber, K. R., Taylor, K. E., & Williams, D. N. (1999). An Overview of the Results of the Atmospheric Model Intercomparison Project (AMIP I). In Bulletin of the American Meteorological Society (Vol. 80, Issue 1, pp. 29–55). American Meteorological Society. https://doi.org/10.1175/1520-0477(1999)080<0029:AOOTRO>2.0.CO;2

Hales, D., Rouchier, J., & Edmonds, B. (2003). Model-to-model analysis. Journal of Artificial Societies and Social Simulation, 6(4), 5. http://jasss.soc.surrey.ac.uk/6/4/5.html

Jones, B.C., DeBruine, L.M., Flake, J.K. et al. To which world regions does the valence–dominance model of social perception apply?. Nat Hum Behav 5, 159–169 (2021). https://doi.org/10.1038/s41562-020-01007-2

Moshontz, H. and 85 others (2018) The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network. Advances in Methods and Practices in Psychological Science, 1(4), 501-515. https://doi.org/10.1177/2515245918797607

Tittensor, D. P., Eddy, T. D., Lotze, H. K., Galbraith, E. D., Cheung, W., Barange, M., Blanchard, J. L., Bopp, L., Bryndum-Buchholz, A., Büchner, M., Bulman, C., Carozza, D. A., Christensen, V., Coll, M., Dunne, J. P., Fernandes, J. A., Fulton, E. A., Hobday, A. J., Huber, V., … Walker, N. D. (2018). A protocol for the intercomparison of marine fishery and ecosystem models: Fish-MIP v1.0. Geoscientific Model Development, 11(4), 1421–1442. https://doi.org/10.5194/gmd-11-1421-2018

Wei, Y., Liu, S., Huntzinger, D. N., Michalak, A. M., Viovy, N., Post, W. M., Schwalm, C. R., Schaefer, K., Jacobson, A. R., Lu, C., Tian, H., Ricciuto, D. M., Cook, R. B., Mao, J., & Shi, X. (2014). The north american carbon program multi-scale synthesis and terrestrial model intercomparison project – Part 2: Environmental driver data. Geoscientific Model Development, 7(6), 2875–2893. https://doi.org/10.5194/gmd-7-2875-2014


Bithell, M. and Edmonds, B. (2021) The Systematic Comparison of Agent-Based Policy Models – It’s time we got our act together! Review of Artificial Societies and Social Simulation, 11th May 2021. https://rofasss.org/2021/05/11/SystComp/


 

Should the family size be used in COVID-19 vaccine prioritization strategy to prevent variants diffusion? A first investigation using a basic ABM

By Gianfranco Giulioni

Department of Philosophical, Pedagogical and Economic-Quantitative Sciences, University of Chieti-Pescara, Italy

(A contribution to the: JASSS-Covid19-Thread)

At the time of writing, a few countries have made significant progress in vaccinating their populations, while many others are still taking their first steps.

Despite the importance of COVID-19’s adverse effects on society, there seems to be too little debate on the best way to progress the vaccination process once the front-line healthcare personnel have been immunized.

The strategies adopted in the front-runner countries prioritize people by health fragility and age. The effectiveness of this strategy is supported, for example, by Bubar et al. (2021), who provide results based on a detailed age-stratified Susceptible-Exposed-Infectious-Recovered (SEIR) model.

During the COVID infection outbreak, the importance of families in the diffusion of the virus was stressed by experts and media. This observation motivates the present effort, which investigates whether family size can play a role in vaccine prioritization strategies.

This document describes an ABM developed with the intent of analyzing the question. The model is basic and has the essential features needed to investigate the issue.

As highlighted by Squazzoni et al. (2020), a careful investigation of pandemics requires the cooperation of many scientists from different disciplines. To ease this cooperation, and in the interest of transparency (Barton et al. 2020), the code is made publicly available so that those who might be interested can pursue further developments and accurate parameter calibration. (https://github.com/gfgprojects/abseir_family)

The following part of the document will sketch the model functioning and provide some considerations on families’ effects on vaccination strategy.

Brief Model Description

The ABSEIR-family model code is written in Java, taking advantage of the Repast Simphony modeling system (https://repast.github.io/).

Figure 1 gives an overview of the current development state of the model core classes.

Briefly, the code handles the relevant events of a pandemic:

  • the appearance of the first case,
  • the infection diffusion by contacts,
  • the introduction of measures for diffusion limitation such as quarantine,
  • the activation and implementation of the immunization process.

The distinguishing feature of the model is that individuals are grouped in families. This grouping allows considering two different diffusion speeds: fast among family members and slower when contacts involve two individuals from different families.

Figure 1: relationships between the core classes of the ABSEIR-family model and their variables and methods.

It is perhaps worth describing the evolution of an individual state to sketch the functioning of the model.

An individual’s dynamics are guided by a variable named infectionAge. In the beginning, all individuals have this variable at zero. At each time step, the program increases the infectionAge of every individual whose value is non-zero.

When an individual has contact with an infectious individual, s/he may or may not get the infection. If infected, the individual enters the latency period: her/his infectionAge is set to 1 and the variable starts moving ahead with time, but s/he is not yet infectious. Individuals whose infectionAge is greater than the latency period length (ll) become infectious.

At each time step, an infectious individual meets all her/his family members and mof randomly chosen non-family members. S/he passes on the infection with probability pif to family members and pof to non-family members. The infection can be passed on only if the contacted individual’s infectionAge equals zero and s/he is not in quarantine.

The infectious phase ends when the infection is discovered (quarantine) or when the individual recovers, i.e., when the infectionAge is greater than the latency period length plus the infection length parameter (li).

At the present stage of development, the code does not handle adverse post-infection outcomes: all infected individuals in this model recover. The infectionAge is set to a negative value at recovery because recovered individuals stay immune for a while (lr). Similarly, vaccination sets the individual’s infectionAge to a (high) negative value (lv).
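To make the state logic above concrete, here is a minimal, self-contained Java sketch of the infectionAge bookkeeping. It is not the actual ABSEIR-family code (which is built on Repast Simphony): the class name and the immunity lengths LR and LV are illustrative assumptions, while LL and LI take the values of ll and li reported in table 1 below.

    import java.util.Random;

    // Minimal sketch of the infectionAge state logic described above.
    // NOT the actual ABSEIR-family code: LR and LV are assumed values,
    // while LL and LI follow ll and li in table 1.
    public class IndividualSketch {
        static final int LL = 3;    // latency length (ll)
        static final int LI = 5;    // infectious period length (li)
        static final int LR = 60;   // immunity length after recovery (lr, assumed)
        static final int LV = 180;  // immunity length after vaccination (lv, assumed)

        int infectionAge = 0;       // 0 = susceptible; > 0 = infected; < 0 = immune
        boolean quarantined = false;

        boolean isSusceptible() { return infectionAge == 0 && !quarantined; }

        boolean isInfectious() {
            return infectionAge > LL && infectionAge <= LL + LI && !quarantined;
        }

        // Called once per time step: any non-zero infectionAge moves ahead with time.
        void step() {
            if (infectionAge > 0) {
                infectionAge++;
                if (infectionAge > LL + LI) infectionAge = -LR; // recovery: immune for LR steps
            } else if (infectionAge < 0) {
                infectionAge++; // immunity wears off as the counter climbs back to zero
            }
        }

        // Contact with an infectious individual; p is pif or pof depending on the tie.
        void expose(double p, Random rng) {
            if (isSusceptible() && rng.nextDouble() < p) infectionAge = 1; // enter latency
        }

        void vaccinate() { if (infectionAge == 0) infectionAge = -LV; }

        public static void main(String[] args) {
            Random rng = new Random(1);
            IndividualSketch a = new IndividualSketch();
            a.expose(1.0, rng); // force an infection for demonstration
            for (int t = 0; t < 12; t++) {
                System.out.println("t=" + t + " infectionAge=" + a.infectionAge
                        + " infectious=" + a.isInfectious());
                a.step();
            }
        }
    }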

At the present state of the pandemic’s evolution, it is perhaps useful to use the model to get insights into how family size could affect the effectiveness of the vaccination process. This will be attempted hereafter.

Highlighting the relevance of family size by an ad-hoc example

The relevance of family size in vaccination strategy can be shown using the following ad-hoc example.

Suppose there are two COVID-free villages (say villages A and B) whose health authorities are about to start vaccinations to prevent the disease from spreading.

The villages are identical in all other respects except the family size distribution. Each village has 50 inhabitants, but village A has 10 families with five members each, while village B has two five-member families and 40 singletons. Five vaccines arrive each day in each village.

Some additional extreme assumptions are made to make differences straightforward.

First, healthy family members are infected for sure by a member who has contracted the virus. Second, each individual has the same number of contacts (say n) outside the family, and the probability of passing on the virus in external contacts is lower than 1. Symptoms take several days to show up.

Now, the health authorities are about to start the vaccination process and have to decide how to employ the available vaccines.

Intuition would suggest that Village B’s health authority should immunize large families first. Indeed, if case zero arrives at the end of the second vaccination day, the spread of the disease among the population should be limited because the virus can be passed on by external contacts only; and the probability of transmitting the virus in external contacts is lower than in the family.

But, should this strategy be used even by village A health authority?

To answer this question, we compare the family-based vaccination strategy with a random-based vaccination strategy. Under the random-based strategy, we expect one member of each family to be immunized by the end of the second vaccination day. Under the family-based strategy, two families are fully immunized by the end of the second vaccination day. Now, suppose one of the non-immunized citizens gets the virus at the end of day two. It is easy to verify that there will be one more infected in the family-based strategy (all five members of the family) than in the random-based strategy (four members, because one of them was immunized before). Furthermore, this implies that there will be n additional dangerous external contacts in the family-based strategy compared with the random-based strategy.

These observations lead us to conclude that a random vaccination strategy will slow down the infection dynamics in village A while speeding up infections in village B; the opposite is true for the family-based immunization strategy.
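The arithmetic of the example for village A can be checked with a trivial computation. The sketch below is illustrative only; the value of n (external contacts per person) is an arbitrary assumption.

    // Toy arithmetic for village A in the example above (illustrative only;
    // the value of n, the external contacts per person, is an arbitrary assumption).
    public class VillageToy {
        public static void main(String[] args) {
            int familySize = 5;               // village A: 10 families of five
            int immunizedPerFamilyRandom = 1; // 10 doses spread evenly over 10 families
            int n = 3;                        // external contacts per person (assumed)

            // Family-based strategy: case zero lands in a fully susceptible family.
            int infectedFamilyBased = familySize;
            // Random-based strategy: one member of that family was already immunized.
            int infectedRandomBased = familySize - immunizedPerFamilyRandom;

            System.out.println("family-based infections in the family: " + infectedFamilyBased); // 5
            System.out.println("random-based infections in the family: " + infectedRandomBased); // 4
            System.out.println("extra dangerous external contacts: "
                    + n * (infectedFamilyBased - infectedRandomBased));                           // = n
        }
    }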

Some simulation exercises

In this part of the document, the model described above will be used to further compare the family-based and random-based vaccination strategies against the appearance of a new case (or variant), in a situation similar to that described in the example but with a more realistic setting.

As one can easily imagine, the family size distribution and COVID transmission risk in families are crucial to our simulation exercises. It is therefore important to gather real-world information for these phenomena. Fortunately, recent scientific contributions can help.

Several authors point out that a Poisson distribution is a good statistical model of the family size distribution. This distribution is suitable because it is characterized by a single parameter, i.e., its average, but it has the drawback of assigning positive probability to the value zero. Recently, Jarosz (2021) confirmed the goodness of the Poisson distribution for modeling family size and showed that shifting it by one unit is a valid way of solving the zero family size problem.
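For illustration, a one-shifted Poisson draw is easy to implement. The following hedged Java sketch uses Knuth’s multiplication method; it is not taken from the ABSEIR-family code, and the helper names are mine. Since the shift adds one to every draw, the underlying Poisson mean must be set to the target average family size minus one.

    import java.util.Random;

    // Sketch: draw family sizes from a Poisson distribution shifted by one unit,
    // so that the minimum size is 1 and the mean equals the target average family
    // size (afs). Illustrative code, not taken from the ABSEIR-family repository.
    public class FamilySizeSampler {

        // Knuth's multiplication method for sampling Poisson(lambda).
        static int poisson(double lambda, Random rng) {
            double l = Math.exp(-lambda);
            double p = 1.0;
            int k = 0;
            do {
                k++;
                p *= rng.nextDouble();
            } while (p > l);
            return k - 1;
        }

        // Shifted by one: 1 + Poisson(afs - 1) has mean afs and never returns zero.
        static int familySize(double afs, Random rng) {
            return 1 + poisson(afs - 1.0, rng);
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            double afs = 2.5; // one of the two average family sizes used below
            for (int i = 0; i < 5; i++) {
                System.out.println(familySize(afs, rng));
            }
        }
    }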

Furthermore, average family sizes data can be easily found using, for example, the OECD family database (http://www.oecd.org/social/family/database.htm).

The current version of the database (updated on 06-12-2016) presents data for 2015, with some exceptions. It shows that the average size of families in OECD countries is 2.46, ranging from Mexico (3.93) to Sweden (1.8).

The result in Metlay et al. (2021) guides the choice of the within-family infection parameter: they provide evidence of an overall household infection risk of 10.1%.

The simulation exercises consist of a parameter sensitivity analysis with respect to the benchmark parameter set reported below.

The simulation is initialized by loading the family size distribution. Two alternative distributions are used, each tuned to obtain a system with a total number of individuals close to 20,000. The two distributions have different average family sizes (afs) and are shown in figure 2.

Figure 2: two family size distributions used to initialize the simulation. The figures by the dots give the frequency of the corresponding size. Black squares relate to the distribution with an average of 2.5; red circles relate to the distribution with an average of 3.5.

The description of the vaccination strategy gives an opportunity to list the other relevant parameters. The immunization center is endowed with nv doses of vaccine at each time step, starting from time tv. At time t0, the state of one individual is changed from susceptible to infected. This subject (case zero) is taken from a family having three susceptibles among its members.

Case zero undergoes the same process as all other following infected individuals described above.

The relevant parameters of the simulations are reported in table 1.

var | description | values | reference
ni  | number of individuals | ≅20000 |
afs | average family size | 2.5; 3.5 | OECD
nv  | number of vaccine doses available at each time step | 50; 100; 150 |
tv  | vaccination starting time | 1 |
t0  | case zero appearance time | 10 |
ll  | length of latency | 3 | Bubar et al. (2021)
li  | length of infectious period | 5 | Bubar et al. (2021)
pif | probability to infect a family member | 0.1 | Metlay et al. (2021)
pof | probability to infect a non-family individual | 0.01; 0.02; 0.03 |
mof | number of non-family contacts of an infectious individual | 10 |

Table 1: relevant parameters of the model.

We are now going to discuss the results of our simulation exercises. We focus particularly on the number of people infected up to a given point in time.

Due to the presence of random elements, each run has a different trajectory. We limit these effects as much as possible to allow ceteris paribus comparisons. For example, we keep the family size distribution equal across runs by loading the distributions displayed in figure 2 instead of using the run-time random number generator. Again, we set the number of non-family contacts (mof) equal for all the agents, although the code could set it randomly at each time step. Despite these randomness reductions, significant differences in the dynamics remain within the same parametrization because of randomness in the network of contacts.

To allow comparisons among different parametrizations in the presence of different evolutions, we use the cross-section distribution of the total number of infected at the end of the infection process (i.e., time 200).
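As an aside for readers who want to reproduce this kind of analysis, the following hedged Java sketch shows one way to turn the per-run totals into an empirical cumulative distribution function (ecdf) like those plotted in figure 3 below; the variable names and data are illustrative and not taken from the model code.

    import java.util.Arrays;

    // Sketch: build an empirical cumulative distribution function (ecdf) from the
    // total number of infected at time 200 across runs (one value per random seed).
    // Illustrative code and data, not taken from the ABSEIR-family repository.
    public class Ecdf {

        // Returns pairs (x, F(x)): for each sorted outcome, the fraction of runs <= it.
        static double[][] ecdf(int[] totals) {
            int[] x = totals.clone();
            Arrays.sort(x);
            double[][] points = new double[x.length][2];
            for (int i = 0; i < x.length; i++) {
                points[i][0] = x[i];
                points[i][1] = (i + 1) / (double) x.length;
            }
            return points;
        }

        public static void main(String[] args) {
            int[] totalsAtT200 = {120, 95, 300, 150, 95}; // made-up run outcomes
            for (double[] p : ecdf(totalsAtT200)) {
                System.out.printf("infected <= %.0f : %.2f%n", p[0], p[1]);
            }
        }
    }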

Figure 3 reports the empirical cumulative distribution functions (ecdfs) of several parametrizations. To make the figure easy to read, we arrange the charts in a grid with the average family size (afs) on the abscissa and the number of available vaccines (nv) on the ordinate. From above, we know that two values of afs (2.5 and 3.5) and three values of nv (50, 100 and 150) are considered; therefore figure 3 is made up of six charts.

Each chart reports the ecdfs corresponding to the three pof levels reported in table 1. In particular, circles denote ecdfs for pof = 0.01, squares for pof = 0.02 and triangles for pof = 0.03. In the end, choosing a parameter triplet (afs, nv, pof) identifies two ecdfs: the red one is for the random-based, while the black one is for the family-based vaccination strategy. The family-based vaccination strategy prioritizes families with the highest number of members not yet infected.

Figure 3 shows mixed results: the random-based vaccination strategy outperforms the family-based one (the red line is above the black one) for some parameter combinations, while the reverse holds for others. In particular, the random-based strategy tends to dominate the family-based one in the case of larger families (afs = 3.5) at low and high vaccination levels (nv = 50 and 150). The opposite is true with smaller families at the same vaccination levels. The intermediate vaccination level provides exceptions.

Figure 3: empirical cumulative distribution functions for several parametrizations. Each ecdf is built by taking the number of infected people at period 200 across 100 runs with different random seeds for each parametrization.

It is perhaps useful to highlight how, in the model, the family-based vaccination strategy stops the diffusion of a new wave or variant with a significant probability for the smaller average family size at low and high vaccination levels (bottom-left and top-left charts) and for the larger average family size at the middle vaccination level (middle-right chart).

A conclusive note

At present, the model is very simple and can be improved in several directions. The most useful improvement would probably be the inclusion of family-specific information. Setting up the model with additional information on each family member’s age or health state would allow overcoming the “universal mixing assumption” (Watts et al., 2020) currently in the model. Furthermore, additional vaccination prioritizations based on multiple criteria (such as vaccinating the families of the most fragile or elderly) could be compared.

Initializing the model with census data of a local community could give a chance to analyze a more realistic setting in the wake of Pescarmona et al. (2020) and be more useful and understandable to (local) policy makers (Edmonds, 2020).

Developing the model to provide estimates of hospitalization and mortality is another needed step towards a sounder comparison of vaccination strategies.

Vaccinating by families could balance direct protection (vaccinating the highest-risk individuals) and indirect protection, i.e., limiting the probability that the virus reaches the most fragile by vaccinating people with many contacts. It could also have positive economic effects, relaunching, for example, family tourism. However, it cannot be implemented if it risks worsening the pandemic.

The present text aims only at posing a question. Further assessments following Squazzoni et al.’s (2020) recommendations are needed.

References

Barton, C.M. et al. (2020) Call for transparency of COVID-19 models. Science, 368(6490), 482-483. doi:10.1126/science.abb8637

Bubar, K.M. et al. (2021) Model-informed COVID-19 vaccine prioritization strategies by age and serostatus. Science 371, 916–921. doi:10.1126/science.abe6959

Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/

Jarosz, B. (2021) Poisson Distribution: A Model for Estimating Households by Household Size. Population Research and Policy Review, 40, 149–162. doi:10.1007/s11113-020-09575-x

Metlay, J.P., Haas, J.S., Soltoff, A.E. and Armstrong, K.A. (2021) Household Transmission of SARS-CoV-2. JAMA Network Open, 4(2):e210304. doi:10.1001/jamanetworkopen.2021.0304

Pescarmona, G., Terna, P., Acquadro, A., Pescarmona, P., Russo, G., and Terna, S. (2020) How Can ABM Models Become Part of the Policy-Making Process in Times of Emergencies – The S.I.S.A.R. Epidemic Model. Review of Artificial Societies and Social Simulation, 20th Oct 2020. https://rofasss.org/2020/10/20/sisar/

Watts, C.J., Gilbert, N., Robertson, D., Droy, L.T., Ladley, D and Chattoe-Brown, E. (2020) The role of population scale in compartmental models of COVID-19 transmission. Review of Artificial Societies and Social Simulation, 14th August 2020. https://rofasss.org/2020/08/14/role-population-scale/

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Giulioni, G. (2021) Should the family size be used in COVID-19 vaccine prioritization strategy to prevent variants diffusion? A first investigation using a basic ABM. Review of Artificial Societies and Social Simulation, 15th April 2021. https://rofasss.org/2021/04/15/famsize/


 

The Policy Context of Covid19 Agent-Based Modelling

By Edmund Chattoe-Brown

(A contribution to the: JASSS-Covid19-Thread)

In the recent discussions about the role of ABM and COVID, there seems to be an emphasis on the purely technical dimensions of modelling. This obviously involves us “playing to our strengths” but unfortunately it may reduce the effectiveness of the policy contributions we can make. Here are three contextual aspects of policy offered as a contrast/corrective.

What is “Good” Policy?

Obviously from a modelling perspective good policy involves achieving stated goals. So a model that suggests a lower death rate (or less taxing of critical care facilities) under one intervention rather than another is a potential argument for that intervention. (Though of course how forceful the argument is depends on the quality of the model.) But the problem is that policy is predominantly a political and not a technical process (related arguments are made by Edmonds 2020). The actual goals by which a policy is evaluated may not be limited to the obvious technical ones (even if that is what we hear most about in the public sphere) and, most problematically, there may be goals which policy makers are unwilling to disclose. Since we do not know what these goals are, we cannot tell whether their ends are legitimate (having to negotiate privately with the powerful to achieve anything) or less so (getting re-elected as an end in itself).

Of course, by its nature (being based on both power and secrecy), this problem may be unfixable but even awareness of it may change our modelling perspective in useful ways. Firstly, when academic advice is accused of irrelevance, the academics can only ever be partly to blame. You can only design good policy to the extent that the policy maker is willing to tell you the full evaluation function (to the extent that they know it, of course). Obviously, if policy is being measured by things you can’t know about, your advice is at risk of being of limited value. Secondly, with this in mind, we may be able to gain some insight into the hidden agenda of policy by looking at what kind of suggestions tend to be accepted and rejected. Thirdly, once we recognise that there may be “unknown unknowns” we can start to conjecture intelligently about what these could be and take some account of them in our modelling strategies. For example, how many epidemic models consider the financial costs of interventions even approximately? Is the idea that we can and will afford whatever it takes to reduce deaths a blind spot of the “medical model”?

When and How to Intervene

There used to be an (actually rather odd) saying: “You can’t get a baby in a month by making nine women pregnant”. There has been a huge upsurge in interest regarding modelling and its relationship to policy since the start of the COVID crisis (of which this theme is just one example) but realising the value of this interest currently faces significant practical problems. Data collection is even harder than usual (as is scholarship in general), there is a limit to how fast good research can ever be done, peer review takes time and so on. The question here is whether any amount of rushing around at the present moment will compensate for neglected activities when scholarship was easier and had more time (an argument also supported by Bithell 2018). The classic example is the muttering in the ABM community about the Ferguson model being many thousands of lines of undocumented C code. Now we are in a crisis, even making the model available was a big ask, let alone making it easier to read so that people might “heckle” it. But what stopped it being available, documented, externally validated and so on before COVID? What do we need to do so that next time there is a pandemic crisis, which there surely will be, “we” (the modelling community very broadly defined) are able to offer the government a “ready” model that has the best features of various modelling techniques, evidence of unfudgeable quality against data, relevant policy scenarios and so on? (Specifically, how will ABM make sure it deserves to play a fit part in this effort?) Apart from the models themselves, what infrastructures, modelling practices, publishing requirements and so on do we need to set up and get working well while we have the time? In practice, given the challenges of making effective contributions right now (and the proliferation of research that has been made available without time for peer review may be actively harmful), this perspective may be the most important thing we can realistically carry into the “post lockdown” world.

What Happens Afterwards?

ABM has taken such a long time to “get to” policy based on data that looking further than the giving of such advice simply seems to have been beyond us. But since policy is what actually happens, we have a serious problem with counterfactuals. If the government decides to “flatten the curve” rather than seek “herd immunity” then we know how the policy implemented relates to the model “findings” (for good or ill) but not how the policy that was not implemented does. Perhaps the outturn of the policy that looked worse in the model would actually have been better had it been implemented?

Unfortunately (this is not a typo), we are about to have an unprecedentedly large social data set of comparative experiments in the nature and timing of epidemiological interventions, but ABM needs to be ready and willing to engage with this data. I think that ABM probably has a unique contribution to make in “endogenising” the effects of policy implementation and compliance (rather than seeing these, from a “model fitting” perspective, as structural changes to parameter values) but to make this work, we need to show much more interest in data than we have to date.

In 1971, Dutton and Starbuck, in a worryingly neglected article (cited only once in JASSS since 1998 and even then not in respect of model empirics) reported that 81% of the models they surveyed up to 1969 could not achieve even qualitative measurement in both calibration and validation (with only 4% achieving quantitative measurement in both). As a very rough comparison (but still the best available), Angus and Hassani-Mahmooei (2015) showed that just 13% of articles in JASSS published between 2010 and 2012 displayed “results elements” both from the simulation and using empirical material (but the reader cannot tell whether these are qualitative or quantitative elements or whether their joint presence involves comparison as ABM methodology would indicate). It would be hard to make the case that the situation in respect to ABM and data has therefore improved significantly in 4 decades and it is at least possible that it has got worse!

For the purposes of policy making (in the light of the comments above), what matters of course is not whether the ABM community believes that models without data continue to make a useful contribution but whether policy makers do.

References

Angus, S. D. and Hassani-Mahmooei, B. (2015) “Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012, Journal of Artificial Societies and Social Simulation, 18(4), 16. doi:10.18564/jasss.2952

Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb

Dutton, John M. and Starbuck, William H. (1971) Computer Simulation Models of Human Behavior: A History of an Intellectual Technology. IEEE Transactions on Systems, Man, and Cybernetics, SMC-1(2), 128–171. doi:10.1109/tsmc.1971.4308269

Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/


Chattoe-Brown, E. (2020) The Policy Context of Covid19 Agent-Based Modelling. Review of Artificial Societies and Social Simulation, 4th May 2020. https://rofasss.org/2020/05/04/policy-context/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

What more is needed for Democratically Accountable Modelling?

By Bruce Edmonds

(A contribution to the: JASSS-Covid19-Thread)

In the context of the Covid19 outbreak, the (Squazzoni et al 2020) paper argued for the importance of making complex simulation models open (a call reiterated in Barton et al 2020) and that relevant data needs to be made available to modellers. These are important steps but, I argue, more is needed.

The Central Dilemma

The crux of the dilemma is as follows. Complex and urgent situations (such as the Covid19 pandemic) are beyond the human mind to encompass – there are just too many possible interactions and complexities. For this reason one needs complex models, to leverage some understanding of the situation as a guide for what to do. We cannot directly understand the situation, but we can understand some of what a complex model tells us about the situation. The difficulty is that such models are, themselves, complex and difficult to understand. It is easy to deceive oneself using such a model. Professional modellers only just manage to get some understanding of such models (and then, usually, only with help and critique from many other modellers and having worked on it for some time: Edmonds 2020) – politicians and the public have no chance of doing so. Given this situation, any decision-makers or policy actors are in an invidious position – whether to trust what the expert modellers say if it contradicts their own judgement. They will be criticised either way if, in hindsight, that decision appears to have been wrong. Even if the advice supports their judgement there is the danger of giving false confidence.

What options does such a policy maker have? In authoritarian or secretive states there is no problem (for the policy makers) – they can listen to who they like (hiring or firing advisers until they get advice they are satisfied with), and then either claim credit if it turned out to be right or blame the advisers if it was not. However, such decisions are very often not value-free technocratic decisions, but ones that involve complex trade-offs that affect people’s lives. In these cases the democratic process is important for getting good (or at least accountable) decisions. However, democratic debate and scientific rigour often do not mix well [note 1].

A Cautionary Tale

As discussed in Aodha & Edmonds (2017), scientific modelling can make things worse, as in the case of the North Atlantic Cod Fisheries Collapse. In this case, the modellers became enmeshed within the standards and wishes of those managing the situation and ended up confirming their wishful thinking. Technocratising the decision-making about how much it was safe to catch had the effect of narrowing the debate down to particular measurement and modelling processes (which turned out to be gravely mistaken). In doing so the modellers contributed to the collapse of the industry, with severe social and ecological consequences.

What to do?

How to best interface between scientific and policy processes is not clear, however some directions are becoming apparent.

  • That the process of developing and giving advice to policy actors should become more transparent, including who is giving advice and on what basis. In particular, any reservations or caveats that the experts add should be open to scrutiny so the line between advice (by the experts) and decision-making (by the politicians) is clearer.
  • That such experts are careful not to over-state or hype their own results, for example by implying that their model can predict (or forecast) the future of complex situations and so anticipate the effects of policy before implementation (de Matos Fernandes and Keijzer 2020). Often a reliable assessment of results only occurs after a period of academic scrutiny and debate.
  • That policy actors learn a little about modelling, in particular when and how modelling can be reliably used. This is discussed in Government Office for Science (2018) and Calder et al. (2018), which also include a very useful checklist for policy actors who deal with modellers.
  • That the public develop some maturity about the uncertainties in scientific debate and conclusions. Preliminary results and critiques tend to be seized upon too early to support one side of a polarised debate, or models are rejected simply on the grounds that they are not 100% certain. We need to collectively develop ways of facing and living with uncertainty.
  • That the decision-making process is kept as open to input as possible, and that the modelling (and its limitations) is not used as an excuse to limit which voices are heard, or to narrow the debate to a purely technical one that excludes values (Aodha & Edmonds 2017).
  • That public funding bodies and journals should insist on researchers making their full code and documentation available to others for scrutiny, checking and further development (readers can help by signing the Open Modelling Foundation’s open letter and the campaign for Democratically Accountable Modelling’s manifesto).

Some Relevant Resources

  • CoMSeS.net — a collection of resources for computational model-based science, including a platform for publicly sharing simulation model code and documentation and forums for discussion of relevant issues (including one for Covid-19 models)
  • The Open Modelling Foundation — an international open science community that works to enable the next generation modelling of human and natural systems, including its standards and methodology.
  • The European Social Simulation Association — which is planning to launch some initiatives to encourage better modelling standards and facilitate access to data.
  • The Campaign for Democratic Modelling — which campaigns concerning the issues described in this article.

Notes

Note 1: As an example of this, see accounts of the relationship between the UK scientific advisory committees and the Government in the Financial Times and BuzzFeed.

References

Barton et al. (2020) Call for transparency of COVID-19 models. Science, Vol. 368(6490), 482-483. doi:10.1126/science.abb8637

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822. (see also http://cfpm.org/discussionpapers/236)

Calder, M., Craig, C., Culley, D., de Cani, R., Donnelly, C.A., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N., Hargrove, C., Hinds, D., Lane, D.C., Mitchell, D., Pavey, G., Robertson, D., Rosewell, B., Sherwin, S., Walport, M. & Wilson, A. (2018) Computational modelling for decision-making: where, why, what, who and how. Royal Society Open Science, 5:172096. doi:10.1098/rsos.172096

Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/

de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Government Office for Science (2018) Computational Modelling: Technological Futures. https://www.gov.uk/government/publications/computational-modelling-blackett-review

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Understanding the current COVID-19 epidemic: one question, one model

By the CoVprehension Collective

(A contribution to the: JASSS-Covid19-Thread)

On the evening of 16th March 2020, the French president, Emmanuel Macron, announced the start of a national lockdown for a period of 15 days, effective from noon the next day (17th March). On 18th March 2020 at 1:11 pm, the first email circulated within the MicMac team, who had been working for a few years on the micro-macro modelling of the spread of a disease across a transportation network. This email was the start of CoVprehension. After about a week of intense collective effort, the website was launched, with three questions answered. A month later, there were about fifteen questions on the website, and the group comprised nearly thirty members from French research institutions, drawn from a varied pool of disciplines, all contributing as volunteers from lockdown at home.

CoVprehension in principles

This rapid development stems from a very singular context, which is tricky to analyse given that the COVID-19 crisis is still unfolding. However, we can highlight a few fundamental principles guiding the project.

The first principle is undeniably a principle of action: to become actors in the situation ourselves first, but also to extend this invitation to readers of the website, by allowing them to run the simulations and change their parameters, and more broadly by suggesting ways to link their own actions to a global phenomenon that is hard to comprehend. This empowerment also touches upon principles of social justice and, in the longer term, of democracy in the face of this health crisis. By accompanying the process of social awareness, we aim to guide the audience towards free and informed consent (cf. the French code of public health) in confronting the disease. This first principle is spelled out on the CoVprehension website in the form of a list of objectives that the CoVprehension collective set themselves:

  • Comprehension (of the propagation of the virus and of the actions put in place)
  • Objectification (giving a more concrete shape to this event which is bigger than us and can be overwhelming)
  • Visualisation (showing the mechanisms at play)
  • Identification (of the essential principles and actions to put in place)
  • Do something (overcoming fears and anxieties to become actors in the epidemic)

The second founding principle is that of an interdisciplinary scientific collective formed on a voluntary basis. CoVprehension is self-organised and rests on three pillars: volunteering, collaborative work and the will to be useful during the crisis by offering a space for information, reflection and interaction with a large audience.

The third principle is agility and responsiveness. The main idea of the project is to answer questions that people ask, with short posts based on a model or on data analysis. This can only be done if the delay between question and answer remains short, which is a real challenge given the complexity of the subject, the pace at which scientific literature has been produced since the beginning of the crisis, and the large number of unknowns and uncertainties that characterise it.

The fourth principle, finally, is the autonomy of the groups that form to answer the questions. This allows for a multiplicity of perspectives and points of view, sometimes divergent. This necessity draws on the acknowledgement by the European simulation community that a lack of pluralism is even more harmful to the support of public decision-making than a lack of transparency.

A collaborative organisation and an interactive website

These four principles have led us, quite naturally, to favour a way of working that exploits short and frequent feedback loops and relies on suitable tools. Questions asked online through a Framasoft form are forwarded to all CoVprehension members, while a moderator is in charge of replying to them quickly and personally. Each question is entered into a Trello management board, which allows each member of the collective to pick the questions they want to contribute to and to follow their progression until publication. Collaboration and debate on each question take place on the VoIP application Discord. Model prototypes are mostly developed on the NetLogo platform (with some JavaScript exceptions). Finally, the whole project and website are hosted on GitHub.

The website itself (https://covprehension.org/en) is freely accessible online. Besides the posts answering questions, it contains a simulator to rerun and reproduce the simulations showcased in the posts, a page with scientific resources on the COVID-19 epidemic, a page presenting the project members and a link to the form allowing anyone to ask the collective a question.

On 28th April 2020, the collective counted 29 members (including 10 women): medical doctors, researchers, engineers and specialists in the fields of computer science, geography, epidemiology, mathematics, economics, data analysis, medicine, architecture and digital media production. The professional statuses of the team members vary (from PhD student to full professor, from intern to engineer, from lecturer to freelancer) while their skills complement each other (although a majority of them are complex systems modellers). The collective effort enables CoVprehension to scale up information collection, sharing and updating. This is also fuelled by the debates that take place when small teams first tackle a question. Such scaling up would otherwise only be possible in large epidemiology laboratories with massive funding. To increase visibility, the content of the website, initially all in French, is progressively being translated into English as new questions are published.

Simple simulation models

When a question requires a model, especially for the first questions, our choice has been to build simple models (cf. Question 0). Indeed, the objective of CoVprehension models is not to predict; it is rather to describe, explain and illustrate some aspects of the COVID-19 epidemic and its consequences for the population. KISS models (“Keep It Simple, Stupid!”; cf. Edmonds & Moss (2004) for the opposition between simple and “descriptive” models) seem better suited to our project. They can unveil broad tendencies and help develop intuitions about potential strategies to deal with the crisis, which can then be shared with a broad audience.
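
To make this concrete, here is a deliberately minimal, KISS-style sketch of such a model, written here in Python for compactness. It is illustrative only: our actual models are written in NetLogo, and every value below (number of agents, contact rate, transmission probability, recovery time) is an assumption chosen for demonstration, not a calibrated estimate.

import random

def simulate(n_agents=1000, contacts_per_step=8, p_transmit=0.05,
             recovery_time=14, steps=120, seed=1):
    """One stochastic run; returns the number of infected agents per step."""
    rng = random.Random(seed)
    # Each agent is 'S' (susceptible), 'R' (recovered), or an integer
    # counting the steps elapsed since it was infected.
    state = ['S'] * n_agents
    state[0] = 0  # a single initially infected agent
    curve = []
    for _ in range(steps):
        infected = [i for i, s in enumerate(state) if isinstance(s, int)]
        for i in infected:
            # Every infected agent meets a few randomly chosen others.
            for j in rng.sample(range(n_agents), contacts_per_step):
                if state[j] == 'S' and rng.random() < p_transmit:
                    state[j] = 0  # newly infected
        for i in infected:
            state[i] += 1
            if state[i] >= recovery_time:
                state[i] = 'R'  # recovered and immune
        curve.append(sum(isinstance(s, int) for s in state))
    return curve

if __name__ == "__main__":
    curve = simulate()
    print("peak infected:", max(curve), "on day", curve.index(max(curve)))

Even a reader with no modelling background can see the whole mechanism at a glance, which is exactly what the KISS posture is for.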

By choosing a KISS posture, we implicitly reject KIDS (“Keep It Descriptive, Stupid!”) postures in such crisis circumstances. Indeed, if the conditions and processes modelled were better informed and known, we could simulate precise dynamics and generate a series of predictions and forecasts. This is what N. Ferguson’s team did, for instance, with a model initially developed for the H5N1 flu in Asia (Ferguson et al., 2005), which was used heavily to inform public decision-making in the first days of the epidemic in the United Kingdom. Building and calibrating such models takes an awfully long time (Ferguson’s project dates back to 2005) and requires teams and recurring funding that are almost impossible for most groups to secure nowadays. At the moment, we think that the uncertainty is too great, and that the crisis and the questions people have do not always necessitate the modelling of complex processes: a large part of the social questions raised can be answered without describing the mechanisms in such detail. It is possible that this will change as we get information from other scientific disciplines. For now, demonstrating that even simple models are very sensitive to many elements that remain uncertain shows that scientific discourse would gain by remaining humble: the website reveals how little we know about the future consequences of the epidemic and of the political decisions made to tackle it.
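
As a rough illustration of this sensitivity, the lines below re-run the toy model from the previous sketch, varying only the per-contact transmission probability, a quantity that was genuinely uncertain at the time (the swept values themselves are arbitrary). Even this single assumption shifts the height and timing of the epidemic peak considerably, which is precisely the argument for humility.

# Assumes simulate() from the previous sketch; the swept values are
# illustrations of parameter uncertainty, not empirical estimates.
for p in (0.03, 0.04, 0.05, 0.06):
    curve = simulate(p_transmit=p)
    peak = max(curve)
    print(f"p_transmit={p:.2f} -> peak infected {peak:4d} on day {curve.index(peak)}")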

Feedback on the questions received and answered

By the end of April, twenty-seven questions had been asked of the CoVprehension collective through the online form. Seven of them were not really questions, but rather remarks and comments from people supporting the initiative. Some questions happen to have been asked by colleagues and relatives. The intended outreach has not been fully realised, since the website seems to reach people who are already capable of looking for information on the internet. This was to be expected given the circumstances. Everyone who has done some scientific outreach knows how hard it is to reach people who are not already aware of, or interested in, scientific matters in the first place. Some successful initiatives (like “les petits débrouillards” or “la main à la pâte” in France) spread scientific knowledge related to recent publications in collaboration with researchers, but they are much better equipped to do so (since, unlike us, they do not rely mostly on institutional portals). This large selection bias in our audience (almost impossible to overcome, unless we were to create some specific buzz, whose resulting influx of new questions we could not currently handle given the size and organisation of the collective) means that our website has so far been protected from trolling. However, it might well be used within educational programmes, for example, where STEM teachers could have students use the various simulators in a question-and-answer type of game.

Figure 1 shows that the majority of questions are taken up by small interdisciplinary teams of two or three members. The most frequent collaborations are between geographers and computer scientists, often joined by epidemiologists and mathematicians, and more recently by economists. Most topics require the team to build and analyse a simulation model in order to answer the question. The timing of team formation reflects the arrival of new members in the early days of the project, which led to a large number of questions being tackled simultaneously. Since April, the rhythm has slowed, reflecting the increasing complexity of questions, models and answers, but also the marginal “cost” of this investment with respect to the other projects and responsibilities of the researchers involved.

Figure 1. Visualisation of the questions tackled by CoVprehension.

Initially, the website prioritised questions on simulation and aggregation effects, specifically those connected with diffusion models. For instance, the first questions aimed essentially at showing the most tautological results: with simple interaction rules, we illustrated logically expected effects. These results are nevertheless interesting because, while trivial to simulation practitioners, they serve to convince lay readers that they are able to follow the logic:

  • Reducing the density of interactions reduces the spread of the virus, and therefore a lockdown may alter the infection curve (cf. Question 2 and Question 3).
  • By simply adding a variable for the number of hospital beds, we can visualise the impact of lockdown on hospital congestion (cf. Question 7; see the sketch after this list).
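
As a sketch of that second illustration, one can reuse simulate() from the earlier sketch and assume, purely hypothetically, that a fixed share of the currently infected need a hospital bed, then compare that demand with a fixed capacity under “normal” and “lockdown” contact rates. All numbers here are assumptions for demonstration, not the parameters used in Question 7.

# Reuses simulate() from the earlier sketch. BEDS and NEED_BED are
# hypothetical values chosen for illustration only.
BEDS = 50        # available hospital beds (assumption)
NEED_BED = 0.05  # assumed share of infected needing a bed

for label, contacts in (("no lockdown", 8), ("lockdown", 3)):
    curve = simulate(contacts_per_step=contacts)
    demand = [round(c * NEED_BED) for c in curve]
    days_over = sum(d > BEDS for d in demand)
    print(f"{label:11s}: peak bed demand {max(demand):3d} "
          f"(capacity {BEDS}), days over capacity: {days_over}")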

When more elaborate questions were tackled (and to rationalise the debates):

  • Some alternative policies have been highlighted (the Swedish case: Question 13; the lifting of lockdown: Question 9);
  • Some indicators with contradictory implications have been discussed, which shows the complexity of political decisions and leads readers to question the relevance of some of these indicators (cf. Question 6);
  • The hypotheses (behavioural ones in particular) have been extensively discussed, which highlights the ways in which a model deviates from, and simplifies, what it represents (cf. Question 15).

More than half of the questions asked could not be answered through modelling. In the first phase of the project, we personally replied to these questions and directed the person towards robust scientific websites or articles where their question could be better answered. The current evolution of the project is more fundamental: new researchers from complementary disciplines have shown interest in the work done so far and are now integrated into the team (including, for instance, two medical doctors working in COVID-19 centres). This will broaden the scope of the questions tackled by the team from now on.

Our work fits into a form of education in critical thinking about formal models, one that has long been recognised as necessary to a technical democracy (Stengers, 2017). At this point, the website can be considered both as a result in itself and as a pilot that could serve as a model for further initiatives.

Conclusion

Feedback on the CoVprehension project has mostly been positive, but the project is not exempt from limits and weaknesses. Firstly, the necessity of a prompt response has been detrimental to our capacity to fully explore the different models, to evaluate their robustness and to look for unexpected results. Model validation is unglamorous, slow and hard to communicate; it is nevertheless crucial for assessing the credibility of models and results. We are now trying to explore our models in parallel. Secondly, the website may suggest a homogeneity of perspectives and a lack of debate about how questions are to be answered. These debates do take place during the assessment of questions, but so far they remain hidden from readers. They show indirectly in the way some themes appear in different answers, treated from different angles by different teams (for example, the lockdown, treated in Questions 6, 7, 9 and 14). We are considering publishing alternative answers to a given question in order to make this divergence visible. Finally, the project faces a significant challenge: that of continuing to exist alongside its members’ other activities while the number of members increases. The efforts in management, research, editing, publishing and translation have to be maintained while transaction costs go up as the size and diversity of the collective increase, as the debates become more and more specific and take place on different platforms… and while new questions keep arriving!

References

Edmonds, B., & Moss, S. (2004). From KISS to KIDS–an ‘anti-simplistic’ modelling approach. In International workshop on multi-agent systems and agent-based simulation (pp. 130-144). Springer, Berlin, Heidelberg. doi:10.1007/978-3-540-32243-6_11

Ferguson, N. M., Cummings, D. A., Cauchemez, S., Fraser, C., Riley, S., Meeyai, A. & Burke, D. S. (2005). Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature, 437(7056), 209-214. doi:10.1038/nature04017

Stengers I. (2017). Civiliser la modernité ? Whitehead et les ruminations du sens commun, Dijon, Les presses du réel. https://www.lespressesdureel.com/EN/ouvrage.php?id=3497


the CoVprehension Collective (2020) Understanding the current COVID-19 epidemic: one question, one model. Review of Artificial Societies and Social Simulation, 30th April 2020. https://rofasss.org/2020/04/30/covprehension/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)