
Good Modelling Takes a Lot of Time and Many Eyes

By Bruce Edmonds

(A contribution to the: JASSS-Covid19-Thread)

It is natural to want to help in a crisis (Squazzoni et al. 2020), but it is important to do something that is actually useful rather than just ‘adding to the noise’. Usefully modelling disease spread within complex societies is not easy to do – which essentially means there are two options:

  1. Model it in a fairly abstract manner to explore ideas and mechanisms, but without the empirical grounding and validation needed to reliably support policy making.
  2. Model it in an empirically testable manner with a view to answering some specific questions and possibly inform policy in a useful manner.

Which route one takes depends on the modelling purpose one has in mind (Edmonds et al. 2019). Both routes are legitimate as long as one is clear about what the model can and cannot do. The dangers come when there is confusion – taking the first route whilst giving policy actors the impression one is taking the second risks deceiving people and giving false confidence (Edmonds & Adoha 2019, Elsenbroich & Badham 2020). Here I am only discussing the second, empirically ambitious route.

Some of the questions that policy-makers might want to ask include: what might happen if we close the urban parks, allow children in a specific age range to go to school one day a week, cancel 75% of the intercity trains, allow people to visit beauty spots or sick relatives in hospital, or test people as they recover and give them a certificate that allows them to go back to work?

To understand what might happen in these scenarios would require an agent-based model in which agents make the kind of mundane, everyday decisions about where to go and who to meet, such that the patterns and outputs of the model are consistent with known data (possibly following the ‘Pattern-Oriented Modelling’ of Grimm & Railsback 2012); a deliberately toy sketch of this kind of model is given after the list below. Such a model is currently lacking, and building it would require:

  1. A long-term, iterative development (Bithell 2018), with many cycles of model development followed by empirical comparison and data collection. This means that this kind of model might be more useful for the next epidemic rather than the current one.
  2. A collective approach rather than one based on individual modellers. In any very complex model it is impossible for one person to understand it all – there are bound to be small errors, and programmed mechanisms will interact subtly with others. As Siebers & Venkatesan (2020) pointed out, this means collaborating with people from other disciplines (which always takes time to make work), but it also means an open approach where lots of modellers routinely inspect, replicate, pull apart, critique and play with other modellers’ work – without anyone getting upset or feeling criticised. This involves an institutional and normative embedding of good modelling practice (as discussed in Squazzoni et al. 2020) but also requires a change in attitude – from individual to collective achievement.
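To make the kind of model being described more concrete, here is a deliberately toy sketch (in Python; every structure and parameter is invented for illustration and is not drawn from any real study). Agents follow a mundane daily routine – household, workplace or school, an optional park visit – and infection spreads between agents who share a location. The two requirements above are precisely what separates a throwaway sketch like this from a model that could actually support the policy questions listed earlier.

```python
# Purely illustrative sketch (invented parameters, not a policy model): agents
# follow a daily routine (work/school -> optional park -> home) and infection
# spreads between agents who share a location.
import random

random.seed(1)

N_AGENTS = 200
N_DAYS = 60
P_TRANSMIT = 0.03   # per infectious co-occupant per day (assumed)
PARKS_OPEN = True   # the kind of lever a policy-maker might ask about

class Agent:
    def __init__(self, i):
        self.home = i // 4                # roughly 4 people per household
        self.work = random.randrange(20)  # workplace/school id
        self.state = "S"                  # S, E, I or R

agents = [Agent(i) for i in range(N_AGENTS)]
agents[0].state = "I"  # seed one infection

def mix(places):
    """Expose susceptibles who share a location with infectious agents."""
    for members in places.values():
        n_inf = sum(a.state == "I" for a in members)
        if n_inf:
            for a in members:
                if a.state == "S" and random.random() < P_TRANSMIT * n_inf:
                    a.state = "E"

for day in range(N_DAYS):
    places = {}
    for a in agents:                                   # daytime: work/school
        places.setdefault(("work", a.work), []).append(a)
    if PARKS_OPEN:                                     # some agents visit a park
        for a in random.sample(agents, 30):
            places.setdefault(("park", 0), []).append(a)
    for a in agents:                                   # evening: households
        places.setdefault(("home", a.home), []).append(a)
    mix(places)
    for a in agents:                                   # crude disease progression
        if a.state == "I" and random.random() < 0.15:
            a.state = "R"
        elif a.state == "E":
            a.state = "I"

print(sum(a.state != "S" for a in agents), "of", N_AGENTS, "ever infected")
```

Even at this toy scale, flipping PARKS_OPEN changes the outcome; whether it changes it in the way the real world would is exactly what the iterative, data-confronted development described above is needed for.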

Both are necessary if we are to build the modelling infrastructure that may allow us to model policy options for the next epidemic. We will need to start now if we are to be ready because it will not be easy.

References

Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb

Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidson, P. & Verhargen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H., & Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html> doi: 10.18564/jasss.3993

Elsenbroich, C. and Badham, J. (2020) Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/

Grimm, V., & Railsback, S. F. (2012). Pattern-oriented modelling: a ‘multi-scope’ for predictive systems ecology. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1586), 298-310. doi:10.1098/rstb.2011.0180

Siebers, P-O. and Venkatesan, S. (2020) Get out of your silos and work together. Review of Artificial Societies and Social Simulation, 8th April 2020. https://rofasss.org/2020/0408/get-out-of-your-silos

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/


 

Focussing on our Strengths

By Corinna Elsenbroich and Jennifer Badham

(A contribution to the: JASSS-Covid19-Thread)

Understanding a situation is the precondition for making good decisions. In the extraordinary current situation of a global pandemic, the lack of consensus about a good decision path is evident in the variety of government measures in different countries, the analyses of decisions made, and the debates on how the future will look. What is also clear is how little we understand the situation and the impact of policy choices. We are faced with the complexity of social systems, our ability to only ever partially understand them, and the political pressure to make decisions on partial information.

The JASSS call to arms (Squazzoni et al. 2020) points out the necessity for the ABM community to produce relevant models for this kind of emergency situation. Whilst we wholly agree with the sentiment that ABM can contribute to the debate and to decision making, we would also like to point out some of the potential pitfalls inherent in the misapplication and misinterpretation of ABM.

  1. Small change, big difference: Given the complexity of the real world, there will be aspects that are better understood and aspects that are less well understood. Trying to produce a very large model encompassing several different aspects might be counter-productive, as we will mix well-understood aspects with highly hypothetical knowledge. It might be better to have different, smaller models – of the epidemic, the economy, human behaviour etc. – each of which can be taken with its own level of validation and veracity and be developed by modellers with subject matter understanding, theoretical knowledge and familiarity with relevant data.
  2. Carving up complex systems: If separate models are developed, then we are necessarily making decisions about the boundaries of our models. For a complex system any carving up can separate interactions that are important, for example the way in which fear of the epidemic can drive protective behaviour thereby reducing contacts and limiting the spread. While it is tempting to think that a “bigger model”, a more encompassing one, is necessarily a better carving up of the system because it eliminates these boundaries, in fact it simply moves them inside the model and hides them.
  3. Policy decisions are moral decisions: The decision of what is the right course to take is a decision for the policy maker with all the competing interests and interdependencies of different aspects of the situation in mind. Scientists are there to provide the best information for the understanding of a situation, and models can be used to understand consequences of different courses of action and the uncertainties associated with that action. Models can be used to inform policy decisions but they must not obfuscate that it is a moral choice that has to be made.
  4. Delaying a decision is making a decision to do nothing: Like any other policy option, a decision to maintain the status quo while gathering further information has its own consequences. The Call to Action (paragraph 1.6) refers to public pressure for immediate responses, but this underplays the pressure arising from other sources. It is important to recognise the logical fallacy: “We must do something. This is something. Therefore we must do this.” However, if there are options available that are clearly better than doing nothing, then it is equally illogical to do nothing.

Instead of trying to compete with existing epidemiological models, ABM could focus on the things it is really good at:

  1. Understanding uncertainty in complex systems resulting from heterogeneity, social influence, and feedback. For the case at hand this means not building yet another model of the epidemic spread – there are excellent SEIR models doing that – but exploring how heterogeneity in the infected population (such as in contact patterns or personal behaviour in response to infection) can influence the spread (a toy illustration follows this list). Other possibilities include social effects such as how fear might spread and influence behaviours such as panic buying or compliance with the lockdown.
  2. Build models for the pieces that are missing and couple these to the pieces that exist, thereby enriching the debate about the consequences of policy options by making those connections clear.
  3. Visualise and communicate counterintuitive and difficult-to-understand developments. Right now people are struggling to understand exponential growth, the dynamics of social distancing, the consequences of an overwhelmed health system, and the delays between actions and their consequences. It is well established that such fundamentals of systems thinking are difficult (Booth Sweeney & Sterman 2000). Models such as the simple ones in the Washington Post (Stevens 2020) or less abstract ones like the routine daily-activity model of Vermeulen et al. (2020) do a wonderful job at this, allowing people to understand how their individual behaviour will contribute to the spread or containment of a pandemic.
  4. Highlight missing data and inform future collection. This unfolding pandemic is being assessed with highly compromised data; for example, reported infection rates across countries are largely determined by how much testing is done. Death rates might be the most comparable statistic, but even there we face reporting delays and omissions. Trying to build models is one way to identify what needs to be known to properly evaluate the consequences of policy options.
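To illustrate the first point in this list, the sketch below (Python; all numbers invented, and deliberately not a model of COVID-19) compares two populations with the same mean number of daily contacts, one homogeneous and one strongly heterogeneous. The only point being made is that the distribution of contacts, not just its mean, can change the outcome – the kind of effect that individual- or agent-based models are well placed to expose.

```python
# Crude individual-based SIR used only to illustrate that contact heterogeneity
# matters even when the mean contact rate is identical (invented numbers).
import random

random.seed(2)

def final_size(contact_rates, p_inf=0.05, days=120):
    """Each infectious person meets `contact_rates[i]` random others per day."""
    n = len(contact_rates)
    state = ["S"] * n
    state[0] = "I"
    for _ in range(days):
        infectious = [i for i, s in enumerate(state) if s == "I"]
        if not infectious:
            break
        for i in infectious:
            for _ in range(contact_rates[i]):
                j = random.randrange(n)
                if state[j] == "S" and random.random() < p_inf:
                    state[j] = "I"
            if random.random() < 0.1:   # recovery
                state[i] = "R"
    return sum(s != "S" for s in state)

n = 1000
homogeneous = [8] * n                              # everyone: 8 contacts/day
heterogeneous = [2] * (n // 2) + [14] * (n // 2)   # same mean (8), very unequal

print("final size, homogeneous contacts  :", final_size(homogeneous))
print("final size, heterogeneous contacts:", final_size(heterogeneous))
```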

The problem we are faced with in this pandemic is one of complexity, not one of ABM, and we must ensure we are honouring the complexity rather than just paying lip service to it. We agree that model transparency, open data collection and interdisciplinary research are important, and want to ensure that all scientific knowledge is used in the best possible way to ensure a positive outcome of this global crisis.

But it is also important to consider the comparative advantage of agent-based modellers. Yes, we have considerable commitment to, and expertise in, open code and data. But so do many other disciplines. Health information is routinely collected in national surveys and administrative datasets, and governments have a great deal of established expertise in health data management. Of course, our individual skills in coding models, data visualisation, and relevant theoretical knowledge can be offered to individual projects as required. But we believe our institutional response should focus on activities where other disciplines are less well equipped, applying systems thinking to understand and communicate the consequences of uncertainty and complexity.

References

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, E., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation 23(2), 10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Booth Sweeney, L., & Sterman, J. D. (2000). Bathtub dynamics: initial results of a systems thinking inventory. System Dynamics Review: The Journal of the System Dynamics Society, 16(4), 249-286.

Stevens, H. (2020) Why outbreaks like coronavirus spread exponentially, and how to “flatten the curve”. Washington Post, 14th of March 2020. (accessed 11th April 2020) https://www.washingtonpost.com/graphics/2020/world/corona-simulator/

Vermeulen, B.,  Pyka, A. and Müller, M. (2020) An agent-based policy laboratory for COVID-19 containment strategies, (accessed 11th April 2020) https://inno.uni-hohenheim.de/corona-modell


Elsenbroich, C. and Badham, J. (2020) Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/


 

Get out of your silos and work together!

By Peer-Olaf Siebers and Sudhir Venkatesan

(A contribution to the: JASSS-Covid19-Thread)

The JASSS position paper ‘Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action’ (Squazzoni et al 2020) calls on the scientific community to improve the transparency, access, and rigour of their models. A topic that we think is equally important, and should be part of this list, is the quest for more “interdisciplinarity”: scientific communities working together to tackle the difficult job of understanding the complex situation we are currently in and of being able to give advice.

The modelling/simulation community in the UK (and more broadly) tends to work in silos. The two big communities that we have been exposed to are the epidemiological modelling community and the social simulation community. They do not usually collaborate with each other despite working on very similar problems and using similar methods (e.g. agent-based modelling). They publish in different journals, use different software, attend different conferences, and sometimes even use different terminology to refer to the same concepts.

The UK pandemic response strategy (Gov.UK 2020) is guided by advice from the Scientific Advisory Group for Emergencies (SAGE), which in turn comprises three independent expert groups: SPI-M (epidemic modellers), SPI-B (experts in behaviour change from psychology, anthropology and history), and NERVTAG (clinicians, epidemiologists, virologists and other experts). Of these, modelling from member SPI-M institutions has played an important role in informing the UK government’s response to the ongoing pandemic (e.g. Ferguson et al 2020). Current members of SPI-M belong to what could be considered the ‘epidemic modelling community’. Their models tend to be heavily data-dependent, which is justifiable given that most of their modelling focuses on viral transmission parameters. However, this emphasis on empirical data can sometimes lead them either not to model behaviour change at all or to model it in a highly stylised fashion, although more examples of epidemic-behaviour models have appeared in the recent epidemiological literature (e.g. Verelst et al 2016; Durham et al 2012; van Boven et al 2008; Venkatesan et al 2019). Yet, among the modelling work informing the current response to the ongoing pandemic, computational models of behaviour change are conspicuously missing. This, from what we have seen, is where the ‘social simulation’ community can really contribute its expertise and modelling methodologies in a very valuable way. A good resource for epidemiologists wanting to find out more about the wide spectrum of modelling ideas is the Social Simulation Conference proceedings and programmes (e.g. SSC2019 2019). But unfortunately, the public health community, including policymakers, are either unaware of these modelling ideas or unsure of how they are relevant to them.
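As a purely hypothetical illustration of what an ‘epidemic-behaviour’ model couples together (this is not any SPI-M or published model, and every parameter is invented), the sketch below adds a single behavioural feedback to a textbook SIR difference equation: the effective contact rate falls as prevalence rises, standing in for fear-driven protective behaviour. Even this minimal feedback changes the height and timing of the peak, which is the kind of mechanism that purely transmission-focused models either omit or hard-code.

```python
# Minimal SIR with endogenous behaviour change (illustrative parameters only):
# the contact rate beta is reduced as prevalence i rises, standing in for
# fear-driven protective behaviour such as voluntary distancing.
def sir_with_behaviour(beta0=0.4, gamma=0.1, fear=20.0, days=200):
    s, i, r = 0.999, 0.001, 0.0
    peak = 0.0
    for _ in range(days):
        beta = beta0 / (1.0 + fear * i)   # behavioural feedback on contacts
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r

for fear in (0.0, 20.0, 100.0):
    peak, final = sir_with_behaviour(fear=fear)
    print(f"fear={fear:5.1f}  peak prevalence={peak:.3f}  final size={final:.3f}")
```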

As pointed out in a recent article (Adam 2020), one important concern with how behaviour change has been modelled in the SPI-M COVID-19 models is the assumption that changes in contact rates resulting from a lockdown in the UK and the USA will mimic those obtained from surveys performed in China, which is unlikely to be valid given the large political and cultural differences between these societies. For the immediate COVID-19 response models, requiring cross-disciplinary validation of all models that feed into policy may be a valuable step towards more credible models.

Effective collaboration between academic communities relies on there being a degree of familiarity, and trust, with each other’s work, and much of this will need to be built up during inter-pandemic periods (i.e. “peace time”). In the long term, publishing and presenting in each other’s journals and conferences (i.e. giving other academic communities the opportunity to peer-review a piece of modelling work) could help foster a more collaborative environment, ensuring that we are in a much better position to leverage all available expertise during a future emergency. We should aim to take the best from across modelling communities and work together to come up with hybrid modelling solutions that provide insight by delivering statistics as well as narratives (Moss 2020). Working in silos is both unhelpful and inefficient.

References

Adam D (2020) Special report: The simulations driving the world’s response to COVID-19. How epidemiologists rushed to model the coronavirus pandemic. Nature – News Feature. https://www.nature.com/articles/d41586-020-01003-6 [last accessed 07/04/2020]

Durham DP, Casman EA (2012) Incorporating individual health-protective decisions into disease transmission models: A mathematical framework. Journal of The Royal Society Interface. 9(68), 562-570

Ferguson N, Laydon D, Nedjati Gilani G, Imai N, Ainslie K, Baguelin M, Bhatia S, Boonyasiri A, Cucunuba Perez Zu, Cuomo-Dannenburg G, Dighe A (2020) Report 9: Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf [last accessed 07/04/2020]

Gov.UK (2020) Scientific Advisory Group for Emergencies (SAGE): Coronavirus response. https://www.gov.uk/government/groups/scientific-advisory-group-for-emergencies-sage-coronavirus-covid-19-response [last accessed 07/04/2020]

Moss S (2020) “SIMSOC Discussion: How can disease models be made useful? “, Posted by Scott Moss, 22 March 2020 10:26 [last accessed 07/04/2020]

Squazzoni F, Polhill JG, Edmonds B, Ahrweiler P, Antosz P, Scholz G, Chappin E, Borit M, Verhagen H, Giardini F, Gilbert N (2020) Computational models that matter during a global pandemic outbreak: A call to action, Journal of Artificial Societies and Social Simulation, 23(2), 10

SSC2019 (2019) Social simulation conference programme 2019. https://ssc2019.uni-mainz.de/files/2019/09/ssc19_final.pdf [last accessed 07/04/2020]

van Boven M, Klinkenberg D, Pen I, Weissing FJ, Heesterbeek H (2008) Self-interest versus group-interest in antiviral control. PLoS One. 3(2)

Venkatesan S, Nguyen-Van-Tam JS, Siebers PO (2019) A novel framework for evaluating the impact of individual decision-making on public health outcomes and its potential application to study antiviral treatment collection during an influenza pandemic. PLoS One. 14(10)

Verelst F, Willem L, Beutels P (2016) Behavioural change models for infectious disease transmission: A systematic review (2010–2015). Journal of The Royal Society Interface. 13(125)


Siebers, P-O. and Venkatesan, S. (2020) Get out of your silos and work together. Review of Artificial Societies and Social Simulation, 8th April 2020. https://rofasss.org/2020/0408/get-out-of-your-silos


 

Go for DATA

By Gérard Weisbuch

(A contribution to the: JASSS-Covid19-Thread)

I totally share the view on the importance of DATA. What we need are data-driven models, and the reference to weather forecasting and data assimilation is very appropriate. This probably implies the establishment of a center for epidemic forecasting similar to the weather-forecasting centres in Reading (UK) or at Météo-France in Toulouse. The persistence of such an institution in “normal times” would be hard to justify, but its operation could be organised like a military reserve.

Let me stress three points.

  1. Models are needed not only by national policy-makers but by a wide range of decision-makers such as hospitals and even households. These meso-scale units face hard supply problems: hospitals have to manage supplies of material, consumables and personnel to meet hard-to-predict demand from patients. The same holds true for households: e.g. how to plan errands in view of the dynamics of the epidemic? All the supply-chain issues also exist for firms, including the chain of deliveries of consumables to hospitals. Hence the importance of data made available by a center for epidemic forecasting.
  2. The JASSS call (Squazzoni et al. 2020) stresses the importance of DATA, but does not provide many clues about how to get them. One can hope that some institutions will provide them, but my limited experience is that you have to dig for them: Do It Yourself is a leitmotiv of the Big Data industry. I am thinking of processing patient records to build models of the disease, or private diaries and tweets to model individual behaviour. One then needs collaboration from the NLP (Natural Language Processing) community.
  3. The public and even the media have a very low understanding of dynamical systems and of exponential growth. We have known since D. Kahneman’s book “Thinking, Fast and Slow” (2011) that we have a hard time reasoning about probabilities, and this also applies to dynamics and exponentials. We face situations that mandate different actions at different stages of the epidemic, such as doing errands or, for town dwellers, moving to the countryside. The issue is even more difficult for firms, which have to manage employment. Simple models and results from experimental cognitive science should be brought to journalists and the general public concerning these issues, in the style of Kahneman if possible (a small worked example of exponential growth follows this list).
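As referred to in the last point, here is a small worked example (the doubling time is assumed purely for illustration) of the arithmetic that defeats linear intuition: with a fixed doubling time, delaying action multiplies the problem rather than adding to it.

```python
# Worked example with an assumed doubling time: a one-week delay multiplies
# case numbers by about five, it does not add "one more week's worth".
import math

doubling_time = 3.0                        # days (assumed for illustration)
growth_rate = math.log(2) / doubling_time  # continuous growth rate

def cases(initial, days):
    return initial * math.exp(growth_rate * days)

print("cases today     :", round(cases(1000, 0)))
print("cases in 7 days :", round(cases(1000, 7)))    # ~5,000
print("cases in 14 days:", round(cases(1000, 14)))   # ~25,000
```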

References

Kahneman, D. (2011). Thinking, Fast and Slow. Allen Lane.

Squazzoni, Flaminio, Polhill, J. Gareth, Edmonds, Bruce, Ahrweiler, Petra, Antosz, Patrycja, Scholz, Geeske, Chappin, Émile, Borit, Melania, Verhagen, Harko, Giardini, Francesca and Gilbert, Nigel (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Weisbuch, G. (2020) Go for DATA. Review of Artificial Societies and Social Simulation, 7th April 2020. https://rofasss.org/2020/04/07/go-for-data/


 

Predicting Social Systems – a Challenge

By Bruce Edmonds, Gary Polhill and David Hales

(Part of the Prediction-Thread)

There is a lot of pressure on social scientists to predict. Not only is an ability to predict implicit in all requests to assess or optimise policy options before they are tried, but prediction is also the “gold standard” of science. However, there is a debate among modellers of complex social systems about whether this is possible to any meaningful extent. In this context, the aim of this paper is to issue the following challenge:

Are there any documented examples of models that predict useful aspects of complex social systems?

To do this the paper will:

  1. define prediction in a way that corresponds to what a wider audience might expect of it
  2. give some illustrative examples of prediction and non-prediction
  3. request examples where the successful prediction of social systems is claimed
  4. and outline the aspects on which these examples will be analysed

About Prediction

We start by defining prediction, taken from (Edmonds et al. 2019). This is a pragmatic definition designed to encapsulate common sense usage – what a wider public (e.g. policy makers or grant givers) might reasonably expect from “a prediction”.

By ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model.

Let us clarify the language in this.

  • It has to be reliable. That is, one can rely upon its predictions as they are made – a model that predicts only erratically and occasionally is no help, since one does not know whether to believe any particular prediction. This usually means that (a) it has made successful predictions for several independent cases and (b) the conditions under which it works are (roughly) known.
  • What is predicted has to be unknown at the time of prediction. That is, the prediction has to be made before it is verified. Predicting known data (as when a model is checked on out-of-sample data) is not sufficient [1]. Nor is the practice of looking for phenomena that are consistent with the results of a model after those results have been generated (since this process ignores all the phenomena that are not consistent with the model).
  • What is being predicted is well defined. That is, how to use the model to make a prediction about observed data is clear. An abstract model that is merely suggestive – one that appears to predict phenomena, but only in a vague and undefined manner where one has to invent the mapping between model and data to make it work – may be useful as a way of thinking about phenomena, but this is different from empirical prediction.
  • Which aspects of the data are being predicted is left open. As Watts (2014) points out, prediction is not restricted to point numerical predictions of some measurable value but could concern a wider pattern. Examples of this include: a probabilistic prediction, a range of values, a negative prediction (this will not happen), or a second-order characteristic (such as the shape of a distribution or a correlation between variables). What is important is that (a) this is a useful characteristic to predict and (b) it can be checked by an independent actor. Thus, for example, when predicting a value, how accurate that prediction needs to be depends on its use.
  • The prediction has to use the model in an essential manner. Claiming to predict something obviously inevitable, in a way that does not use the model, is insufficient – the model has to distinguish, at the time, which of the possible outcomes is being predicted. (A small illustration of checking predictions against these criteria follows this list.)
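The following made-up bookkeeping example (all numbers invented) illustrates what these criteria demand in practice: each ‘prediction’ is a range stated before the observation, it is well defined enough for anyone to check, and reliability is judged over several independent cases rather than a single lucky hit.

```python
# Invented example of checking range predictions that were stated in advance
# against observations that only became available later.
predictions = [              # (case, predicted low, predicted high)
    ("case A", 120, 180),
    ("case B", 40, 90),
    ("case C", 200, 260),
    ("case D", 10, 35),
]
observed = {"case A": 150, "case B": 95, "case C": 240, "case D": 20}

hits = 0
for case, low, high in predictions:
    ok = low <= observed[case] <= high
    hits += ok
    print(f"{case}: predicted [{low}, {high}], observed {observed[case]}, "
          f"{'hit' if ok else 'miss'}")

print(f"hit rate over independent cases: {hits}/{len(predictions)}")
```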

Thus, prediction is different from other kinds of scientific/empirical uses, such as description and explanation (Edmonds et al. 2019). Some modellers use “prediction” to mean any output from a model, regardless of its relationship to any observation of what is being modelled [2]. Others use “prediction” for any empirical fitting of data, regardless of whether that data is known beforehand. However, here we wish to be clearer and avoid any “post-truth” softening of the meaning of the word, for two reasons: (a) distinguishing different kinds of model use is crucial in matters of model checking or validation, and (b) these “softer” kinds of empirical purpose will simply confuse the wider public when we talk to them about “prediction”. One suspects that modellers have accepted these other meanings because it then allows them to claim they can predict (Edmonds 2017).

Some Examples

Nate Silver and his team aim to predict future social phenomena, such as the results of elections and the outcomes of sports competitions. He correctly predicted the outcome in all 50 states in Obama’s 2012 re-election before it happened. This is a data-hungry approach, which involves the long-term development of simulations that carefully establish what can be inferred from the available data, with repeated trial and error. The forecasts are probabilistic and repeated many times. As well as making predictions, his unit tries to establish the level of uncertainty in those predictions – being honest about the probability of those predictions coming about given the likely levels of error and bias in the data. These models are not agent-based but mostly statistical in nature, so it is debatable whether the phenomena are being treated as complex systems – the approach certainly does not use any theory from complexity science. His book (Silver 2012) describes the approach. Post hoc analysis of predictions – explaining why they worked or not – is kept distinct from the predictive models themselves: such analysis may inform changes to the predictive model but is not then incorporated into it, so it remains an independent and effective check.

Many models in economics and ecology claim to “predict” but, on inspection, this only means there is a fit to some empirical data. For example, Meese & Rogoff (1983) looked at 40 econometric models that claimed to predict certain exchange-rate time series. However, 37 of the 40 models failed completely when tested on newly available data from the same time series they claimed to predict. Clearly, although presented as predictive models, they could not predict unknown data. Although we do not know for sure, presumably these models had been (explicitly or implicitly) fitted to the out-of-sample data, because the out-of-sample data was already known to the modellers. That is, if a model failed to fit the out-of-sample data when tested, it was adjusted until it did work, or alternatively, only those models that fitted the out-of-sample data were published.
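The selection effect described above can be shown with a toy experiment on synthetic data (this has nothing to do with the actual models Meese & Rogoff examined): if many candidate models are tried and only the one that best fits the already-visible ‘out-of-sample’ data is kept or published, its apparent accuracy tells us almost nothing about genuinely unknown data.

```python
# Toy demonstration of selection on visible "out-of-sample" data: the selected
# model looks good on the data it was chosen against, but is no better than
# average on data nobody had yet.
import random

random.seed(4)

visible = [random.gauss(0, 1) for _ in range(10)]   # data the modeller can see
future = [random.gauss(0, 1) for _ in range(10)]    # genuinely unknown data

def mse(model, data):
    return sum((x - p) ** 2 for x, p in zip(data, model)) / len(data)

# 500 candidate "models": each is just a fixed sequence of guesses.
candidates = [[random.gauss(0, 1) for _ in range(10)] for _ in range(500)]

best = min(candidates, key=lambda m: mse(m, visible))  # chosen on visible data
avg_future = sum(mse(m, future) for m in candidates) / len(candidates)

print("selected model, error on visible data:", round(mse(best, visible), 3))
print("selected model, error on future data :", round(mse(best, future), 3))
print("typical model,  error on future data :", round(avg_future, 3))
```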

The Challenge

The challenge is envisioned as happening like this.

  1. We publicise this paper, requesting that people send us examples of prediction or near-prediction of complex social systems, with pointers to the appropriate documentation.
  2. We collect these and analyse them according to the characteristics and questions described below.
  3. We will post some interim results in January 2020 [3], in order to prompt more examples and to stimulate discussion. The final deadline for examples is the end of March 2020.
  4. We will publish the list of all the examples sent to us on the web, and present our summary and conclusions at Social Simulation 2020 in Milan and have a discussion there about the nature and prospects for the prediction of complex social systems. Anyone who contributed an example will be invited to be a co-author if they wish to be so-named.

How suggestions will be judged

For each suggestion, a number of answers will be sought – namely to the following questions:

  • What are the papers or documents that describe the model?
  • Is there an explicit claim that the model can predict (as opposed to a claim that it might do so in the future)?
  • What kind of characteristics are being predicted (number, probabilistic, range…)?
  • Is there evidence of a prediction being made before the prediction was verified?
  • Is there evidence of the model being used for a series of independent predictions?
  • Were any of the predictions verified by a team that is independent of the one that made the prediction?
  • Is there evidence of the same team or similar models making failed predictions?
  • To what extent did the model need extensive calibration/adjustment before the prediction?
  • What role does theory play (if any) in the model?
  • Are the conditions under which this predictive ability is claimed described?

Of course, negative answers to any of the above about a particular model do not mean that the model cannot predict. What we are assessing is the evidence that a model can predict something meaningful about complex social systems. Silver (2012) describes the method by which his team attempts prediction, but this method might be different from that described in most theory-based academic papers.

Possible Outcomes

This exercise might shed some light of some interesting questions, such as:

  • What kind of prediction of complex social systems has been attempted?
  • Are there any examples where the reliable prediction of complex social systems has been achieved?
  • Are there certain kinds of social phenomena which seem to be more amenable to prediction than others?
  • Does aiming to predict with a model entail any difference in method compared with projects that have other aims?
  • Are there any commonalities among the projects that achieve reliable prediction?
  • Is there anything we could (collectively) do that would encourage or document good prediction?

It might well be that whether prediction is achievable might depend on exactly what is meant by the word.

Acknowledgements

This paper resulted from a “lively discussion” after Gary’s talk about prediction (Polhill et al. 2019) at the Social Simulation conference in Mainz. Many thanks to all those who joined in. Of course, prior to this we have had many discussions about prediction. These have included Gary’s previous attempt at a prediction competition (Polhill 2018) and Scott Moss’s arguments about prediction in economics (which have many parallels with the debate here).

Notes

[1] This is sufficient for other empirical purposes, such as explanation (Edmonds et al. 2019)

[2] Confusingly, they sometimes use the word “forecasting” for what we mean by prediction here.

[3] Assuming we have any submitted examples to talk about

References

Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidson, P. & Verhargen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1 (see also http://cfpm.org/discussionpapers/236)

Edmonds, B. (2017) The post-truth drift in social simulation. Social Simulation Conference (SSC2017), Dublin, Ireland. (http://cfpm.org/discussionpapers/195)

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H. & Squazzoni, F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Grimm V, Revilla E, Berger U, Jeltsch F, Mooij WM, Railsback SF, Thulke H-H, Weiner J, Wiegand T, DeAngelis DL (2005) Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science 310: 987-991.

Meese, R.A. & Rogoff, K. (1983) Empirical Exchange Rate models of the Seventies – do they fit out of sample? Journal of International Economics, 14:3-24.

Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/

Polhill, G., Hare, H., Anzola, D., Bauermann, T., French, T., Post, H. and Salt, D. (2019) Using ABMs for prediction: Two thought experiments and a workshop. Social Simulation 2019, Mainz.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Thorngate, W. & Edmonds, B. (2013) Measuring simulation-observation fit: An introduction to ordinal pattern analysis. Journal of Artificial Societies and Social Simulation, 16(2):4. http://jasss.soc.surrey.ac.uk/16/2/4.html

Watts, D. J. (2014). Common Sense and Sociological Explanations. American Journal of Sociology, 120(2), 313-351.


Edmonds, B., Polhill, G and Hales, D. (2019) Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge


 

What are the best journals for publishing re-validations of existing models?

By Annamaria Berea

A few years ago I worked on an ABM that I eventually published in a book. Recently, I have conducted new experiments with the same model, re-analysed the data, and used a different dataset for validation of the model. Where can I publish this new work on an older model? I submitted it to a special issue of a journal, but it was rejected because “the model was not original”. While the model is not original, the new data analysis and validation are, and I think this is even more important given the current discussions about the replication crisis in science.


Berea, A. (2019) What are the best journals or publishers for reports of re-validations of existing models? Review of Artificial Societies and Social Simulation, 31st October 2019. https://rofasss.org/2019/10/30/best-journal/


 

Some Philosophical Viewpoints on Social Simulation

By Bruce Edmonds

How one thinks about knowledge can have a significant impact on how one develops models as well as how one might judge a good model.

  • Pragmatism. Under this view a simulation is a tool for a particular purpose. Different purposes will imply different tests for a good model. What is useful for one purpose might well not be good for another – different kinds of models and modelling processes might be good for each purpose. A simulation whose purpose is to explore the theoretical implications of some assumptions might well be very different from one aiming to explain some observed data. An example of this approach is (Edmonds et al. 2019).
  • Social Constructivism. Here knowledge about social phenomena (including simulation models) is collectively constructed. There is no other kind of knowledge than this. Each simulation is a way of thinking about social reality and plays a part in constructing it. What counts as a suitable construction may vary over time and between cultures etc. What a group of people construct is not necessarily limited to simulations that are related to empirical data. Ahrweiler & Gilbert (2005) seem to take this view, but it is more explicit in some of the participatory modelling work, where the aim is to construct a simulation that is acceptable to a group of people, e.g. (Etienne 2014).
  • Relativism. There are no bad models, only different ways of mediating between your thought and reality (Morgan 1999). If you work hard on developing your model, you do not get a better model, only a different one. This might be a consequence of holding to an Epistemological Constructivist position.
  • Descriptive Realism. A simulation is a picture of some aspect of reality (albeit at a much lower ‘resolution’ and imperfectly). If one obtains a faithful representation of some aspect of reality as a model, one can use it for many different purposes. This could imply very complicated models (depending on what one observes and decides is relevant), which might themselves be difficult to understand. I suspect that many people have this in mind as they develop models, but few explicitly take this approach. Maybe an example is (Fieldhouse et al. 2016).
  • Classic Positivism. Here, the empirical fit and the analytic understanding of the simulation are all that matter, nothing else. Models should be tested against data and discarded if inadequate (or they compete and one is currently ahead empirically). They should also be simple enough that they can be thoroughly understood. There is no obligation to be descriptively realistic. Many physics approaches to social phenomena follow this path (e.g. Helbing 2010, Galam 2012).

Of course, few authors make their philosophical position explicit – usually one has to infer it from their text and modelling style.

References

Ahrweiler, P. and Gilbert, N. (2005). Caffè Nero: the Evaluation of Social Simulation. Journal of Artificial Societies and Social Simulation 8(4):14. http://jasss.soc.surrey.ac.uk/8/4/14.html

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H. and Squazzoni, F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Etienne, M. (ed.) (2014) Companion Modelling: A Participatory Approach to Support Sustainable Development. Springer

Fieldhouse, E., Lessard-Phillips, L. and Edmonds, B. (2016) Cascade or echo chamber? A complex agent-based simulation of voter turnout. Party Politics. 22(2):241-256. DOI:10.1177/1354068815605671

Galam, S. (2012) Sociophysics: A Physicist’s modeling of psycho-political phenomena. Springer.

Helbing, D. (2010). Quantitative sociodynamics: stochastic methods and models of social interaction processes. Springer.

Morgan, M. S., Morrison, M., & Skinner, Q. (Eds.). (1999). Models as mediators: Perspectives on natural and social science (Vol. 52). Cambridge University Press.


Edmonds, B. (2019) Some Philosophical Viewpoints on Social Simulation. Review of Artificial Societies and Social Simulation, 2nd July 2019. https://rofasss.org/2019/07/02/phil-view/


 

Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM

By Sebastian Achter, Melania Borit, Edmund Chattoe-Brown, Christiane Palaretti and Peer-Olaf Siebers

The initiative presented below arose from a Lorentz Center workshop on Integrating Qualitative and Quantitative Evidence using Social Simulation (8-12 April 2019, Leiden, the Netherlands). At the beginning of this workshop, the attendees divided themselves into teams, each aiming to work on a specific challenge within the broad domain of the workshop topic. Our team took up the challenge of looking at “Rigour, Transparency, and Reuse”. The aim that emerged from our initial discussions was to create a framework for augmenting rigour and transparency (RAT) of data use in ABM when designing, analysing and publishing such models.

One element of the framework that the group worked on was a roadmap of the modelling process in ABM, with particular reference to the use of different kinds of data. This roadmap was used to generate the second element of the framework: a protocol consisting of a set of questions which, if answered by the modeller, would ensure that the published model is as rigorous and transparent in terms of data use as it needs to be for the reader to understand and reproduce it.
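The article does not reproduce the protocol's actual questions, so the following is a purely hypothetical sketch (in Python) of how such a data-use checklist might be recorded alongside a model in machine-readable form. The question texts, identifiers and answers are invented stand-ins, not the RAT protocol itself.

```python
# Hypothetical, invented example of a machine-readable data-use checklist that
# could accompany a model; none of these entries are the actual RAT questions.
data_use_protocol = {
    "model": "my_abm_v0.3",
    "questions": [
        {"id": "D1",
         "question": "Which datasets were used to parameterise the model?",
         "answer": "Hypothetical household survey 2018"},
        {"id": "D2",
         "question": "How were the data transformed before use?",
         "answer": "Aggregated to weekly counts; missing values dropped"},
        {"id": "D3",
         "question": "Which data were held back for validation, and why?",
         "answer": "Final 12 weeks of the series, to test out-of-sample fit"},
    ],
}

# A reviewer or journal could automatically flag unanswered items.
unanswered = [q["id"] for q in data_use_protocol["questions"] if not q["answer"]]
print("unanswered items:", unanswered or "none")
```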

The group (which had diverse modelling approaches and spanned a number of disciplines) recognised the challenges of this approach and much of the week was spent examining cases and defining terms so that the approach did not assume one particular kind of theory, one particular aim of modelling, and so on. To this end, we intend that the framework should be thoroughly tested against real research to ensure its general applicability and ease of use.

The team was also very keen not to “reinvent the wheel”, but to try to develop the RAT approach (in connection with data use) to augment and “join up” existing protocols or documentation standards for specific parts of the modelling process. For example, the ODD protocol (Grimm et al. 2010) and its variants are generally accepted as the established way of documenting ABM, but they do not request rigorous documentation or justification of the data used in the modelling process.

The plan to move forward with the development of the framework is organised around three journal articles and associated dissemination activities:

  • A literature review of best (data use) documentation and practice in other disciplines and research methods (e.g. PRISMA – Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
  • A literature review of available documentation tools in ABM (e.g. ODD and its variants, DOE, the “Info” pane of NetLogo, EABSS)
  • An initial statement of the goals of RAT, the roadmap, the protocol and the process of testing these resources for usability and effectiveness
  • A presentation, poster, and round table at SSC 2019 (Mainz)

We would appreciate suggestions for items that should be included in the literature reviews, “beta testers” and critical readers for the roadmap and protocol (from as many disciplines and modelling approaches as possible), reactions (whether positive or negative) to the initiative itself (including joining it!) and participation in the various activities we plan at Mainz. If you are interested in any of these roles, please email Melania Borit (melania.borit@uit.no).

References

Grimm, V., Berger, U., DeAngelis, D. L., Polhill, J. G., Giske, J. and Railsback, S. F. (2010) ‘The ODD Protocol: A Review and First Update’, Ecological Modelling, 221(23):2760–2768. doi:10.1016/j.ecolmodel.2010.08.019


Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. and Siebers, P.-O.(2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/


 

Vision for a more rigorous “replication first” modelling journal

By David Hales

A proposal for yet another journal? My first reaction to any such suggestion is to argue that we already have far too many journals. However, hear me out.

My vision is for a modelling journal that is far more rigorous than what we currently have. It would be aimed at work in which a significant aspect of the result is derived, in an empirical way, from the output of a complex-systems-type computer model.

I propose that the journal would incorporate, as part of the reviewing process, at least one replication of the model by an independent reviewer. Hence models would be verified as independently replicated before being published.

In addition the published article would include an appendix detailing the issues raised during the replication process.

Carrying out such an exercise would almost certainly lead to clarifications of the original article, such that it would be easier for others to replicate and would give more confidence in the results. Both readers and authors would gain significantly from this.

I would be much more willing to take modelling articles seriously if I knew they had already been independently replicated.

Here is a question that immediately springs to mind: replicating a model is a time consuming and costly business requiring significant expertise. Why would a reviewer do this?

One possible solution would be to provide an incentive in the following form. Final articles published in the journal would include the replicators as co-authors of the paper – specifically credited with the independent replication work that they write up in the appendix.

This would mean that good, clear and interesting initial articles would be desirable to replicate since the reviewer / replicator would obtain citations.

This could be a good task for an able graduate student allowing them to gain experience, contacts and citations.

Why would people submit good work to such a journal? This is not as easy to answer. It would almost certainly mean more work from their perspective and a time delay (since replication would almost certainly take more time than traditional review). However there is the benefit of actually getting a replication of their model and producing a final article that others would be able to engage with more easily.

Also I think it would be necessary, given the above aspects, to put quite a high bar on what is accepted for review / replication in the first place. Articles reviewed would have to present significant and new results in areas of fairly wide interest. Hence incremental or highly specific models would be ruled out. Also articles that did not contain enough detail to even attempt a replication would be rejected on that basis. Hence one can envisage a two stage review process where the editors decide if the submitted paper is “right” for a full replication review before soliciting replications.

My vision is of a low output, high quality, high initial rejection journal. Perhaps publishing 3 articles every 6 months. Ideally this would support a reputation for high quality over time.


Hales, D. (2018) Vision for a more rigorous “replication first” modelling journal. Review of Artificial Societies and Social Simulation, 5th November 2018. https://rofasss.org/2018/11/05/dh/


 

Escaping the modelling crisis

By Emile Chappin

Let me explain something I call the ‘modelling crisis’. It is something that many modellers in one way or another encounter. By being aware we may resolve such a crisis, avoid frustration, and, hopefully, save the world from some bad modelling.

Views on modelling

I first present two views on modelling. Bear with me!

[View 1: Model = world] The first view is that models capture things in the real world pretty well, and that some models are pretty much representative. And of course this is true. You can always add more things to the model, and you may well have done so. But if you think along this line, you start seeing the model as if it were the world. At some point you may become rather optimistic about modelling. Well, I really mean to say, you become naive: the model is fabulous. The model can help anyone with any problem only somewhat related to the original idea behind it. You don’t waste time worrying about the details and you sell the model to everyone listening, and you’re quite convinced in the way you do this. You may come to believe that the model is the truth.

[View 2: Model ≠ world] The second view is that the model can never represent the world adequately enough to really predict what is going on. And of course this is true. But if you think along this line, you can get pretty frustrated: the model is never good enough, because factor A is not in there, mechanism B is biased, etc. At one point you may become quite pessimistic about ‘the model’: will it help anyone anytime soon? You may come to the belief that the model is nonsense (and that modelling itself is nonsense).

As a modeller, you may encounter these views in your modelling journey: in how your model is perceived, in how your model is compared to other models, and in the questions you’re asked about your model. It may also be the case that you get stuck in one of these views yourself; you may not be aware of it, but you might still behave accordingly.

Possible consequences

Let’s conceive the idea of having a business doing modelling: we are ambitious and successful! What might happen over time with our business and with our clients?

  • Your clients love your business – Clients can ask us any question and they will get a very precise answer back! Anytime we give a good result, a result that comes true in some sense, we are praised, and our reputation grows. Anytime we give a bad result, something that turns out quite different from what we’d expected, we can blame the particular circumstances which could not have been foreseen or argue that this result is basically out of the original scope. Our modesty makes our reputation grow! And it makes us proud!
  • Assets need protection – Over time, our model/business reputation becomes more and more important. You should ask us for any modelling job because we’ve modelled (this) for decades. Any question goes into our fabulous model that can Answer Any Question In A Minute (AAQIAM). Our models became patchworks because of questions that were not so easy to fit in. But obviously, as a whole, the model is great. More than great: it is the best! The models are our key assets: they need to be protected. In a board meeting we decide that we should not show the insides of our models anymore. We should keep them secret.
  • Modelling schools – Habits emerge of how our models are used, what kind of analysis we do, and which we don’t. Core assumptions that we always make with our model are accepted and forgotten. We get used to those assumptions, we won’t change them anyway and probably we can’t. It is not really needed to think about the consequences of those assumptions anyway. We stick to the basics, represent the results in the way that the client can use it, and mention in footnotes how much detail is underneath, and that some caution is warranted in interpretation of the results. Other modelling schools may also emerge, but they really can’t deliver the precision/breadth of what we have been doing for decades, so they are not relevant, not really, anyway.
  • Distrusting all models – Another kind of people, typically not your clients, start distrusting the modelling business completely. They get upset in discussions: why worry about discussing the model details when there is always something missing anyway? And it is impossible to quantify anything, really. They decide that it is better to ignore model geeks completely and just follow their own reasoning. It doesn’t matter that this reasoning can’t be backed up with facts (such as a modelled reality); they don’t believe that could be done anyway. So the problem is not their reasoning, it is the inability of quantitative science.

Here is the crisis

At this point, people stop debating the crucial elements in our models and the ambition for model innovation goes out of the window. I would say, we end up in a modelling crisis. At some point, decisions have to be made in the real world, and they can either be inspired by good modelling, by bad modelling, or not by modelling at all.

The way out of the modelling crisis

How can such a modelling crisis be resolved? First, we need to accept that the model ≠ world, so we don’t necessarily need to predict. We also need to accept that modelling can certainly be useful, for example when it helps to find clear and explicit reasoning/underpinning of an argument.

  • We should focus more on the problem that we really want to address, and, for that problem, argue how modelling can actually contribute to a solution. This should result in better modelling questions, because modelling is a means, not an end. We should stop trying to outsource the thinking to a model.
  • Following from this point, we should be very explicit about the modelling purpose: in what way does the modelling contribute to solving the problem identified earlier? We have to be aware that different kinds of purposes lead to different styles of reasoning, and, consequently, to different strengths and weaknesses in the modelling that we do. Consider the differences between prediction, explanation, theoretical exposition, description and illustration as types of modelling purpose, see (Edmonds 2017), (more types are possible).
  • Following this point, we should accept the importance of creativity and of process in modelling. Science is about reasoned, reproducible work. But, paradoxically, good science does not come from a linear, step-by-step approach. Accepting this, modelling can help both in the creative process – exploring possible ideas, explicating an intuition – and in the justification and underpinning of a very particular line of reasoning. It is important to avoid mixing these perspectives up. The modelling process is as relevant as the model outcome. In the end, the reasoning should be strong and stand alone (also without the model), but you may have needed the model to find it.
  • We should adhere to better modelling practices and develop the tooling to accommodate them. For ABM, many successful developments are ongoing: we should be explicit and transparent about the assumptions we are making (e.g. via the ODD protocol, Polhill et al. 2008). We should develop requirements and procedures for modelling studies with respect to how the analysis is performed – validity, robustness of findings, sensitivity of outcomes, analysis of uncertainties – even if clients don’t ask for it (a toy sensitivity sweep is sketched after this list). For some sectors, such requirements have been developed. The discussion around practices and validation is prominent for ABMs, where some ‘issues’ may be considered obvious (see for instance Heath, Hill and Ciarallo 2009, or the efforts through CoMSES), but these questions should be asked of any type of model. In fact, we should share, debate on, and work with all types of models that are already out there (again, such as the great efforts through CoMSES), and consider forms of multi-modelling to save time and effort and to benefit from the strengths of different model formalisms.
  • We should start looking for good examples: get inspired and share them. Personally I like Basic Traffic from the NetLogo model library: it does not predict where traffic jams will be, but it clearly shows the worth of slowing down earlier. Another may be the Limits to Growth model, irrespective of its predictive power.
  • We should start doing it better ourselves, so that we show others that it can be done!
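As referred to in the list above, here is a toy sketch (Python; an invented diffusion model, not any real study) of the kind of routine robustness and sensitivity analysis being argued for: sweep a key parameter, repeat each setting with several random seeds, and report the spread of the outcome rather than a single run.

```python
# Toy sensitivity/robustness sweep over an invented diffusion model: vary one
# parameter, repeat with several seeds, report the spread of the outcome.
import random
import statistics

def toy_model(adoption_prob, seed, n_agents=500, steps=50):
    """Agents adopt an innovation when they meet an adopter, with some probability."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    adopted[0] = True
    for _ in range(steps):
        for i in range(n_agents):
            if not adopted[i] and adopted[rng.randrange(n_agents)]:
                if rng.random() < adoption_prob:
                    adopted[i] = True
    return sum(adopted) / n_agents

for p in (0.02, 0.05, 0.10):
    outcomes = [toy_model(p, seed) for seed in range(10)]
    print(f"adoption_prob={p:.2f}  mean={statistics.mean(outcomes):.2f}  "
          f"min={min(outcomes):.2f}  max={max(outcomes):.2f}")
```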

References

Heath, B., Hill, R. and Ciarallo, F. (2009). A Survey of Agent-Based Modeling Practices (January 1998 to July 2008). Journal of Artificial Societies and Social Simulation 12(4):9. http://jasss.soc.surrey.ac.uk/12/4/9.html

Polhill, J. Gary, Dawn Parker, Daniel Brown, and Volker Grimm. (2008). Using the ODD Protocol for Describing Three Agent-Based Social Simulation Models of Land-Use Change. Journal of Artificial Societies and Social Simulation 11(2): 3.

Edmonds, B. (2017) Five modelling purposes, Centre for Policy Modelling Discussion Paper CPM-17-238, http://cfpm.org/discussionpapers/192/


Chappin, E.J.L. (2018) Escaping the modelling crisis. Review of Artificial Societies and Social Simulation, 12th October 2018. https://rofasss.org/2018/10/12/ec/