Tag Archives: ABM

Does It Take Two (And A Creaky Search Engine) To Make An Outstation? Hunting Highly Cited Opinion Dynamics Articles in the Journal of Artificial Societies and Social Simulation (JASSS)

By Edmund Chattoe-Brown

In an important article, Squazzoni and Casnici (2013) raise the issue of how social simulation (as manifested in the Journal of Artificial Societies and Social Simulation – hereafter JASSS – probably the journal that has published the most research of this kind for the longest time) cites and is cited in the wider scientific community. They discuss this in terms of social simulation being a potential “outstation” of social science (but better integrated into physical science and computing). This short note considers the same argument in reverse. As an important site of social simulation research, is it the case that JASSS is effectively representing research done more widely across the sciences?

The method used to investigate this was extremely simple (and could thus easily be extended and replicated). On 28.08.21, using the search term “opinion dynamics” in “all fields”, all sources from Web of Science (www.webofknowledge.com, hereafter WOS) that were flagged as “highly cited” were selected as a sample. For each article (only articles turned out to be highly cited), the title was searched in JASSS and the number of hits recorded. Common sense was applied in this search process to maximise the chances of success. So if a title had two sub-clauses, these were searched jointly as quotations (to avoid the “hits” being very sensitive to the reproduction of punctuation linking the clauses). In addition, the title of the journal in which the article appeared was searched, to give a wider sense of how well the relevant journal is known in JASSS.

However, now we come to the issue of the creaky search engine (as well as other limitations of quick and dirty searches). Obviously, searching for the exact title will not find variants of that title with spelling mistakes or attempts to standardise spelling (i.e. changing behavior to behaviour). Further, it turns out that the Google search engine (which JASSS uses) does not promise the consistency that often seems to be assumed for it (http://jdebp.uk/FGA/google-result-counts-are-a-meaningless-metric.html). For example, when I searched for “SIAM Review” I mostly got 77 hits, rather often 37 hits and very rarely 0 or 1 hits. (PDFs are available for three of these outcomes from the author but the fourth could not be reproduced to be recorded in the time available.) This result occurred when one search took place seconds after another, so it is not, for example, a result of substantive changes to the content of JASSS. To deal with this problem I tried to confirm the presence of a particular article by searching jointly for all its co-authors. Mostly this approach gave a similar result (but where it does not, it is noted in the table below). In addition, wherever there were a relatively large number of hits for a specific search, some of these were usually not the ones intended. (For example, no hit on the term “global challenges” actually turned out to be for the journal Global Challenges.) Finally, JASSS often gives an oddly inconsistent number of hits for a specific article: it may appear as PDF and HTML as well as in multiple indices, or may occur just once. (This discouraged attempts to go from hits to the specific number of unique articles citing these WOS sources. As it turns out, this additional detail would have added little to the headline result.)
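Since the claim is that the analysis “could easily be extended and replicated”, here is a minimal sketch of how such a hit-count audit might be scripted. This is purely illustrative and not part of the original study: search_hits is a hypothetical stand-in for whatever interface is used to query the JASSS site search, and the repeated queries are there precisely to expose the inconsistent counts described above.

```python
# Illustrative sketch only (not part of the original study).
# `search_hits` is a hypothetical placeholder for a function that submits a
# quoted query to the JASSS site search and returns the reported hit count.
from collections import Counter

def audit_hits(queries, search_hits, repeats=10):
    """Run each query several times and record the spread of reported hit counts."""
    spread = {}
    for query in queries:
        counts = [search_hits(f'"{query}"') for _ in range(repeats)]
        spread[query] = Counter(counts)          # e.g. Counter({77: 8, 37: 2})
    return spread

if __name__ == "__main__":
    import random

    # Dummy stand-in so the sketch runs end to end; it mimics the sort of
    # inconsistency reported above for the query "SIAM Review".
    def search_hits(query):
        return random.choice([77, 77, 77, 37])

    for query, counts in audit_hits(["SIAM Review"], search_hits).items():
        print(query, dict(counts))
```

Recording the spread of counts for each query (rather than a single number) makes the search-engine inconsistency itself part of the reported result.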

The term “opinion dynamics” was chosen somewhat arbitrarily (for reasons connected with other research) and it is not claimed that this term is even close to a definitive way of capturing any models connected with opinion/attitude change. Nonetheless, it is clear that the number of hits and the type of articles reported on WOS (which is curated and quality controlled) are sufficient (and sufficiently relevant) for this to be a serviceable search term to identify a solid field of research in JASSS (and elsewhere). I shall return to this issue.

The results, shown in the table below, are striking on several counts. (All these sources are fully cited in the references at the end of this article.) Most noticeably, JASSS is barely citing a significant number of articles that are very widely cited elsewhere. Because these are highly cited in WOS, this cannot be because they are too new or too inaccessible. The second point is the huge discrepancy in citation for the one article on the WOS list that appears in JASSS itself (Flache et al. 2017). Thirdly, although some of these articles appear in journals that JASSS otherwise does not cite (like Global Challenges and Dynamic Games and Applications), others appear in journals that are known to JASSS and generally cited (like SIAM Review).

Reference | WOS Citations | Article Title Hits in JASSS | Journal Title Hits in JASSS
Acemoğlu and Ozdaglar (2011) | 301 | 0 (1 based on joint authors) | 2
Motsch and Tadmor (2014) | 214 | 0 | 77
Van Der Linden et al. (2017) | 191 | 0 | 6 (but none for the journal)
Acemoğlu et al. (2013) | 186 | 1 | 2 (but 1 article)
Proskurnikov et al. (2016) | 165 | 0 | 9
Dong et al. (2017) | 147 | 0 | 48 (but rather few for the journal)
Jia et al. (2015) | 118 | 0 | 77
Dong et al. (2018) | 117 | 0 (1 based on joint authors) | 48 (but rather few for the journal)
Flache et al. (2017) | 86 | 58 (17 based on joint authors) | N/A
Ureña et al. (2019) | 72 | 0 | 6
Bu et al. (2020) | 56 | 0 | 5
Zhang et al. (2020) | 55 | 0 | 33 (but only some of these are for the journal)
Xiong et al. (2020) | 28 | 0 | 1
Carrillo et al. (2020) | 13 | 0 | 0

One possible interpretation of this result is simply that none of the most highly cited articles in WOS featuring the term “opinion dynamics” happen to be more than incidentally relevant to the scientific interests of JASSS. On consideration, however, this seems a rather improbable coincidence. Firstly, these articles were chosen exactly because they are highly cited, so we would have to explain how they could be perceived as so useful generally but specifically not in JASSS. Secondly, the same term (“opinion dynamics”) consistently generates 254 hits in JASSS, suggesting that the problem isn’t a lack of overlap in terminology or research interests.

This situation, however, makes a more conclusive explanation difficult. The state of affairs here is not that these articles are being cited and then rejected on scientific grounds given the interests of JASSS (thus providing arguments I could examine). It is that they are barely being cited at all. Unfortunately, it is almost impossible to establish why something is not happening. Perhaps JASSS authors are not aware of these articles to begin with. Perhaps they are aware but do not see the wider scientific value of critiquing them, or of arguing for their irrelevance in print.

But, given that the problem is non-citation, my concern can be made more persuasive (perhaps as persuasive as it can be, given the problems of convincingly explaining an absence) by investigating the articles themselves. (My thanks are due to Bruce Edmonds for encouraging me to strengthen the argument in this way.) There are definitely some recurring patterns in this sample. Firstly, a significant proportion of the articles are highly mathematical and therefore (as Agent-Based Modelling often criticises) rely on extreme simplifying assumptions and toy examples. Even here, however, it is not self-evident that such articles should not be cited in JASSS merely because they are mathematical. JASSS has itself published relatively mathematical articles and, if an article contains a mathematical model that could be “agentised” (thus relaxing its extreme assumptions), which is no less empirical than similar models in JASSS (or has particularly interesting behaviours), then it is hard to see why this should not be discussed by at least a few JASSS authors. A clear example of this is provided by Acemoğlu et al. (2013), which argues that existing opinion dynamics models fail to produce the ongoing fluctuations of opinion observed in real data (see, for example, Figures 1-3 in Chattoe-Brown 2014, which also raises concerns about the face validity of popular social simulations of opinion dynamics). In fact, the assumptions of this model could easily be questioned (and real data involves turning points and not just fluctuations) but the point is that JASSS articles are not citing it and rejecting it based on argument but simply not citing it. A model capable of generating ongoing opinion fluctuations (however imperfect) is simply too important to the current state of opinion dynamics research in social simulation not to be considered at all. Another (though less conclusive) example is Motsch and Tadmor (2014), which presents a model suggesting (counter-intuitively) that interaction based on heterophily can better achieve consensus than interaction based on homophily. Of course one can reject such an assumption on empirical grounds, but JASSS is not currently doing that (and in fact the term heterophily is unknown in the journal except in the title of a cited article).

Secondly, there are also a number of articles which, while not providing important results, seem no less plausible or novel than typical OD articles that are published in JASSS. For example, Jia et al. (2015) add self-appraisal and social power to a standard OD model. Between debates, agents amend the efficacy they believe that they and others have in terms of swaying the outcome and take that into account going forward. Proskurnikov et al. (2016) present the results of a model in which agents can have negative ties with each other (as well as the more usual positive ones) and thus consider the coevolution of positive/negative sentiments and influence (describing what they call hostile camps, i.e. groups with positive ties to each other and negative ties to other groups). This is distinct from the common repulsive effect in OD models, where agents dislike the opinions of others (rather than disliking the others themselves).

Finally, both Dong et al. (2017) and Zhang et al. (2020) reach for the idea (through modelling) that experts and leaders in OD models may not just be randomly scattered through the population as types but may exist because of formal organisations or accidents of social structure: This particular agent is either deliberately appointed to have more influence or happens to have it because of their network position.

On a completely different tack, two articles (Dong et al. 2018 and Acemoğlu and Ozdaglar 2011) are literature reviews or syntheses on relevant topics and it is hard to see how such broad-ranging articles could have so little value to OD research in JASSS.

It will be admitted that some of the articles in the sample are hard to evaluate with certainty. Mathematical approaches often seem to be more interested in generating mathematics than in justifying its likely value. This is particularly problematic when combined with a suggestion that the product of the research may be instrumental algorithms (designed to get things done) rather than descriptive ones (designed to understand social behaviour). An example of this is provided by several articles which talk about achieving consensus without really explaining whether this is a technical goal (for example in a neural network) or a social phenomenon and, if the latter, whether this places constraints on what is legitimate: You can reach consensus by debate but not by shooting dissenters!

But as well as specific ideas in specific models, this sample of articles also suggests a different emphasis from that currently found within JASSS OD research. For example, there is much more interest in deliberately achieving consensus (and the corresponding hazards of manipulation or misinformation impeding that). Reading these articles collectively gives a sense that JASSS OD models are very much liberal democratic: Agents honestly express their views (or at most are somewhat reticent to protect themselves). They decently expect the will of the people to prevail. They do not lie strategically to sway the influential, spread rumours to discredit the opinions of opponents or flood the debate with bots. Again, this darker vision is no more right a priori than the liberal democratic one, but JASSS should at least be engaging with articles modelling (or providing data on – see Van Der Linden et al. 2017) such phenomena in an OD context. (Although misinformation is mentioned in some OD articles in JASSS, it does not seem to be modelled. There also seems to be another surprising glitch in the search engine, which considers the term “fake news” to be a hit for misinformation!) This also puts a new slant on an ongoing challenge in OD research: identifying a plausible relationship between fact and opinion. Is misinformation a different field of research (on the grounds that opinions can never be factually wrong) or is it possible for the misinformed to develop mis-opinions (those that they would change if what they knew changed)? Is it really the case that Brexiteers, for example, are completely indifferent to the economic consequences which will reveal themselves, or did they simply have mistaken beliefs about how high those costs might turn out to be, which will cause them to regret their decision at some later stage?

Thus to sum up, while some of the articles in the sample can be dismissed as either irrelevant to JASSS or having a potential relevance that is hard to establish, the majority cannot reasonably be regarded in this way (and a few are clearly important to the existing state of OD research.) While we cannot explain why these articles are not in fact cited, we can thus call into question one possible (Panglossian) explanation for the observed pattern (that they are not cited because they have nothing to contribute).

Apart from the striking nature of the result and its obvious implication (if social simulators want to be cited more widely they need to make sure they are also citing the work of others appropriately) this study has two wider (related) implications for practice.

Firstly, systematic literature reviewing (see, for example, Hansen et al. 2019 – not published in JASSS) needs to be better enforced in social simulation: “Systematic literature review” gets just 7 hits in JASSS. It is not enough to cite just what you happen to have read or models that resemble your own; you need to be citing what the community might otherwise not be aware of or what challenges your own model assumptions. (Although, in my judgement, key assumptions of Acemoğlu et al. 2013 are implausible, I don’t think that I could justify non-subjectively that they are any more implausible than those of the Zaller-Deffuant model – Malarz et al. 2011 – given the huge awareness discrepancy which the two models manifest in social simulation.)

Secondly, we need to rethink the nature of literature reviewing as part of progressive research. I have used “opinion dynamics” here not because it is the perfect term to identify all models of opinion and attitude change but because it throws up enough hits to show that this term is widely used in social simulation. Because I have clearly stated my search term, others can critique it and extend my analysis using other relevant terms like “opinion change” or “consensus formation”. A literature review that is just a bunch of arbitrary stuff cannot be critiqued or improved systematically (rather than nit-picked for specific omissions – as reviewers often do – and even then the critique can’t tell what should have been included if there are no clearly stated search criteria). It should not be possible for JASSS (and the social simulation community it represents) simply to disregard an article whose implications for OD are as potentially important as those of Acemoğlu et al. (2013). Even if this article turned out to be completely wrong-headed, we need to have enough awareness of it to be able to say why before setting it aside. (Interestingly, the one citation it does receive in JASSS can be summarised as “there are some other models broadly like this” with no detailed discussion at all – and thus no clear statement of how the model presented in the citing article adds to previous models – but uninformative citation is a separate problem.)

Acknowledgements

This article is part of “Towards Realistic Computational Models of Social Influence Dynamics”, a project funded through ESRC (ES/S015159/1) by ORA Round 5.

References

Acemoğlu, Daron and Ozdaglar, Asuman (2011) ‘Opinion Dynamics and Learning in Social Networks’, Dynamic Games and Applications, 1(1), March, pp. 3-49. doi:10.1007/s13235-010-0004-1

Acemoğlu, Daron, Como, Giacomo, Fagnani, Fabio and Ozdaglar, Asuman (2013) ‘Opinion Fluctuations and Disagreement in Social Networks’, Mathematics of Operations Research, 38(1), February, pp. 1-27. doi:10.1287/moor.1120.0570

Bu, Zhan, Li, Hui-Jia, Zhang, Chengcui, Cao, Jie, Li, Aihua and Shi, Yong (2020) ‘Graph K-Means Based on Leader Identification, Dynamic Game, and Opinion Dynamics’, IEEE Transactions on Knowledge and Data Engineering, 32(7), July, pp. 1348-1361. doi:10.1109/TKDE.2019.2903712

Carrillo, J. A., Gvalani, R. S., Pavliotis, G. A. and Schlichting, A. (2020) ‘Long-Time Behaviour and Phase Transitions for the Mckean–Vlasov Equation on the Torus’, Archive for Rational Mechanics and Analysis, 235(1), January, pp. 635-690. doi:10.1007/s00205-019-01430-4

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1), February, article 16, <http://www.socresonline.org.uk/19/1/16.html>. doi:10.5153/sro.3315

Dong, Yucheng, Ding, Zhaogang, Martínez, Luis and Herrera, Francisco (2017) ‘Managing Consensus Based on Leadership in Opinion Dynamics’, Information Sciences, 397-398, August, pp. 187-205. doi:10.1016/j.ins.2017.02.052

Dong, Yucheng, Zhan, Min, Kou, Gang, Ding, Zhaogang and Liang, Haiming (2018) ‘A Survey on the Fusion Process in Opinion Dynamics’, Information Fusion, 43, September, pp. 57-65. doi:10.1016/j.inffus.2017.11.009

Flache, Andreas, Mäs, Michael, Feliciani, Thomas, Chattoe-Brown, Edmund, Deffuant, Guillaume, Huet, Sylvie and Lorenz, Jan (2017) ‘Models of Social Influence: Towards the Next Frontiers’, Journal of Artificial Societies and Social Simulation, 20(4), October, article 2, <http://jasss.soc.surrey.ac.uk/20/4/2.html>. doi:10.18564/jasss.3521

Hansen, Paula, Liu, Xin and Morrison, Gregory M. (2019) ‘Agent-Based Modelling and Socio-Technical Energy Transitions: A Systematic Literature Review’, Energy Research and Social Science, 49, March, pp. 41-52. doi:10.1016/j.erss.2018.10.021

Jia, Peng, MirTabatabaei, Anahita, Friedkin, Noah E. and Bullo, Francesco (2015) ‘Opinion Dynamics and the Evolution of Social Power in Influence Networks’, SIAM Review, 57(3), pp. 367-397. doi:10.1137/130913250

Malarz, Krzysztof, Gronek, Piotr and Kulakowski, Krzysztof (2011) ‘Zaller-Deffuant Model of Mass Opinion’, Journal of Artificial Societies and Social Simulation, 14(1), 2, <https://www.jasss.org/14/1/2.html>. doi:10.18564/jasss.1719

Motsch, Sebastien and Tadmor, Eitan (2014) ‘Heterophilious Dynamics Enhances Consensus’, SIAM Review, 56(4), pp. 577-621. doi:10.1137/120901866

Proskurnikov, Anton V., Matveev, Alexey S. and Cao, Ming (2016) ‘Opinion Dynamics in Social Networks With Hostile Camps: Consensus vs. Polarization’, IEEE Transactions on Automatic Control, 61(6), June, pp. 1524-1536. doi:10.1109/TAC.2015.2471655

Squazzoni, Flaminio and Casnici, Niccolò (2013) ‘Is Social Simulation a Social Science Outstation? A Bibliometric Analysis of the Impact of JASSS’, Journal of Artificial Societies and Social Simulation, 16(1), 10, <http://jasss.soc.surrey.ac.uk/16/1/10.html>. doi:10.18564/jasss.2192

Ureña, Raquel, Chiclana, Francisco, Melançon, Guy and Herrera-Viedma, Enrique (2019) ‘A Social Network Based Approach for Consensus Achievement in Multiperson Decision Making’, Information Fusion, 47, May, pp. 72-87. doi:10.1016/j.inffus.2018.07.006

Van Der Linden, Sander, Leiserowitz, Anthony, Rosenthal, Seth and Maibach, Edward (2017) ‘Inoculating the Public against Misinformation about Climate Change’, Global Challenges, 1(2), 27 February, article 1600008. doi:10.1002/gch2.201600008

Xiong, Fei, Wang, Ximeng, Pan, Shirui, Yang, Hong, Wang, Haishuai and Zhang, Chengqi (2020) ‘Social Recommendation With Evolutionary Opinion Dynamics’, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50(10), October, pp. 3804-3816. doi:10.1109/TSMC.2018.2854000

Zhang, Zhen, Gao, Yuan and Li, Zhuolin (2020) ‘Consensus Reaching for Social Network Group Decision Making by Considering Leadership and Bounded Confidence’, Knowledge-Based Systems, 204, 27 September, article 106240. doi:10.1016/j.knosys.2020.106240


Chattoe-Brown, E. (2021) Does It Take Two (And A Creaky Search Engine) To Make An Outstation? Hunting Highly Cited Opinion Dynamics Articles in the Journal of Artificial Societies and Social Simulation (JASSS). Review of Artificial Societies and Social Simulation, 19th August 2021. https://rofasss.org/2021/08/19/outstation/


 

Query: How could we make Data Mining tools more useful for Agent-Based modelling?

By Robin Faber

I am currently doing my master’s thesis in Computer Science at TU Delft on Data Mining (DM) in Agent-Based Simulation. The goal of this thesis is to provide model designers and analysts with DM tools to make the evaluation of models easier.

The main idea is to create a tool in Python that connects with NetLogo to run models, design experiments, and obtain and present the output with visualisations. Because Python has many data analysis libraries, it provides tools that NetLogo lacks for analysing the output of ABMs. From my understanding, there are some tools in NetLogo such as BehaviorSpace to run experiments, but this is quite basic and produces a text file which still has to be analysed elsewhere. What I would like to do is develop a library in Python that streamlines this whole process of “run model → get output → analyse output”, with a focus on usability and ease of use, to also make it available for people who are not experienced programmers. However, because my background is in Computer Science, I obviously lack some knowledge of what is needed in order for the tool to be useful and usable for an actual model designer.
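For what it is worth, part of this pipeline can already be assembled from existing libraries. The sketch below is my illustration rather than a finished tool: it assumes the third-party pyNetLogo library (whose import name differs between versions) plus pandas, a placeholder model path and reporters, and a model that follows the usual setup/go convention.

```python
# Minimal "run model -> get output -> analyse output" sketch using pyNetLogo.
# Assumptions: pyNetLogo and pandas are installed, NetLogo is available on this
# machine, and the model defines the standard `setup` and `go` procedures.
import pandas as pd
import pyNetLogo

netlogo = pyNetLogo.NetLogoLink(gui=False)             # headless NetLogo session
netlogo.load_model('models/my_model.nlogo')            # placeholder model path
netlogo.command('setup')

# Record two (placeholder) reporters at every tick for 200 ticks.
results = netlogo.repeat_report(['count turtles',
                                 'mean [energy] of turtles'], reps=200)

df = pd.DataFrame(results)
print(df.describe())                                   # quick summary statistics
df.plot(title='Reporter values over time')             # needs matplotlib installed

netlogo.kill_workspace()
```

A usable tool for non-programmers would presumably wrap steps like these behind a simpler interface (experiment design, parameter sweeps, standard plots) so that model designers never have to touch this code directly.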

The three main questions I would like to ask the RofASSS readers are these:

  • Which requirements would you define to make the tool easy to use for non-programmers? (e.g. documentation, GUI, lines of code, data structures)
  • What type of information is important to obtain from a simulation? (e.g. variables, locations, agent counts)
  • How should the information obtained from the model output/experiments be presented? (e.g. types of graphs/tables/visualisation)

If you have any questions, comments or would like to schedule a call to discuss this topic, please contact me using the comments facility at the bottom of this post (for comments) or emailing me at: r.j.faber@student.tudelft.nl.


Faber, R.J. (2021) Query: How could we make Data Mining tools more useful for Agent-Based Modelling. Review of Artificial Societies and Social Simulation, 5th February 2021. https://rofasss.org/2021/02/3/dm4abm/


 

A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation

By Edmund Chattoe-Brown

The Motivation

Research that confronts models with data is still sufficiently rare that it is hard to get a representative sense of how it is done and how convincing the results are simply by “background reading”. One way to advance good quality empirical modelling is therefore simply to make it more visible in quantity. With this in mind I have constructed (building on the work of Angus and Hassani-Mahmooei 2015) the first version of a bibliography listing all ABM attempting empirical validation in JASSS between 1998 and 2019 (along with a few other examples) – which generates 68 items in all. Each entry gives a full reference and also describes what comparisons are made and where in the article they occur. In addition, the document contains a provisional bibliography of articles giving advice or technical support to validation and lists three survey articles that categorise large samples of simulations by their relationships to data (which served as actual or potential sources for the bibliography).

With thanks to Bruce Edmonds, this first version of the bibliography has been made available as Centre for Policy Modelling Discussion Paper CPM-20-216, which can be downloaded from http://cfpm.org/discussionpapers/256.

The Argument

It may seem quite surprising to focus only on validation initially, but there is an argument (Chattoe-Brown 2019) which says that this is a more fundamental challenge to the quality of a model than calibration. A model that cannot track real data well, even when its parameters are tuned to do so, is clearly a fundamentally inadequate model. Only once some measure of validation has been achieved can we decide how “convincing” it is (comparing independent empirical calibration with parameter tuning, for example). Arguably, without validation, we cannot really be sure whether a model tells us anything about the real world at all (no matter how plausible any narrative about its assumptions may appear). This can be seen as a consequence of the arguments about complexity routinely made by ABM practitioners, since the plausibility of the assumptions does not map intuitively onto the plausibility of the outputs.
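To make the phrase “track real data” slightly more concrete, here is a minimal, purely illustrative sketch (the numbers are invented placeholders, not data from any study in the bibliography) of the kind of comparison the bibliography records: simulated output set against an observed series via a simple error statistic.

```python
# Illustrative only: placeholder numbers, not taken from any study in the bibliography.
import numpy as np

def rmse(observed, simulated):
    """Root mean squared error between an observed and a simulated series."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

observed  = [12, 15, 21, 30, 44, 60, 71, 75]   # e.g. survey measurements over time
simulated = [10, 14, 20, 33, 47, 58, 69, 77]   # model output at the same time points

print(f"RMSE = {rmse(observed, simulated):.2f}")
# A validation claim is more convincing when such a comparison uses data that was
# NOT used to tune the parameters (the calibration/validation distinction drawn above).
```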

The Uses

Although these are covered in the preface to the bibliography in greater detail, such a sample has a number of scientific uses which I hope will form the basis for further research.

  • To identify (and justify) good and bad practice, thus promoting good practice.
  • To identify (and then perhaps fill) gaps in the set of technical tools needed to support validation (for example involving particular sorts of data).
  • To test the feasibility and value of general advice offered on validation to date and refine it in the face of practical challenges faced by analysis of real cases.
  • To allow new models to demonstrably outperform the levels of validation achieved by existing models (thus creating the possibility for progressive empirical research in ABM).
  • To support agreement about the effective use of the term validation and to distinguish it from related concepts (like verification) and potentially unhelpful (for example ambiguous or rhetorically loaded) uses.

The Plan

Because of the labour involved and the diversity of fields in which ABM have now been used over several decades, an effective bibliography of this kind cannot be the work of a single author (or even a team of authors). My plan is thus to solicit (fully credited) contributions and regularly release new versions of the bibliography – with new co-authors as appropriate. (This publishing model is intended to maintain the quality and suitability for citation of the resulting document relative to the anarchy that sometimes arises in genuine communal authorship!) All of the following contributions will be gratefully accepted for the next revision (on which I am already working myself in any event):

  • References to new surveys or literature reviews that categorise significant samples of ABM research by their relationship to data.
  • References for proposed new entries to the bibliography in as much detail as possible.
  • Proposals to delete incorrectly categorised entries. (There are a small number of cases where I have found it very difficult to establish exactly what the authors did in the name of validation, partly as a result of confusing or ambiguous terminology.)
  • Proposed revisions to incorrect or “unfair” descriptions of existing entries (ideally by the authors of those pieces).
  • Offers of collaboration for a proposed companion bibliography on calibration. Ultimately this will lead to a (likely very small) sample of calibrated and validated ABM (which are often surprisingly little cited given their importance to the credibility of the ABM “project” – see, for example, Chattoe-Brown 2018a, 2018b).

Acknowledgements

This article is part of “Towards Realistic Computational Models of Social Influence Dynamics”, a project funded through ESRC (ES/S015159/1) by ORA Round 5.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16. <http://jasss.soc.surrey.ac.uk/18/4/16.html> doi:10.18564/jasss.2952

Chattoe-Brown, Edmund (2018a) ‘Query: What is the Earliest Example of a Social Science Simulation (that is Nonetheless Arguably an ABM) and Shows Real and Simulated Data in the Same Figure or Table?’ Review of Artificial Societies and Social Simulation, 11 June. https://rofasss.org/2018/06/11/ecb/

Chattoe-Brown, Edmund (2018b) ‘A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974)’, Review of Artificial Societies and Social Simulation, 1 June. https://rofasss.org/2018/06/01/ecb/

Chattoe-Brown, Edmund (2019) ‘Agent Based Models’, in Atkinson, Paul, Delamont, Sara, Cernat, Alexandru, Sakshaug, Joseph W. and Williams, Richard A. (eds.) SAGE Research Methods Foundations. doi:10.4135/9781526421036836969


Chattoe-Brown, E. (2020) A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation. Review of Artificial Societies and Social Simulation, 12th June 2020. https://rofasss.org/2020/06/12/abm-validation-bib/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

No one can predict the future: More than a semantic dispute

By Carlos A. de Matos Fernandes and Marijn A. Keijzer

(A contribution to the: JASSS-Covid19-Thread)

Models are pivotal in battling the current COVID-19 crisis. In their call to action, Squazzoni et al. (2020) convincingly put forward how social simulation researchers could and should respond in the short run by posing three challenges for the community, among which is a COVID-19 prediction challenge. Although Squazzoni et al. (2020) stress the importance of transparent communication of model assumptions and conditions, we question the liberal use of the word ‘prediction’ for the outcomes of the broad arsenal of models used to mitigate the COVID-19 crisis by ours and other modelling communities. Four key arguments are provided that advocate using expectations derived from scenarios when explaining our models to a wider, possibly non-academic audience.

The current COVID-19 crisis necessitates that we implement life-changing policies that, to a large extent, build upon predictions from complex, quickly adapted, and sometimes poorly understood models. The examples of models spurring the news to produce catchphrase headlines are abundant (Imperial College, AceMod-Australian Census-based Epidemic Model, IndiaSIM, IHME, etc.). And even though most of these models will be useful to assess the comparative effectiveness of interventions in our aim to ‘flatten the curve’, the predictions that disseminate to news media are those of total cases or timing of the inflection point.

The current focus on predictive epidemiological and behavioural models brings back an important discussion about prediction in social systems. “[T]here is a lot of pressure for social scientists to predict” (Edmonds, Polhill & Hales, 2019), and we might add ‘especially nowadays’. But forecasting in human systems is often tricky (Hofman, Sharma & Watts, 2017). Approaches that take well-understood theories and simple mechanisms often fail to grasp the complexity of social systems, yet models that rely on complex supervised machine learning-like approaches may offer misleading levels of confidence (as was elegantly shown recently by Salganik et al., 2020). COVID-19 models appear to be no exception as a recent review concluded that “[…] their performance estimates are likely to be optimistic and misleading” (Wynants et al., 2020, p. 9). Squazzoni et al. describe these pitfalls too (2020: paragraph 3.3). In the crisis at hand, it may even be counter-productive to rely on complex models that combine well-understood mechanisms with many uncertain parameters (Elsenbroich & Badham, 2020).

Considering the level of confidence we can have about predictive models in general, we believe there is an issue with the way predictions are communicated by the community. Scientists often use ‘prediction’ to refer to some outcome of a (statistical) model where they ‘predict’ aspects of the data that are already known, but momentarily set aside. Edmonds et al. (2019: paragraph 2.4) state that “[b]y ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model”. Predictive accuracy, in this case, can then be computed later on, by comparing the prediction to the truth. Scientists know that when talking about predictions of their models, they don’t claim to generalize to situations outside of the narrow scope of their study sample or their artificial society. We are not predicting the future, and wouldn’t claim we could. However, this is wildly different from how ‘prediction’ is commonly understood: As an estimation of some unknown thing in the future. Now that our models quickly disseminate to the general public, we need to be careful with the way we talk about their outcomes.

Predictions in the COVID-19 crisis will remain imperfect. In the current virus outbreak, society cannot afford to rely on the falsification of models for interventions against empirical data. As the virus continues to spread rapidly, our only option is to rely on models as a basis for policy, ceteris paribus. And it is precisely here – at ‘ceteris paribus’ – where the term ‘prediction’ misses the mark. All things will not be equal tomorrow, the next day, or the day after that (Van Bavel et al. [2020] note numerous topics that affect managing the COVID-19 pandemic and its impact on society). Policies around the globe are constantly being tweaked, and people’s behaviour changes dramatically as a consequence (Google, 2020). Relying on predictions too much may give a false sense of security.

We propose to avoid using the word ‘prediction’ too much and talk about scenarios or expectations instead where possible. We identify four reasons why you should avoid talking about prediction right now:

  1. Not everyone is acquainted with noise and emergence. Computational Social Scientists generally understand the effects of noise in social systems (Squazzoni et al., 2020: paragraph 1.8). Small behavioural irregularities can be reinforced in complex systems fostering unexpected outcomes. Yet, scientists not acquainted with studying complex social systems may be unfamiliar with the principles we have internalized by now, and put over-confidence in the median outputs of volatile models that enter the scientific sphere as predictions.
  2. Predictions do not convey uncertainty. The general public is usually unacquainted with esoteric academic concepts. For instance, a flatten-the-curve scenario generally builds upon a mean or median approximation, oftentimes neglecting to include the variability across different scenarios. Still, there are numerous other outcomes, building on different parameter values. We fear that by stating a prediction to a non-specialist public, they will expect such a thing to occur for certain. If we forecast a sunny day, but there’s rain, people are upset. Talking about scenarios, expectations, and mechanisms may prevent confusion and opposition when the forecast does not occur (a minimal illustration of reporting scenario bands rather than a single prediction follows this list).
  3. It’s a model, not a reality. The previous argument feeds into the third notion: Be honest about what you model. A model is a model. Even the most richly calibrated model is a model. That is not to say that such models are not informative (we reiterate: models are not a shot in the dark). Still, richly calibrated models based on poor data may be more misleading than less calibrated models (Elsenbroich & Badham, 2020). Empirically calibrated models may provide more confidence at face value, but it lies in the nature of complex systems that small measurement errors in the input data may lead to big deviations in outputs. Models present a scenario for our theoretical reasoning with a given set of parameter values. We can update a model with empirical data to increase reliability but it remains a scenario about a future state given an (often expansive) set of assumptions (recently beautifully visualized by Koerth, Bronner, & Mithani, 2020).
  4. Stop predicting, start communicating. Communication is pivotal during a crisis. An abundance of research shows that communicating clearly and honestly is a best practice during a crisis, generally comforting the general public (e.g., Seeger, 2006). Squazzoni et al. (2020) call for transparent communication by stating that “[t]he limitations of models and the policy recommendations derived from them have to be openly communicated and transparently addressed”. We are united in our aim to avert the COVID-19 crisis but should be careful that overconfidence doesn’t erode society’s trust in science. Stating unequivocally that we hope – based on expectations – to avert a crisis by implementing some policy does not preclude altering our course of action when an updated scenario about the future may require us to do so. Modellers should communicate clearly to policy-makers and the general public that this is the role of computational models that are being updated daily.
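As flagged under point 2, the following is a minimal, purely illustrative sketch (a toy stochastic SIR model of our own, not any of the models discussed above, with arbitrary parameter values) of reporting scenario bands rather than a single predicted curve: many runs with uncertain parameters, summarised as percentile ranges.

```python
# Toy stochastic SIR with uncertain parameters; illustration only, not a COVID-19 model.
import numpy as np

def sir_run(beta, gamma, n=10_000, i0=10, days=150, rng=None):
    """One stochastic run; returns the number of infectious agents per day."""
    rng = rng or np.random.default_rng()
    s, i, r = n - i0, i0, 0
    infectious = []
    for _ in range(days):
        new_inf = rng.binomial(s, 1 - np.exp(-beta * i / n))
        new_rec = rng.binomial(i, 1 - np.exp(-gamma))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infectious.append(i)
    return np.array(infectious)

rng = np.random.default_rng(42)
runs = np.array([sir_run(beta=rng.uniform(0.25, 0.45),   # uncertain transmission rate
                         gamma=rng.uniform(0.08, 0.12),  # uncertain recovery rate
                         rng=rng)
                 for _ in range(500)])

# Communicate a range of scenarios rather than a single "prediction".
low, median, high = np.percentile(runs, [5, 50, 95], axis=0)  # would feed a fan chart
peak_day = runs.argmax(axis=1)
print(f"Peak day: median {np.median(peak_day):.0f}, "
      f"90% interval {np.percentile(peak_day, 5):.0f}-{np.percentile(peak_day, 95):.0f}")
```

Reporting the 90% interval of peak timing (and plotting the low/median/high bands) makes the variability across scenarios part of the message, instead of a single curve that invites over-confidence.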

Squazzoni et al. (2020) set out the agenda for our community in the coming months and it is an important one. Let’s hope that the expectations from the scenarios in our well-informed models will not fall on deaf ears.

References

Edmonds, B., Polhill, G., & Hales, D. (2019). Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H., & Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html> doi: 10.18564/jasss.3993

Elsenbroich, C., & Badham, J. (2020). Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/

Google. (2020). COVID-19 Mobility Reports. https://www.google.com/covid19/mobility/ (Accessed 15th April 2020)

Hofman, J. M., Sharma, A., & Watts, D. J. (2017). Prediction and Explanation in Social Systems. Science, 355, 486–488. doi: 10.1126/science.aal3856

Koerth, M., Bronner, L., & Mithani, J. (2020, March 31). Why It’s So Freaking Hard To Make A Good COVID-19 Model. FiveThirtyEight. https://fivethirtyeight.com/

Salganik, M. J. et al. (2020). Measuring the Predictability of Life Outcomes with a Scientific Mass Collaboration. PNAS. 201915006. doi: 10.1073/pnas.1915006117

Seeger, M. W. (2006). Best Practices in Crisis Communication: An Expert Panel Process, Journal of Applied Communication Research, 34(3), 232-244.  doi: 10.1080/00909880600769944

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Van Bavel, J. J. et al. (2020). Using social and behavioural science to support COVID-19 pandemic response. PsyArXiv. https://doi.org/10.31234/osf.io/y38m9

Wynants. L., et al. (2020). Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal. BMJ, 369, m1328. doi: 10.1136/bmj.m1328


de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Focussing on our Strengths

By Corinna Elsenbroich and Jennifer Badham

(A contribution to the: JASSS-Covid19-Thread)

Understanding a situation is the precondition for making good decisions. In the extraordinary current situation of a global pandemic, the lack of consensus about a good decision path is evident in the variety of government measures in different countries, analyses of decisions made and debates on how the future will look. What is also clear is how little we understand the situation and the impact of policy choices. We are faced with the complexity of social systems, our ability to only ever partially understand them and the political pressure to make decisions on partial information.

The JASSS call to arms (Squazzoni et al. 2020) points out the necessity for the ABM modelling community to produce relevant models for this kind of emergency situation. Whilst we wholly agree with the sentiment that ABM can contribute to the debate and decision making, we would like to also point out some of the potential pitfalls inherent in a false application and interpretation of ABM.

  1. Small change, big difference: Given the complexity of the real world, there will be aspects that are better and some that are less well understood. Trying to produce a very large model encompassing several different aspects might be counter-productive as we will mix together well understood aspects with highly hypothetical knowledge. It might be better to have different, smaller models – on the epidemic, the economy, human behaviour etc. each of which can be taken with its own level of validation and veracity and be developed by modellers with subject matter understanding, theoretical knowledge and familiarity with relevant data.
  2. Carving up complex systems: If separate models are developed, then we are necessarily making decisions about the boundaries of our models. For a complex system any carving up can separate interactions that are important, for example the way in which fear of the epidemic can drive protective behaviour thereby reducing contacts and limiting the spread. While it is tempting to think that a “bigger model”, a more encompassing one, is necessarily a better carving up of the system because it eliminates these boundaries, in fact it simply moves them inside the model and hides them.
  3. Policy decisions are moral decisions: The decision of what is the right course to take is a decision for the policy maker with all the competing interests and interdependencies of different aspects of the situation in mind. Scientists are there to provide the best information for the understanding of a situation, and models can be used to understand consequences of different courses of action and the uncertainties associated with that action. Models can be used to inform policy decisions but they must not obfuscate that it is a moral choice that has to be made.
  4. Delaying a decision is making a decision to do nothing: Like any other policy option, a decision to maintain the status quo while gathering further information has its own consequences. The Call to Action (paragraph 1.6) refers to public pressure for immediate responses, but this underplays the pressure arising from other sources. It is important to recognise the logical fallacy: “We must do something. This is something. Therefore we must do this.” However, if there are options available that are clearly better than doing nothing, then it is equally illogical to do nothing.

Instead of trying to compete with existing epidemiological models, ABM could focus on the things it is really good at:

  1. Understanding uncertainty in complex systems resulting from heterogeneity, social influence, and feedback. For the case at hand this means not building another model of the epidemic spread – there are excellent SEIR models doing that – but exploring how the effect of heterogeneity in the infected population (such as in contact patterns or personal behaviour in response to infection) can influence the spread (a minimal sketch of such a heterogeneity experiment follows this list). Other possibilities include social effects such as how fear might spread and influence behaviours of panic buying or compliance with the lockdown.
  2. Build models for the pieces that are missing and couple these to the pieces that exist, thereby enriching the debate about the consequences of policy options by making those connections clear.
  3. Visualise and communicate difficult to understand and counterintuitive developments. Right now people are struggling to understand exponential growth, the dynamics of social distancing, the consequences of an overwhelmed health system, and the delays between actions and their consequences. It is well established that such fundamentals of systems thinking are difficult (Booth Sweeney and Sterman 2000, https://doi.org/10.1002/sdr.198). Models such as the simple models in the Washington Post (Stevens 2020) or less abstract ones like the routine daily activity one from Vermeulen et al. (2020) do a wonderful job at this, allowing people to understand how their individual behaviour will contribute to the spread or containment of a pandemic.
  4. Highlight missing data and inform future collection. This unfolding pandemic is being assessed constantly using highly compromised data, i.e. reported infection rates in countries are largely determined by how much testing is done. The most comparable measure might be death rates, but even there we have reporting delays and omissions. Trying to build models is one way to identify what needs to be known to properly evaluate the consequences of policy options.
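As mentioned under point 1, the sketch below is a deliberately small, purely illustrative agent-based experiment (ours, with arbitrary parameter values) showing how heterogeneity in contact rates can change an epidemic outcome even when the average contact rate is held fixed.

```python
# Toy agent-based SIR; illustration of heterogeneity effects only, not a COVID-19 model.
import numpy as np

def attack_rate(contact_rates, p_transmit=0.03, p_recover=0.1, days=120, i0=10, rng=None):
    """Fraction ever infected, with contact partners chosen proportionally to
    their own contact rate (proportionate mixing)."""
    rng = rng or np.random.default_rng()
    n = len(contact_rates)
    mixing = contact_rates / contact_rates.sum()
    state = np.zeros(n, dtype=int)                      # 0 = S, 1 = I, 2 = R
    state[rng.choice(n, size=i0, replace=False)] = 1
    for _ in range(days):
        infectious = np.flatnonzero(state == 1)
        for agent in infectious:
            partners = rng.choice(n, size=rng.poisson(contact_rates[agent]), p=mixing)
            exposed = partners[state[partners] == 0]
            state[exposed[rng.random(len(exposed)) < p_transmit]] = 1
        state[infectious[rng.random(len(infectious)) < p_recover]] = 2
    return float(np.mean(state != 0))

rng = np.random.default_rng(1)
n = 2_000
homogeneous   = np.full(n, 10.0)                           # everyone makes ~10 contacts/day
heterogeneous = rng.gamma(shape=1.0, scale=10.0, size=n)   # same mean, much higher variance

print("homogeneous attack rate  :", attack_rate(homogeneous, rng=rng))
print("heterogeneous attack rate:", attack_rate(heterogeneous, rng=rng))
# Typically the heterogeneous population shows a smaller final attack rate because
# high-contact agents are infected (and removed) early, which is one of the effects
# a purely homogeneous compartmental model cannot show.
```

The point is not the particular numbers but that an agent-based formulation lets the heterogeneity itself become the experimental variable, complementing rather than competing with the existing SEIR models.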

The problem we are faced with in this pandemic is one of complexity, not one of ABM, and we must ensure we are honouring the complexity rather than just paying lip service to it. We agree that model transparency, open data collection and interdisciplinary research are important, and want to ensure that all scientific knowledge is used in the best possible way to ensure a positive outcome of this global crisis.

But it is also important to consider the comparative advantage of agent-based modellers. Yes, we have considerable commitment to, and expertise in, open code and data. But so do many other disciplines. Health information is routinely collected in national surveys and administrative datasets, and governments have a great deal of established expertise in health data management. Of course, our individual skills in coding models, data visualisation, and relevant theoretical knowledge can be offered to individual projects as required. But we believe our institutional response should focus on activities where other disciplines are less well equipped, applying systems thinking to understand and communicate the consequences of uncertainty and complexity.

References

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation 23(2), 10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298

Booth Sweeney, L., & Sterman, J. D. (2000). Bathtub dynamics: initial results of a systems thinking inventory. System Dynamics Review: The Journal of the System Dynamics Society, 16(4), 249-286.

Stevens, H. (2020) Why outbreaks like coronavirus spread exponentially, and how to “flatten the curve”. Washington Post, 14th of March 2020. (accessed 11th April 2020) https://www.washingtonpost.com/graphics/2020/world/corona-simulator/

Vermeulen, B.,  Pyka, A. and Müller, M. (2020) An agent-based policy laboratory for COVID-19 containment strategies, (accessed 11th April 2020) https://inno.uni-hohenheim.de/corona-modell


Elsenbroich, C. and Badham, J. (2020) Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Get out of your silos and work together!

By Peer-Olaf Siebers and Sudhir Venkatesan

(A contribution to the: JASSS-Covid19-Thread)

The JASSS position paper ‘Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action’ (Squazzoni et al 2020) calls on the scientific community to improve the transparency, access, and rigour of their models. A topic that we think is equally important and should be part of this list is the quest for more “interdisciplinarity”: scientific communities working together to tackle the difficult job of understanding the complex situation we are currently in and being able to give advice.

The modelling/simulation community in the UK (and more broadly) tends to work in silos. The two big communities that we have been exposed to are the epidemiological modelling community and the social simulation community. They do not usually collaborate with each other despite working on very similar problems and using similar methods (e.g. agent-based modelling). They publish in different journals, use different software, attend different conferences, and even sometimes use different terminology to refer to the same concepts.

The UK pandemic response strategy (Gov.UK 2020) is guided by advice from the Scientific Advisory Group for Emergencies (SAGE), which in turn comprises three independent expert groups: SPI-M (epidemic modellers), SPI-B (experts in behaviour change from psychology, anthropology and history), and NERVTAG (clinicians, epidemiologists, virologists and other experts). Of these, modelling from SPI-M member institutions has played an important role in informing the UK government’s response to the ongoing pandemic (e.g. Ferguson et al 2020). Current members of the SPI-M belong to what could be considered the ‘epidemic modelling community’. Their models tend to be heavily data-dependent, which is justifiable given that most of their modelling focuses on viral transmission parameters. However, this emphasis on empirical data can sometimes lead them to not model behaviour change, or to model it in a highly stylised fashion, although more examples of epidemic-behaviour models appear in recent epidemiological literature (e.g. Verelst et al 2016; Durham et al 2012; van Boven et al 2008; Venkatesan et al 2019). Yet, of the modelling work informing the current response to the ongoing pandemic, computational models of behaviour change are prominently missing. This, from what we have seen, is where the ‘social simulation’ community can really contribute their expertise and modelling methodologies in a very valuable way. A good resource for epidemiologists to find out more about the wide spectrum of modelling ideas is the Social Simulation Conference Proceedings (e.g. SSC2019 2019). But unfortunately, the public health community, including policymakers, are either unaware of these modelling ideas or are unsure of how these are relevant to them.

As pointed out in a recent article, one important concern with how behaviour change has possibly been modelled in the SPI-M COVID-19 models is the assumption that changes in contact rates resulting from a lockdown in the UK and the USA will mimic those obtained from surveys performed in China, which is unlikely to be valid given the large political and cultural differences between these societies (Adam 2020). For the immediate COVID-19 response models, perhaps requiring cross-disciplinary validation for all models that feed into policy may be a valuable step towards more credible models.

Effective collaboration between academic communities relies on there being a degree of familiarity, and trust, with each other’s work, and much of this will need to be built up during inter-pandemic periods (i.e. “peace time”). In the long term, publishing and presenting in each other’s journals and conferences (i.e. giving other academic communities the opportunity to peer-review a piece of modelling work) could help foster a more collaborative environment, ensuring that we are in a much better position to leverage all available expertise during a future emergency. We should aim to take the best from across modelling communities and work together to come up with hybrid modelling solutions that provide insight by delivering statistics as well as narratives (Moss 2020). Working in silos is both unhelpful and inefficient.

References

Adam D (2020) Special report: The simulations driving the world’s response to COVID-19. How epidemiologists rushed to model the coronavirus pandemic. Nature – News Feature. https://www.nature.com/articles/d41586-020-01003-6 [last accessed 07/04/2020]

Durham DP, Casman EA (2012) Incorporating individual health-protective decisions into disease transmission models: A mathematical framework. Journal of The Royal Society Interface. 9(68), 562-570

Ferguson N, Laydon D, Nedjati Gilani G, Imai N, Ainslie K, Baguelin M, Bhatia S, Boonyasiri A, Cucunuba Perez Zu, Cuomo-Dannenburg G, Dighe A (2020) Report 9: Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf [last accessed 07/04/2020]

Gov.UK (2020) Scientific Advisory Group for Emergencies (SAGE): Coronavirus response. https://www.gov.uk/government/groups/scientific-advisory-group-for-emergencies-sage-coronavirus-covid-19-response [last accessed 07/04/2020]

Moss S (2020) “SIMSOC Discussion: How can disease models be made useful? “, Posted by Scott Moss, 22 March 2020 10:26 [last accessed 07/04/2020]

Squazzoni F, Polhill JG, Edmonds B, Ahrweiler P, Antosz P, Scholz G, Borit M, Verhagen H, Giardini F, Gilbert N (2020) Computational models that matter during a global pandemic outbreak: A call to action, Journal of Artificial Societies and Social Simulation, 23 (2) 10

SSC2019 (2019) Social simulation conference programme 2019. https://ssc2019.uni-mainz.de/files/2019/09/ssc19_final.pdf [last accessed 07/04/2020]

van Boven M, Klinkenberg D, Pen I, Weissing FJ, Heesterbeek H (2008) Self-interest versus group-interest in antiviral control. PLoS One. 3(2)

Venkatesan S, Nguyen-Van-Tam JS, Siebers PO (2019) A novel framework for evaluating the impact of individual decision-making on public health outcomes and its potential application to study antiviral treatment collection during an influenza pandemic. PLoS One. 14(10)

Verelst F, Willem L, Beutels P (2016) Behavioural change models for infectious disease transmission: A systematic review (2010–2015). Journal of The Royal Society Interface. 13(125)


Siebers, P-O. and Venkatesan, S. (2020) Get out of your silos and work together. Review of Artificial Societies and Social Simulation, 8th April 2020. https://rofasss.org/2020/0408/get-out-of-your-silos


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Call for responses to the JASSS Covid19 position paper

In the recent position paper in JASSS, entitled “Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action” the authors suggest some collective actions we, as social simulators, could take.

We are asking for submissions that present serious comments on this paper. This could include:

  • To discuss other points of view
  • To talk about possible modelling approaches
  • To review simulation modelling of covid19 that includes social aspects
  • To point out some of the difficulties of interpretation and the interface with the policy/political world
  • To discuss or suggest other possible collective actions that could be taken.

All such contributions will form the: JASSS-Covid19-Thread


Predicting Social Systems – a Challenge

By Bruce Edmonds, Gary Polhill and David Hales

(Part of the Prediction-Thread)

There is a lot of pressure on social scientists to predict. Not only is an ability to predict implicit in all requests to assess or optimise policy options before they are tried, but prediction is also the “gold standard” of science. However, there is a debate among modellers of complex social systems about whether this is possible to any meaningful extent. In this context, the aim of this paper is to issue the following challenge:

Are there any documented examples of models that predict useful aspects of complex social systems?

To do this the paper will:

  1. define prediction in a way that corresponds to what a wider audience might expect of it
  2. give some illustrative examples of prediction and non-prediction
  3. request examples where the successful prediction of social systems is claimed
  4. and outline the aspects on which these examples will be analysed

About Prediction

We start by defining prediction, taken from (Edmonds et al. 2019). This is a pragmatic definition designed to encapsulate common sense usage – what a wider public (e.g. policy makers or grant givers) might reasonably expect from “a prediction”.

By ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model.

Let us clarify the language in this.

  • It has to be reliable. That is, one can rely upon a prediction at the time it is made – a model that predicts erratically and only occasionally gets it right is no help, since one does not know whether to believe any particular prediction. This usually means that (a) it has made successful predictions in several independent cases and (b) the conditions under which it works are (roughly) known.
  • What is predicted has to be unknown at the time of prediction. That is, the prediction has to be made before it is verified. Predicting known data (as when a model is checked on out-of-sample data) is not sufficient [1]. Nor is the practice of looking for phenomena that are consistent with the results of a model after they have been generated, since this ignores all the phenomena that are not consistent with the model.
  • What is being predicted is well defined. That is, how to use the model to make a prediction about observed data is clear. An abstract model that is merely suggestive – one that appears to predict phenomena, but only in a vague and undefined manner where one has to invent the mapping between model and data to make this work – may be useful as a way of thinking about phenomena, but this is different from empirical prediction.
  • Which aspects of the data are being predicted is left open. As Watts (2014) points out, prediction is not restricted to point numerical predictions of some measurable value but could concern a wider pattern. Examples include: a probabilistic prediction, a range of values, a negative prediction (this will not happen), or a second-order characteristic (such as the shape of a distribution or a correlation between variables). What is important is that (a) this is a useful characteristic to predict and (b) it can be checked by an independent actor (a minimal sketch of how such predictions might be recorded and checked follows this list). Thus, for example, when predicting a value, how accurate that prediction needs to be depends on its use.
  • The prediction has to use the model in an essential manner. Claiming to predict something that is obviously inevitable, without using the model, is insufficient – the model has to be what distinguishes which of the possible outcomes is being predicted at the time.
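
A minimal sketch of what the points above imply in practice (our own illustration, not part of the definition in Edmonds et al. 2019): predictions are recorded before their outcomes are known, scored only once the outcomes arrive, and reliability is judged over several independent cases. All names here (PredictionRegistry, register, resolve, brier_score) are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Prediction:
    """A well-defined probabilistic prediction, recorded before its outcome is known."""
    claim: str                    # what is being predicted, stated unambiguously
    probability: float            # forecast probability that the claim turns out true
    made_at: datetime             # timestamp showing the prediction preceded verification
    outcome: bool | None = None   # filled in later, ideally by an independent checker

class PredictionRegistry:
    def __init__(self):
        self.predictions: list[Prediction] = []

    def register(self, claim: str, probability: float) -> Prediction:
        p = Prediction(claim, probability, made_at=datetime.now(timezone.utc))
        self.predictions.append(p)
        return p

    def resolve(self, prediction: Prediction, outcome: bool) -> None:
        # Verification happens strictly after registration.
        prediction.outcome = outcome

    def brier_score(self) -> float:
        """Mean squared error of resolved probabilistic predictions (lower is better)."""
        resolved = [p for p in self.predictions if p.outcome is not None]
        return sum((p.probability - float(p.outcome)) ** 2 for p in resolved) / len(resolved)

# Reliability can only be judged over several independent, resolved cases.
registry = PredictionRegistry()
p1 = registry.register("Candidate A wins region X", 0.8)
p2 = registry.register("Turnout exceeds 60%", 0.3)
registry.resolve(p1, True)
registry.resolve(p2, False)
print(f"Brier score over resolved predictions: {registry.brier_score():.3f}")
```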

Thus, prediction is different from other kinds of scientific/empirical uses, such as description and explanation (Edmonds et al. 2019). Some modellers use “prediction” to mean any output from a model, regardless of its relationship to any observation of what is being modelled [2]. Others use “prediction” for any empirical fitting of data, regardless of whether that data is known beforehand. However, here we wish to be clearer and avoid any “post-truth” softening of the meaning of the word, for two reasons: (a) distinguishing different kinds of model use is crucial in matters of model checking or validation, and (b) these “softer” kinds of empirical purpose will simply confuse the wider public when we talk to them about “prediction”. One suspects that modellers have accepted these other meanings because it then allows them to claim they can predict (Edmonds 2017).

Some Examples

Nate Silver and his team aim to predict future social phenomena, such as the results of elections and the outcomes of sports competitions. He correctly predicted the outcome in all 50 states in Obama’s election before it happened. This is a data-hungry approach, which involves the long-term development of simulations that carefully establish what can be inferred from the available data, with repeated trial and error. The forecasts are probabilistic and repeated many times. As well as making predictions, his unit tries to establish the level of uncertainty in those predictions – being honest about the probability of those predictions coming about, given the likely levels of error and bias in the data. These models are not agent-based in nature but tend to be mostly statistical, so it is debatable whether they treat their subject as a complex system – they certainly do not use any theory from complexity science. His book (Silver 2012) describes his approach. Post hoc analysis of predictions – explaining why they worked or not – is kept distinct from the predictive models themselves: this analysis may inform changes to the predictive model but is not then incorporated into the model. The analysis is thus kept independent of the predictive model so it can act as an effective check.
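A toy sketch of the general flavour of this kind of probabilistic forecasting (our own illustration, not Silver’s actual methodology): polls are aggregated, an error term representing sampling noise and possible bias is assumed, and the forecast is expressed as a probability estimated by repeated simulation. The function name and error parameter are hypothetical.

```python
import random

def forecast_win_probability(polls, poll_error_sd=0.03, n_sims=10_000, seed=1):
    """Monte Carlo aggregation of polled vote shares for candidate A.
    polls: observed vote shares for A, e.g. [0.52, 0.49, 0.51].
    poll_error_sd: assumed standard deviation of polling error (noise plus possible bias).
    Returns the simulated probability that A wins (share > 0.5)."""
    random.seed(seed)
    mean_share = sum(polls) / len(polls)
    wins = 0
    for _ in range(n_sims):
        # Each run perturbs the polling average by the assumed error term.
        simulated_share = random.gauss(mean_share, poll_error_sd)
        if simulated_share > 0.5:
            wins += 1
    return wins / n_sims

# A probabilistic, checkable forecast rather than a bare point prediction.
print(forecast_win_probability([0.52, 0.49, 0.51]))  # roughly 0.6 under these assumptions
```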

Many models in economics and ecology claim to “predict” but, on inspection, this only means there is a fit to some empirical data. For example, (Meese & Rogoff 1983) looked at 40 econometric models whose authors claimed they were predicting some time-series. However, 37 out of the 40 models failed completely when tested on newly available data from the same time series that they claimed to predict. Clearly, although presented as predictive models, they could not predict unknown data. Although we do not know for sure, presumably these models had been (explicitly or implicitly) fitted to the out-of-sample data, because the out-of-sample data was already known to the modeller. That is, if a model failed to fit the out-of-sample data when it was tested, it was adjusted until it did work, or alternatively, only those models that fitted the out-of-sample data were published.
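The suspected mechanism can be caricatured in a few lines (a deliberately extreme sketch of over-fitting to already-known data, not a reconstruction of the actual econometric models): a sufficiently flexible model can always be adjusted until it fits data the modeller has already seen, yet its error on data that only becomes available later is far worse.

```python
import numpy as np

rng = np.random.default_rng(0)

# An exchange-rate-like series that is, in truth, unpredictable white noise.
t_known = np.linspace(0.0, 1.0, 20)     # period for which data is already known
t_future = np.linspace(1.05, 2.0, 20)   # period that only becomes observable later
y_known = rng.normal(size=20)
y_future = rng.normal(size=20)

# "Adjust the model until it fits the known data": a very flexible model
# (here a high-degree polynomial in time) can always be made to fit noise.
coefs = np.polyfit(t_known, y_known, deg=12)

def mse(t, y):
    # Mean squared error of the fitted polynomial against a dataset.
    return float(np.mean((np.polyval(coefs, t) - y) ** 2))

print(f"fit to the already-known data:       {mse(t_known, y_known):.3f}")    # flatteringly good
print(f"error on genuinely unknown new data: {mse(t_future, y_future):.3g}")  # far larger
```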

The Challenge

The challenge is envisioned as happening like this.

  1. We publicise this paper, requesting that people send us examples of prediction or near-prediction of complex social systems, with pointers to the appropriate documentation.
  2. We collect these and analyse them according to the characteristics and questions described below.
  3. We will post some interim results in January 2020 [3], in order to prompt more examples and to stimulate discussion. The final deadline for examples is the end of March 2020.
  4. We will publish the list of all the examples sent to us on the web, and present our summary and conclusions at Social Simulation 2020 in Milan and have a discussion there about the nature and prospects for the prediction of complex social systems. Anyone who contributed an example will be invited to be a co-author if they wish to be so-named.

How suggestions will be judged

For each suggestion, a number of answers will be sought – namely to the following questions:

  • What are the papers or documents that describe the model?
  • Is there an explicit claim that the model can predict (as opposed to a claim that it merely might predict in the future)?
  • What kind of characteristics are being predicted (number, probabilistic, range…)?
  • Is there evidence of a prediction being made before the prediction was verified?
  • Is there evidence of the model being used for a series of independent predictions?
  • Were any of the predictions verified by a team that is independent of the one that made the prediction?
  • Is there evidence of the same team or similar models making failed predictions?
  • To what extent did the model need extensive calibration/adjustment before the prediction?
  • What role does theory play (if any) in the model?
  • Are the conditions under which the predictive ability is claimed described?

Of course, negative answers to any of the above about a particular model do not mean that the model cannot predict. What we are assessing is the evidence that a model can predict something meaningful about complex social systems. (Silver 2012) describes the method by which his team attempts prediction, but this method might be different from that described in most theory-based academic papers.

Possible Outcomes

This exercise might shed some light on some interesting questions, such as:

  • What kind of prediction of complex social systems has been attempted?
  • Are there any examples where the reliable prediction of complex social systems has been achieved?
  • Are there certain kinds of social phenomena which seem to be more amenable to prediction than others?
  • Does aiming to predict with a model entail any difference in method compared with projects that have other aims?
  • Are there any commonalities among the projects that achieve reliable prediction?
  • Is there anything we could (collectively) do that would encourage or document good prediction?

It might well be that whether prediction is achievable depends on exactly what is meant by the word.

Acknowledgements

This paper resulted from a “lively discussion” after Gary’s (Polhill et al. 2019) talk about prediction at the Social Simulation conference in Mainz. Many thanks to all those who joined in this. Of course, prior to this we have had many discussions about prediction. These have included Gary’s previous attempt at a prediction competition (Polhill 2018) and Scott Moss’s arguments about prediction in economics (which have many parallels with the debate here).

Notes

[1] This is sufficient for other empirical purposes, such as explanation (Edmonds et al. 2019)

[2] Confusingly, they sometimes use the word “forecasting” for what we mean by prediction here.

[3] Assuming we have any submitted examples to talk about

References

Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidsson, P. & Verhagen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1 (see also http://cfpm.org/discussionpapers/236)

Edmonds, B. (2017) The post-truth drift in social simulation. Social Simulation Conference (SSC2017), Dublin, Ireland. (http://cfpm.org/discussionpapers/195)

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H. & Squazzoni, F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html

Grimm V, Revilla E, Berger U, Jeltsch F, Mooij WM, Railsback SF, Thulke H-H, Weiner J, Wiegand T, DeAngelis DL (2005) Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science 310: 987-991.

Meese, R.A. & Rogoff, K. (1983) Empirical Exchange Rate models of the Seventies – do they fit out of sample? Journal of International Economics, 14:3-24.

Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/

Polhill, G., Hare, H., Anzola, D., Bauermann, T., French, T., Post, H. and Salt, D. (2019) Using ABMs for prediction: Two thought experiments and a workshop. Social Simulation 2019, Mainz.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Thorngate, W. & Edmonds, B. (2013) Measuring simulation-observation fit: An introduction to ordinal pattern analysis. Journal of Artificial Societies and Social Simulation, 16(2):14. http://jasss.soc.surrey.ac.uk/16/2/14.html

Watts, D. J. (2014). Common Sense and Sociological Explanations. American Journal of Sociology, 120(2), 313-351.


Edmonds, B., Polhill, G and Hales, D. (2019) Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

What are the best journals for publishing re-validations of existing models?

By Annamaria Berea

A few years ago I worked on an ABM that I eventually published in a book. Recently, I have conducted new experiments with the same model, re-analyzed the data, and used a different dataset for validation of the model. Where can I publish this new work on an older model? I submitted it to a special issue of a journal, but it was rejected because “the model was not original”. While the model is not original, the new data analysis and validation are, and I think this is even more important given the current discussions about the replication crisis in science.


Berea, A. (2019) What are the best journals or publishers for reports of re-validations of existing models? Review of Artificial Societies and Social Simulation, 31st October 2019. https://rofasss.org/2019/10/30/best-journal/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Some Philosophical Viewpoints on Social Simulation

By Bruce Edmonds

How one thinks about knowledge can have a significant impact on how one develops models as well as how one might judge a good model.

  • Pragmatism. Under this view a simulation is a tool for a particular purpose. Different purposes will imply different tests for a good model. What is useful for one purpose might well not be good for another – different kinds of models and modelling processes might be suited to each purpose. A simulation whose purpose is to explore the theoretical implications of some assumptions might well be very different from one aiming to explain some observed data. An example of this approach is (Edmonds et al. 2019).
  • Social Constructivism. Here knowledge about social phenomena (including simulation models) is collectively constructed. There is no other kind of knowledge than this. Each simulation is a way of thinking about social reality and plays a part in constructing it. What counts as a suitable construction may vary over time and between cultures, etc. What a group of people construct is not necessarily limited to simulations that are related to empirical data. (Ahrweiler & Gilbert 2005) seem to take this view, but it is more explicit in some of the participatory modelling work, where the aim is to construct a simulation that is acceptable to a group of people, e.g. (Etienne 2014).
  • Relativism. There are no bad models, only different ways of mediating between your thought and reality (Morgan 1999). If you work hard on developing your model, you do not get a better model, only a different one. This might be a consequence of holding to an Epistemological Constructivist position.
  • Descriptive Realism. A simulation is a picture of some aspect of reality (albeit at a much lower ‘resolution’ and imperfectly). If one obtains a faithful representation of some aspect of reality as a model, one can use it for many different purposes. This could imply very complicated models (depending on what one observes and decides is relevant), which might themselves be difficult to understand. I suspect that many people have this in mind as they develop models, but few explicitly take this approach. Maybe an example is (Fieldhouse et al. 2016).
  • Classic Positivism. Here, the empirical fit and the analytic understanding of the simulation are all that matter, nothing else. Models should be tested against data and discarded if inadequate (or they compete and one is currently ahead empirically). They should also be simple enough that they can be thoroughly understood. There is no obligation to be descriptively realistic. Many physics approaches to social phenomena follow this path (e.g. Helbing 2010; Galam 2012).

Of course, few authors make their philosophical position explicit – usually one has to infer it from their text and modelling style.

References

Ahrweiler, P. and Gilbert, N. (2005). Caffè Nero: the Evaluation of Social Simulation. Journal of Artificial Societies and Social Simulation 8(4):14. http://jasss.soc.surrey.ac.uk/8/4/14.html

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H. and Squazzoni, F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html

Etienne, M. (ed.) (2014) Companion Modelling: A Participatory Approach to Support Sustainable Development. Springer

Fieldhouse, E., Lessard-Phillips, L. and Edmonds, B. (2016) Cascade or echo chamber? A complex agent-based simulation of voter turnout. Party Politics. 22(2):241-256. DOI:10.1177/1354068815605671

Galam, S. (2012) Sociophysics: A Physicist’s modeling of psycho-political phenomena. Springer.

Helbing, D. (2010). Quantitative sociodynamics: stochastic methods and models of social interaction processes. Springer.

Morgan, M. S., Morrison, M., & Skinner, Q. (Eds.). (1999). Models as mediators: Perspectives on natural and social science (Vol. 52). Cambridge University Press.


Edmonds, B. (2019) Some Philosophical Viewpoints on Social Simulation. Review of Artificial Societies and Social Simulation, 2nd July 2019. https://rofasss.org/2019/07/02/phil-view/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)