
The Poverty of Suggestivism – the dangers of “suggests that” modelling

By Bruce Edmonds

Vagueness and refutation

A model[1] is basically composed of two parts (Zeigler 1976, Wartofsky 1979):

  1. A set of entities (such as mathematical equations, logical rules, computer code etc.) which can be used to make some inferences as to the consequences of that set (usually in conjunction with some data and parameter values)
  2. A mapping from this set to what it aims to represent – what the bits mean

Whilst a lot of attention has been paid to the internal rigour of the set of entities and the inferences made from them (1), the mapping to what they represent (2) has often been left implicit or incompletely described – sometimes only indicated by the labels given to its parts. The result is a model that relates only vaguely to its target, suggesting its properties analogically. There is no well-defined way in which the model is to be applied to anything observed; instead, a new map is invented each time it is used to think about a particular case. I call this way of modelling “Suggestivism”, because the model merely “suggests” things about what is being modelled.
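
To make the two-part definition concrete, the following minimal sketch (my own illustration in R, not something from the original article) separates the formal entities (1) from an explicit statement of the mapping (2); in suggestivist modelling it is the second part that is typically missing or left to the labels.

  ## Part (1): the formal entities – a toy update rule for an abstract quantity
  step_model <- function(state, growth_rate) {
    state * (1 + growth_rate)
  }

  ## Part (2): an explicit mapping from model terms to what they represent,
  ## including how each observable would be measured. Without something like
  ## this, "state" and "growth_rate" are just suggestive labels.
  mapping <- list(
    state       = "number of adopters of some innovation in region X (annual survey count)",
    growth_rate = "annual adoption rate estimated from the survey waves",
    time_step   = "one calendar year"
  )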

This is partly a recapitulation of Popper’s critique of vague theories in his book “The Poverty of Historicism” (1957). He characterised such theories as “irrefutable”, because whatever the facts, these theories could be made to fit them. Irrefutability is an indicator of a lack of precise mapping to reality – such vagueness makes refutation very hard. However, it is only an indicator; there may be reasons other than vagueness why a theory cannot be tested – it is the disconnection from well-defined empirical reference that is the issue here.

Some might go as far as suggesting that any model or theory that is not refutable is “unscientific”, but this goes too far, implying a very restricted definition of what ‘science’ is. We need analogies to think about what we are doing and to gain insight into what we are studying, e.g. (Hartmann 1997) – for humans they are unavoidable, ‘baked’ into the way language works (Lakoff 1987). A model might make a set of ideas clear and help map out the consequences of a set of assumptions/structures/processes. Many of these suggestivist models relate to a set of ideas and it is the ideas that relate to what is observed (albeit informally) (Edmonds 2001). However, such models do not capture anything reliable about what they refer to, and in that sense are not part of the set of established statements and theories that is at the core of science (Arnold 2014).

The dangers of suggestivist modelling

As above, there are valid uses of abstract or theoretical modelling where this is explicitly acknowledged and where no conclusions about observed phenomena are made. So what are the dangers of suggestivist modelling – why am I making such a fuss about it?

Firstly, people often seem to confuse a model used as an analogy – a way of thinking about stuff – with a model that tells us reliably about what we are studying. Thus they give undue weight to the analyses of abstract models that are, in fact, just thought experiments. Making models is a very intimate way of theorising – one spends an extended period of time interacting with one’s model: developing, checking, analysing etc. The result is a particularly strong version of “Kuhnian Spectacles” (Kuhn 1962), causing us to see the world through our model for weeks afterwards. Under this strong influence it is natural to confuse what we can reliably infer about the world with how we are currently perceiving/thinking about it. Good scientists should then pause and wait for this effect to wear off so that they can effectively critique what they have done, its limitations and its implications. However, in the rush to get their work out, modellers often do not do this, resulting in a sloppy set of suggestive interpretations of their modelling.

Secondly, empirical modelling is hard. It is far easier (and, frankly, more fun) to play with non-empirical models. A scientific culture that treats suggestivist modelling as substantial progress and significantly rewards the modellers who do it will effectively divert a lot of modelling effort in this direction. Chattoe-Brown (2018) presented evidence of this in his survey of opinion dynamics models – abstract, suggestivist modelling received far more reward (in terms of citations) than work that tried to relate its model to empirical data in a direct manner. Abstract modelling has a role in science, but if it is easier and more rewarding then the field will become unbalanced. It may give the impression of progress but not deliver on that impression. In a more mature science, researchers working on measurement methods (steps from observation to models) and collecting good data are as important as the theorists (Moss 1998).

Thirdly, it is hard to judge suggestivist models. Given that their connection to the modelling target is vague, there cannot be any decisive test of their success. Good modellers should declare the exact purpose of their model, e.g. that it is analogical or merely exploring the consequences of theory (Edmonds et al. 2019), but then accept the consequences of this choice – namely, that it excludes making conclusions about the observed world. If the purpose is theoretical exploration then the comprehensiveness of the exploration, its scope and the applicability of the model can be judged, but if the model is analogical or illustrative then this is harder. Whilst one model may suggest X, another may suggest the opposite. It is quite easy to fix a model to get the outcomes one wants. Clearly, if a model makes startling suggestions – illustrating totally new ideas or providing a counter-example to widely held assumptions – then this helps science by widening the pool of theories or hypotheses that are considered. However, most suggestivist modelling does not do this.

Fourthly, their sheer flexibility of application causes problems – if one works hard enough one can invent mappings to a wide range of cases; the limits are only those of our imagination. In effect, having a vague mapping from model to what it models adds huge flexibility, in a similar way to having a large number of free (non-empirical) parameters. This flexibility gives an impression of generality, and many desire simple and general models for complex phenomena. However, this generality is illusory because a different mapping is needed to make the model apply to each case. Given the (1)+(2) definition of a model above, this means that it is, in fact, a different model for each case – what a model refers to is part of the model. The same flexibility makes such models impossible to refute, since one can always adjust the mapping to save them. The apparent generality and lack of refutation mean that such models hang around in the literature, due to their surface attractiveness.

Finally, these kinds of model are hugely influential beyond the community of modellers, reaching the wider public including policy actors. Narratives that start in abstract models make their way out and can be very influential (Vranckx 1999). Despite the lack of any rigorous mapping from model to reality, suggestivist models look impressive – they look scientific. For example, very abstract models from the Neo-Classical ‘Chicago School’ of economists supported narratives about the optimal efficiency of markets, leading to a reluctance to regulate them (Krugman 2009). A lack of regulation seems to have been one of the factors behind the 2007/8 economic crash (Baily et al 2008). Modellers may understand that other modellers get over-enthusiastic and over-interpret their models, but others may not. It is the duty of modellers to give an accurate impression of the reliability of any modelling results and not to over-hype them.

How to recognise a suggestivist model

It can be hard to untangle how empirically vague a model is, because many descriptions of modelling work do not focus on making the mapping to what it represents precise. The reasons for this are various, for example: the modeller might be conflating reality and what is in the model in their mind; the researcher is new to modelling and has not really decided what the purpose of their model is; the modeller might be over-keen to establish the importance of their work and so is hyping the motivation and conclusions; they might simply not have got around to thinking enough about the relationship between their model and what it might represent; or they might not have bothered to make the relationship explicit in their description. Whatever the reason, the reader of any description of such work is often left with an archaeological problem: trying to unearth what the relationship might be, based only on indirect clues. The only way to know for certain is to take a case one knows about and try to apply the model to it, but this is a time-consuming process and relies upon having a case with suitable data available. However, there are some indicators, albeit fallible ones, including the following.

  • A relatively simple model is interpreted as explaining a wide range of observed, complex phenomena
  • No data from an observed case study is compared to data from the model (often no data is brought in at all, merely abstract observations) – despite this, conclusions about some observed phenomena are made
  • The purpose of the model is not explicitly declared
  • The language of the paper seems to conflate talking about the model with what is being modelled
  • In the paper there are sudden abstraction ‘jumps’ between the motivation and the description of the model and back again to the interpretation of the results in terms of that motivation. The abstraction jumps involved are large and justified by some a priori theory or modelling precedents rather than evidence.

How to avoid suggestivist modelling

The ways to avoid the dangers of suggestivist modelling should be clear from the above discussion, but I will make them explicit here.

  • Be clear about the model’s purpose – that is, what the model aims to achieve – which indicates how it should be judged by others (Edmonds et al 2019)
  • Do not make any conclusions about the real world if you have not related the model to any data
  • Do not make any policy conclusions – things that might affect other people’s lives – without at least some independent validation of the model outcomes
  • Document how a model relates (or should relate) to data, the nature of that data and maybe even the process whereby that data should be obtained (Achter et al 2019)
  • Be as explicit as possible about what kinds of phenomena the model applies to – the limits of its scope
  • Keep the language about the model and what is being modelled distinct – for any statement it should be clear whether it is talking about the model or what it models (Edmonds 2020)
  • Highlight any bold assumptions in the specification of the model or describe what empirical foundation there is for them – be honest about these

Conclusion

Models can serve many different purposes (Epstein 2008). This is fine as long as the purpose of each model is always made clear, and model results are not interpreted further than their established purpose allows. Research which gives the impression that analogical, illustrative or theoretical modelling can tell us anything reliable about observed complex phenomena is not only sloppy science, but can have a deleterious impact – giving an impression of progress whilst diverting attention from empirically reliable work. Like a bad investment: if it looks too good and too easy to be true, it probably isn’t.

Notes

[1] We often use the word “model” in a lazy way to indicate (1) rather than (1)+(2) in this definition, but a set of entities without any meaning or mapping to anything else is not a model, as it does not represent anything. For example, a random set of equations or program instructions does not make a model.

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, grant number ES/S015159/1.

References

Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. & Siebers, P.-O. (2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/

Arnold, E. (2014). What’s wrong with social simulations?. The Monist, 97(3), 359-377. DOI:10.5840/monist201497323

Baily, M. N., Litan, R. E., & Johnson, M. S. (2008). The origins of the financial crisis. Fixing Finance Series – Paper 3, The Brookings Institution. https://www.brookings.edu/wp-content/uploads/2016/06/11_origins_crisis_baily_litan.pdf

Chattoe-Brown, E. (2018) What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table? Review of Artificial Societies and Social Simulation, 11th June 2018. https://rofasss.org/2018/06/11/ecb/

Edmonds, B. (2001) The Use of Models – making MABS actually work. In. Moss, S. and Davidsson, P. (eds.), Multi Agent Based Simulation, Lecture Notes in Artificial Intelligence, 1979:15-32. http://cfpm.org/cpmrep74.html

Edmonds, B. (2020) Basic Modelling Hygiene – keep descriptions about models and what they model clearly distinct. Review of Artificial Societies and Social Simulation, 22nd May 2020. https://rofasss.org/2020/05/22/modelling-hygiene/

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. & Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Epstein, J. M. (2008). Why model?. Journal of artificial societies and social simulation, 11(4), 12. https://jasss.soc.surrey.ac.uk/11/4/12.html

Hartmann, S. (1997): Modelling and the Aims of Science. In: Weingartner, P. et al (ed.) : The Role of Pragmatics in Contemporary Philosophy: Contributions of the Austrian Ludwig Wittgenstein Society. Vol. 5. Wien und Kirchberg: Digi-Buch. pp. 380-385. https://epub.ub.uni-muenchen.de/25393/

Krugman, P. (2009) How Did Economists Get It So Wrong? New York Times, Sept. 2nd 2009. https://www.nytimes.com/2009/09/06/magazine/06Economic-t.html

Kuhn, T.S. (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Lakoff, G. (1987) Women, fire, and dangerous things. University of Chicago Press, Chicago.

Morgan, M. S., & Morrison, M. (1999). Models as mediators. Cambridge: Cambridge University Press.

Moss, S. (1998) Social Simulation Models and Reality: Three Approaches. Centre for Policy Modelling  Discussion Paper: CPM-98-35, http://cfpm.org/cpmrep35.html

Popper, K. (1957). The poverty of historicism. Routledge.

Vranckx, An. (1999) Science, Fiction & the Appeal of Complexity. In Aerts, Diederik, Serge Gutwirth, Sonja Smets, and Luk Van Langehove, (eds.) Science, Technology, and Social Change: The Orange Book of “Einstein Meets Magritte.” Brussels: Vrije Universiteit Brussel; Dordrecht: Kluwer., pp. 283–301.

Wartofsky, M. W. (1979). The model muddle: Proposals for an immodest realism. In Models (pp. 1-11). Springer, Dordrecht.

Zeigler, B. P. (1976). Theory of Modeling and Simulation. Wiley Interscience, New York.


Edmonds, B. (2022) The Poverty of Suggestivism – the dangers of "suggests that" modelling. Review of Artificial Societies and Social Simulation, 28th Feb 2022. https://rofasss.org/2022/02/28/poverty-suggestivism


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation

By Edmund Chattoe-Brown

As part of a previous research project, I collected a sample of the Opinion Dynamics (hereafter OD) models published in JASSS that were most highly cited in JASSS. The idea here was to understand what styles of OD research were most influential in the journal. In the top 50 on 19.10.21 there were eight such articles. Five were self-contained modelling exercises (Hegselmann and Krause 2002, 58 citations; Deffuant et al. 2002, 35 citations; Salzarulo 2006, 13 citations; Deffuant 2006, 13 citations; and Urbig et al. 2008, 9 citations), two were overviews of OD modelling (Flache et al. 2017, 13 citations and Sobkowicz 2009, 10 citations) and one included an OD example in an article mainly discussing the merits of cellular automata modelling (Hegselmann and Flache 1998, 12 citations). In order to get into the top 50 on that date an article had to achieve at least 7 citations.

In parallel, I have been trying to identify Agent-Based Models that are validated (that undergo direct comparison of real and equivalent simulated data). Based on an earlier bibliography (Chattoe-Brown 2020), which I extended to the end of 2021 for JASSS, plus articles described as validated in the highly cited articles listed above, I managed to construct a small and unsystematic sample of validated OD models. (Part of the problem with a systematic sample is that validated models are not readily searchable as a distinct category and there are too many OD models overall to make reading them all feasible. Also, I suspect, validated models simply remain rare, in line with the larger scale findings of Dutton and Starbuck (1971, p. 130, table 1) and, discouragingly, much more recently, Angus and Hassani-Mahmooei (2015, section 4.5, figure 9).)

Obviously, since part of the sample was selected by total number of citations, one cannot make a comparison on that basis, so instead I have used the best available alternative (given the limitations of the sample) and compared articles on citations per year. The problem here is that attempting validated modelling is relatively new, while older articles inevitably accumulate citations however slowly. But what I was trying to discover was whether new validated models could be cited at a much higher annual rate without reaching the top 50 (or whether, conversely, older articles could have high enough total citations to get into the top 50 without having a particularly impressive annual citation rate). One would hope that, ultimately, validated models would tend to receive more citations than those that were not validated (but see the rather disconcerting related findings of Serra-Garcia and Gneezy 2021). Table 1 shows the results sorted by citations per year.

Article | Status | Number of JASSS Citations[1] | Number of Years[2] | Citations Per Year
Bernardes et al. 2002 | Validated | 1 | 20 | 0.05
Bernardes et al. 2001 | Validated | 2 | 21 | 0.096
Fortunato and Castellano 2007 | Validated | 2 | 15 | 0.13
Caruso and Castorina 2005 | Validated | 4 | 17 | 0.24
Chattoe-Brown 2014 | Validated | 2 | 8 | 0.25
Brousmiche et al. 2016 | Validated | 2 | 6 | 0.33
Hegselmann and Flache 1998 | Non-Validated | 12 | 24 | 0.5
Urbig et al. 2008 | Non-Validated | 9 | 14 | 0.64
Sobkowicz 2009 | Non-Validated | 10 | 13 | 0.77
Deffuant 2006 | Non-Validated | 13 | 16 | 0.81
Salzarulo 2006 | Non-Validated | 13 | 16 | 0.81
Duggins 2017 | Validated | 5 | 5 | 1
Deffuant et al. 2002 | Non-Validated | 35 | 20 | 1.75
Flache et al. 2017 | Non-Validated | 13 | 5 | 2.6
Hegselmann and Krause 2002 | Non-Validated | 58 | 20 | 2.9

Table 1. Annual Citation Rates for OD Articles Highly Cited in JASSS (Systematic Sample) and Validated OD Articles in or Cited in JASSS (Unsystematic Sample)
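
For transparency, the arithmetic behind the final column is easy to reproduce. The following short R snippet (an illustration of mine, not part of the original analysis) recomputes citations per year for a few rows of Table 1, counting years inclusively of both the publication year and 2021, as described in note [2].

  # Citations per year = JASSS citations / number of years, where the number of
  # years includes both the publication year and 2021 (see note [2]).
  articles <- data.frame(
    article   = c("Bernardes et al. 2002", "Duggins 2017", "Hegselmann and Krause 2002"),
    status    = c("Validated", "Validated", "Non-Validated"),
    citations = c(1, 5, 58),
    year      = c(2002, 2017, 2002)
  )
  articles$years          <- 2021 - articles$year + 1
  articles$cites_per_year <- round(articles$citations / articles$years, 2)
  articles[order(articles$cites_per_year), ]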

With the notable (and potentially encouraging) exception of Duggins (2017), the most recent validated OD model I have been able to discover in JASSS, the sample clearly divides into non-validated research with more citations and validated research with fewer. The position of Duggins (2017) might suggest greater recent interest in validated OD models. Unfortunately, however, qualitative analysis of the citations suggests that these are not cited as validated models per se (and thus as a potential improvement over non-validated models) but merely as part of general classes of OD model (like those involving social networks or repulsion – moving away from highly discrepant opinions). This tendency to cite validated models without acknowledging that they are validated (and what the implications of that might be) is widespread in the articles I looked at.

Obviously, there is plenty wrong with this analysis. Even looking at citations per annum we are arguably still partially sampling on the dependent variable (articles selected for being widely cited prove to be widely cited!) and the sample of validated OD models is unsystematic (though in fairness the challenges of producing a systematic sample are significant[3]). But the aim here is to make a distinctive use of RoFASSS as a rapid mode of permanent publication and to think differently about science. If I tried to publish this in a peer reviewed journal, the amount of labour required to satisfy reviewers about the research design would probably be prohibitive (even if it were possible). As a result, the case to answer about this apparent (and perhaps undesirable) pattern in the data might never see the light of day.

But by publishing quickly in RoFASSS without the filter of peer review I actively want my hypothesis to be rejected or replaced by research based on a better design (and such research may be motivated precisely by my presenting this interesting pattern with all its imperfections). When it comes to scientific progress, the chance to be clearly wrong now could be more useful than the opportunity to be vaguely right at some unknown point in the future.

Acknowledgements

This analysis was funded by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1) funded by ESRC via ORA Round 5 (PI: Professor Bruce Edmonds, Centre for Policy Modelling, Manchester Metropolitan University: https://gtr.ukri.org/projects?ref=ES%2FS015159%2F1).

Notes

[1] Note that the validated OD models had their citations counted manually while the high total citation articles had them counted automatically. This may introduce some comparison error but there is no reason to think that either count will be terribly inaccurate.

[2] Including the year of publication and the current year (2021).

[3] Note, however, that there are some checks and balances on sample quality. Highly successful validated OD models would have shown up independently in the top 50. There is thus an upper bound to the impact of the articles I might have missed in manually constructing my “version 1” bibliography. The unsystematic review of 47 articles by Sobkowicz (2009) also checks independently on the absence of validated OD models in JASSS to that date and confirms the rarity of such articles generally. Only four of the articles that he surveys are significantly empirical.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16, <http://jasss.soc.surrey.ac.uk/18/4/16.html>. doi:10.18564/jasss.2952

Bernardes, A. T., Costa, U. M. S., Araujo, A. D. and Stauffer, D. (2001) ‘Damage Spreading, Coarsening Dynamics and Distribution of Political Votes in Sznajd Model on Square Lattice’, International Journal of Modern Physics C: Computational Physics and Physical Computation, 12(2), February, pp. 159-168. doi:10.1142/S0129183101001584

Bernardes, A. T., Stauffer, D. and Kertész, J. (2002) ‘Election Results and the Sznajd Model on Barabasi Network’, The European Physical Journal B: Condensed Matter and Complex Systems, 25(1), January, pp. 123-127. doi:10.1140/e10051-002-0013-y

Brousmiche, Kei-Leo, Kant, Jean-Daniel, Sabouret, Nicolas and Prenot-Guinard, François (2016) ‘From Beliefs to Attitudes: Polias, A Model of Attitude Dynamics Based on Cognitive Modelling and Field Data’, Journal of Artificial Societies and Social Simulation, 19(4), October, article 2, <https://www.jasss.org/19/4/2.html>. doi:10.18564/jasss.3161

Caruso, Filippo and Castorina, Paolo (2005) ‘Opinion Dynamics and Decision of Vote in Bipolar Political Systems’, arXiv > Physics > Physics and Society, 26 March, version 2. doi:10.1142/S0129183105008059

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1), February, article 16, <https://www.socresonline.org.uk/19/1/16.html>. doi:10.5153/sro.3315

Chattoe-Brown Edmund (2020) ‘A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation: Version 1’, CPM Report CPM-20-216, 12 June, <http://cfpm.org/discussionpapers/256>

Deffuant, Guillaume (2006) ‘Comparing Extremism Propagation Patterns in Continuous Opinion Models’, Journal of Artificial Societies and Social Simulation, 9(3), June, article 8, <https://www.jasss.org/9/3/8.html>.

Deffuant, Guillaume, Amblard, Frédéric, Weisbuch, Gérard and Faure, Thierry (2002) ‘How Can Extremism Prevail? A Study Based on the Relative Agreement Interaction Model’, Journal of Artificial Societies and Social Simulation, 5(4), October, article 1, <https://www.jasss.org/5/4/1.html>.

Duggins, Peter (2017) ‘A Psychologically-Motivated Model of Opinion Change with Applications to American Politics’, Journal of Artificial Societies and Social Simulation, 20(1), January, article 13, <http://jasss.soc.surrey.ac.uk/20/1/13.html>. doi:10.18564/jasss.3316

Dutton, John M. and Starbuck, William H. (1971) ‘Computer Simulation Models of Human Behavior: A History of an Intellectual Technology’, IEEE Transactions on Systems, Man, and Cybernetics, SMC-1(2), April, pp. 128-171. doi:10.1109/TSMC.1971.4308269

Flache, Andreas, Mäs, Michael, Feliciani, Thomas, Chattoe-Brown, Edmund, Deffuant, Guillaume, Huet, Sylvie and Lorenz, Jan (2017) ‘Models of Social Influence: Towards the Next Frontiers’, Journal of Artificial Societies and Social Simulation, 20(4), October, article 2, <http://jasss.soc.surrey.ac.uk/20/4/2.html>. doi:10.18564/jasss.3521

Fortunato, Santo and Castellano, Claudio (2007) ‘Scaling and Universality in Proportional Elections’, Physical Review Letters, 99(13), 28 September, article 138701. doi:10.1103/PhysRevLett.99.138701

Hegselmann, Rainer and Flache, Andreas (1998) ‘Understanding Complex Social Dynamics: A Plea For Cellular Automata Based Modelling’, Journal of Artificial Societies and Social Simulation, 1(3), June, article 1, <https://www.jasss.org/1/3/1.html>.

Hegselmann, Rainer and Krause, Ulrich (2002) ‘Opinion Dynamics and Bounded Confidence Models, Analysis, and Simulation’, Journal of Artificial Societies and Social Simulation, 5(3), June, article 2, <http://jasss.soc.surrey.ac.uk/5/3/2.html>.

Salzarulo, Laurent (2006) ‘A Continuous Opinion Dynamics Model Based on the Principle of Meta-Contrast’, Journal of Artificial Societies and Social Simulation, 9(1), January, article 13, <http://jasss.soc.surrey.ac.uk/9/1/13.html>.

Serra-Garcia, Marta and Gneezy, Uri (2021) ‘Nonreplicable Publications are Cited More Than Replicable Ones’, Science Advances, 7, 21 May, article eabd1705. doi:10.1126/sciadv.abd1705

Sobkowicz, Pawel (2009) ‘Modelling Opinion Formation with Physics Tools: Call for Closer Link with Reality’, Journal of Artificial Societies and Social Simulation, 12(1), January, article 11, <http://jasss.soc.surrey.ac.uk/12/1/11.html>.

Urbig, Diemo, Lorenz, Jan and Herzberg, Heiko (2008) ‘Opinion Dynamics: The Effect of the Number of Peers Met at Once’, Journal of Artificial Societies and Social Simulation, 11(2), March, article 4, <http://jasss.soc.surrey.ac.uk/11/2/4.html>.


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Today We Have Naming Of Parts: A Possible Way Out Of Some Terminological Problems With ABM

By Edmund Chattoe-Brown


Today we have naming of parts. Yesterday,
We had daily cleaning. And tomorrow morning,
We shall have what to do after firing. But to-day,
Today we have naming of parts. Japonica
Glistens like coral in all of the neighbouring gardens,
And today we have naming of parts.
(Naming of Parts, Henry Reed, 1942)

It is not difficult to establish by casual reading that there are almost as many ways of using crucial terms like calibration and validation in ABM as there are actual instances of their use. This creates several damaging problems for scientific progress in the field. Firstly, when two different researchers both say they “validated” their ABMs they may mean different specific scientific activities. This makes it hard for readers to evaluate research generally, particularly if researchers assume that it is obvious what their terms mean (rather than explaining explicitly what they did in their analysis). Secondly, based on this, each researcher may feel that the other has not really validated their ABM but has instead done something to which a different name should more properly be given. This compounds the possible confusion in debate. Thirdly, there is a danger that researchers may rhetorically favour (perhaps unconsciously) uses that, for example, make their research sound more robustly empirical than it actually is. For example, validation is sometimes used to mean consistency with stylised facts (rather than, say, correspondence with a specific time series according to some formal measure). But we often have no way of telling what the status of the presented stylised facts is. Are they an effective summary of what is known in a field? Are they the facts on which most researchers agree or for which the available data presents the clearest picture? (Less reputably, can readers be confident that they were not selected for presentation because of their correspondence?) Fourthly, because these terms are used differently by different researchers it is possible that valuable scientific activities that “should” have agreed labels will “slip down the terminological cracks” (either for the individual or for the ABM community generally). Apart from clear labels avoiding confusion for others, they may help to avoid confusion for you too!

But apart from these problems (and there may be others but these are not the main thrust of my argument here) there is also a potential impasse. There simply doesn’t seem to be any value in arguing about what the “correct” meaning of validation (for example) should be. Because these are merely labels there is no objective way to resolve this issue. Further, even if we undertook to agree the terminology collectively, each individual would tend to argue for their own interpretation without solid grounds (because there are none to be had) and any collective decision would probably therefore be unenforceable. If we decide to invent arbitrary new terminology from scratch we not only run the risk of adding to the existing confusion of terms (rather than reducing it) but it is also quite likely that everyone will find the new terms unhelpful.

Unfortunately, however, we probably cannot do without labels for these scientific activities involved in quality controlling ABMs. If we had to describe everything we did without any technical shorthand, presenting research might well become impossibly unwieldy.

My proposed solution is therefore to invent terms from scratch (so we don’t end up arguing about our different customary usages to no purpose) but to do so on the basis of actual scientific practices reported in published research. For example, we might call the comparison of corresponding real and simulated data (which at least has the endorsement of the much-used Gilbert and Troitzsch (2005, pp. 15-19) to be referred to as validation) CORAS – Comparison Of Real And Simulated. Similarly, assigning values to parameters given the assumptions of model “structures” might be called PANV – Parameters Assigned Numerical Values.
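
To illustrate the sort of activity CORAS is meant to label (this sketch is my own, in R, and not a proposed standard), one might compare a real and a simulated time series using an explicit, formal discrepancy measure rather than an informal judgement of resemblance:

  # CORAS sketch: Comparison Of Real And Simulated data using formal measures.
  # The two vectors are assumed to be aligned (e.g. yearly observations and the
  # corresponding model outputs); both are hypothetical numbers.
  real_series      <- c(12, 15, 19, 26, 31, 35)
  simulated_series <- c(11, 16, 21, 24, 33, 36)

  rmse <- sqrt(mean((real_series - simulated_series)^2))
  mae  <- mean(abs(real_series - simulated_series))
  cat("RMSE:", round(rmse, 2), "MAE:", round(mae, 2), "\n")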

It is very important to be clear what the intention is here. Naming cannot solve scientific problems or disagreements. (Indeed, failure to grasp this may well be why our terminology is currently so muddled as people try to get their different positions through “on the nod”.) For example, if we do not believe that correspondence with stylised facts and comparison measures on time series have equivalent scientific status then we will have to agree distinct labels for them and have the debate about their respective value separately. Perhaps the former could be called COSF – Comparison Of Stylised Facts. But it seems plainly easier to describe specific scientific activities accurately and then find labels for them than to have to wade through the existing marsh of ambiguous terminology and try to extract the associated science. An example of a practice which does not seem to have even one generally agreed label (and therefore seems to be neglected in ABM as a practice) is JAMS – Justifying A Model Structure. (Why are your agents adaptive rather than habitual or rational? Why do they mix randomly rather than in social networks?)

Obviously, there still needs to be community agreement for such a convention to be useful (and this may need to be backed institutionally, for example by reviewing requirements). But the logic of the approach avoids several existing problems. Firstly, while the labels are useful shorthand, they are not arbitrary. Each can be traced back to a clearly definable scientific practice. Secondly, this approach steers a course between the Scylla of fruitless arguments from current muddled usage and the Charybdis of a novel set of terminology that is equally unhelpful to everybody. (Even if people cannot agree on labels, they know how they built and evaluated their ABMs so they can choose – or create – new labels accordingly.) Thirdly, the proposed logic is extendable. As we clarify our thinking, we can use it to label (or improve the labels of) any current set of scientific practices. We do not have to worry that we will run out of plausible words in everyday usage.

Below I suggest some more scientific practices and possible terms for them. (You will see that I have also tried to make the terms as pronounceable and distinct as possible.)

Practice | Term
Checking the results of an ABM by building another.[1] | CAMWA (Checking A Model With Another).
Checking ABM code behaves as intended (for example by debugging procedures, destructive testing using extreme values and so on). | TAMAD (Testing A Model Against Description).
Justifying the structure of the environment in which agents act. | JEM (Justifying the Environment of a Model): This is again a process that may pass unnoticed in ABM typically. For example, by assuming that agents only consider ethnic composition, the Schelling Model (Schelling 1969, 1971) does not “allow” locations to be desirable because, for example, they are near good schools. This contradicts what was known empirically well before (see, for example, Rossi 1955) and it isn’t clear whether simply saying that your interest is in an “abstract” model can justify this level of empirical neglect.
Finding out what effect parameter values have on ABM behaviour. | EVOPE (Exploring Value Of Parameter Effects).
Exploring the sensitivity of an ABM to structural assumptions not justified empirically (see Chattoe-Brown 2021). | ESOSA (Exploring the Sensitivity Of Structural Assumptions).

Clearly this list is incomplete but I think it would be more effective if characterising the scientific practices in existing ABM and naming them distinctively was a collective enterprise.

Acknowledgements

This research is funded by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1) funded by ESRC via ORA Round 5 (PI: Professor Bruce Edmonds, Centre for Policy Modelling, Manchester Metropolitan University: https://gtr.ukri.org/projects?ref=ES%2FS015159%2F1).

Notes

[1] It is likely that we will have to invent terms for subcategories of practices which differ in their aims or warranted conclusions. For example, rerunning the code of the original author (CAMWOC – Checking A Model With Original Code), building a new ABM from a formal description like ODD (CAMUS – Checking A Model Using Specification) and building a new ABM from the published description (CAMAP – Checking A Model As Published, see Chattoe-Brown et al. 2021).

References

Chattoe-Brown, Edmund (2021) ‘Why Questions Like “Do Networks Matter?” Matter to Methodology: How Agent-Based Modelling Makes It Possible to Answer Them’, International Journal of Social Research Methodology, 24(4), pp. 429-442. doi:10.1080/13645579.2020.1801602

Chattoe-Brown, Edmund, Gilbert, Nigel, Robertson, Duncan A. and Watts Christopher (2021) ‘Reproduction as a Means of Evaluating Policy Models: A Case Study of a COVID-19 Simulation’, medRXiv, 23 February. doi:10.1101/2021.01.29.21250743

Gilbert, Nigel and Troitzsch, Klaus G. (2005) Simulation for the Social Scientist, second edition (Maidenhead: Open University Press).

Rossi, Peter H. (1955) Why Families Move: A Study in the Social Psychology of Urban Residential Mobility (Glencoe, IL, Free Press).

Schelling, Thomas C. (1969) ‘Models of Segregation’, American Economic Review, 59(2), May, pp. 488-493. (available at https://www.jstor.org/stable/1823701)


Chattoe-Brown, E. (2022) Today We Have Naming Of Parts: A Possible Way Out Of Some Terminological Problems With ABM. Review of Artificial Societies and Social Simulation, 11th January 2022. https://rofasss.org/2022/01/11/naming-of-parts/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Challenges and opportunities in expanding ABM to other fields: the example of psychology

By Dino Carpentras

Centre for Social Issues Research, Department of Psychology, University of Limerick

The loop of isolation

One of the problems discussed during the last public meeting of the European Social Simulation Association (ESSA) at the Social Simulation Conference 2021 was the problem of reaching different communities outside the ABM one. This is a serious problem, as we risk getting trapped in a vicious cycle of isolation.

The cycle can be explained as follows. (a) Many fields are not familiar with ABM methods and standards. This means that (b) both reviewers and editors will struggle to understand and evaluate the quality of an ABM paper. In general, this translates into a higher rejection rate and a much longer time before publication. As a result, (c) fewer researchers in ABM will be willing to send their work to other communities and, in general, fewer ABM works will be published in the journals of other communities. Fewer articles using ABM means that (d) fewer people will be aware of ABM, understand its methods and standards, or even consider it an established research method.

Another point to consider is that, as time passes, each field evolves and develops new standards and procedures. Unfortunately, if two fields are not sufficiently aware of each other, the new procedures will appear even more alien to members of the other community, reinforcing the previously discussed cycle. A schematic of this is offered in Figure 1.


Figure 1: Vicious cycle of isolation

The challenge

Of course, a “brute force” solution would be to keep sending articles to journals in different fields until they get published. However, this would be extremely expensive in terms of time, and most researchers would probably not be happy to follow this path.

A more elaborate solution could be framed as “progressively getting to know each other.” This would consist of modellers becoming more familiar with the target community and vice versa. In this way, people from ABM would be able to better understand the jargon, the assumptions and even what is interesting enough to be the main result of a paper in a specific discipline. This would make it easier for members of our community to communicate research results using the language and methods familiar to the other field.

At the same time, researchers in the other field could slowly integrate ABM into their work, showing the potential of ABM and making it appear less alien to their peers. All of this would reverse the previously discussed vicious cycle, producing a virtuous one which would bring the two fields closer and closer.

Unfortunately, such a goal cannot be achieved overnight, as it will probably require several events, collaborations, publications and perhaps several years (or even decades!). However, as a result, our field would be familiar to and recognized by multiple other fields, enormously increasing the scientific impact of our research as well as the number of people working in ABM.

In this short communication, I would like, firstly, to highlight the importance and the challenges of reaching out to other fields and, secondly, to show a practical example with the field of psychology. I have chosen this field for no particular reason besides the fact that I am currently working in a department of psychology. This has given me the opportunity of interacting with several researchers in this field.

In the next sections, I will summarize the main points of several informal discussions with these researchers. Specifically, I will try to highlight what they reported to be promising or interesting in ABM and also what felt alien or problematic to them.

Let me also stress that this is not intended to be a complete overview, nor should it be thought of as a summary of “what every psychologist thinks about ABM.” Instead, this is simply a summary of the discussions I have had so far. What I hope is that this will be at least a little useful to our community for building better connections with other fields.

The elephant in the room

Before moving to the list of comments on ABM I have collected, I want to address one point which appeared almost every time I discussed ABM with psychologists. In fact, it appears almost every time I discuss ABM with people outside our field. This is the problem of experiments and validation.

I know there was recently a massive discussion on the SimSoc mailing list on opinion dynamics and validation, and this discussion will probably continue. Therefore, I am not going to discuss whether all models should be tested, whether a validated model should be considered superior, etc. Indeed, I do not want to discuss at all whether validation should be considered important within our community. Instead, I want to discuss how important it is when interacting with other communities.

Indeed, many other fields give empirical data and validation a key role, having even developed different methods to test the quality of a hypothesis or a model when comparing it to empirical data (e.g. calculation of p-values, Krishnaiah 1980). Also, I have repeatedly experienced disappointment or even mockery when I explained to non-ABM people that the model I was describing to them was not empirically validated (e.g. the Deffuant model of opinion dynamics). In one case, I even had a person laugh at me for this.

Unfortunately, many people who are not familiar with ABM end up considering it almost a “nice exercise,” or even “not a real science.” This could be extremely dangerous for our field. Indeed, if many researchers start thinking of ABM as a lesser science, communication with other fields – as well as obtaining funding for research – will get much harder for our community.

Also, please let me stress again: do not “confuse the message with the messenger.” Here, I am not claiming that an unvalidated model should be considered inferior, or anything like that. What I am saying is that many people outside our field think in this way, and that this may eventually turn into a much bigger problem for us.

I will further discuss this point in the conclusion section; however, I will not claim that we should get rid of “pure models,” or that every model should be validated. What I will claim is that we should promote more empirical work, as it will allow us to interact more easily with other fields.

Further points

In this section, I have collected (in no particular order) different comments and suggestions I have received from psychologists on the topic of ABM. All of them had at least some experience of working side by side with a researcher developing ABMs.

Also in this case, please remember that these are not my claims, but feedback I received. Furthermore, they should not be read as “what ABM is,” but rather as “how ABM may look to people in another field.”

  1. Some psychologists showed interest in the possibility of having loops in ABMs, which allow for relationships that go beyond simple cause and effect. Indeed, several models in psychology are structured in the form “parameter X influences parameter Y” (and Y cannot influence X in turn, so no feedback loop can form). While this approach is very common in psychology, many researchers are not satisfied with it, making ABMs a very good opportunity for the development of more realistic models.
  2. Some psychologists said that at first sight ABM looks very interesting. However, the extensive use of equations can confuse or even scare people who are not used to them.
  3. Some praised Schelling’s model (Schelling 1971). Especially the approach of developing a hypothesis and then using an ABM to falsify it.
  4. Some criticized that it is often not clear what an ABM should be used for or what such a model “is telling us.”
  5. Similarly, the use of models with a large number of parameters was criticized on the grounds that “[these models] can eventually produce any result.”
  6. Another confusion that appeared multiple times was that it is often not clear whether the model should be analysed and interpreted at the individual level (e.g. agents which start from state A often end up in state B) or at the more global level (e.g. distribution A results in distribution B).
  7. Another major complaint was that psychological measures are nominal or ordinal, while many models assume interval-like variables.
  8. Another criticism was that agents often all behave in the same way, without individual differences.
  9. In psychology there is a lot of attention on sample size and whether it is big enough to produce significant results. Some stressed that in many ABM works it is often not clear whether the sample size (i.e. the number of agents) is sufficient to support the analysis (see the sketch after this list for one simple way such a check could be reported).
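
As a concrete illustration of the kind of check point 9 asks for (a toy R sketch of my own, not a recipe taken from these discussions), one could run the same model at several population sizes and report how much the outcome measure still varies across replications:

  # Hypothetical check of whether the number of agents is 'big enough':
  # run a toy model at several population sizes and examine how stable
  # the outcome measure is across replications.
  run_toy_model <- function(n_agents) {
    opinions <- runif(n_agents, -1, 1)  # placeholder for a real ABM run
    mean(opinions)                      # placeholder outcome measure
  }

  for (n in c(100, 1000, 10000)) {
    outcomes <- replicate(50, run_toy_model(n))
    cat("agents:", n, "sd of outcome across 50 runs:", round(sd(outcomes), 4), "\n")
  }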

Conclusion

I would like to stress again that these comments are not supposed to represent the thoughts of every psychologist, nor am I suggesting that all the ABM literature should adapt to them or that they are always correct. For example, in my personal opinion, points 5 and 8 push in opposite directions: one aiming at simpler models and the other pushing towards complexity. Similarly, I do not think we should decrease the number of equations in our work to meet point 2. However, I think we should consider this feedback when planning interactions with the psychology community.

As mentioned before, a crucial role when interacting with other communities is played by experiments and validation. Point 6, and especially points 7 and 9, suggest how members of this community often look for 1-to-1 relationships between the agents of simulations and people in the real world.


Figure 2: (Left) Empirical ABM acting as a bridge between theoretical ABM and other research fields. (Right) As the relationship between ABM and the other field matures, people become familiar with ABM standards and a direct link to theoretical ABM can be established.

As suggested by someone during the already mentioned discussion on the SimSoc mailing list, this could be addressed by introducing a new figure (or, equivalently, a new research field) dedicated to empirical work in ABM. Following this solution, theoretical modellers could keep developing models without having to worry about validation. This would be similar to the work carried out by theoretical researchers in physics. At the same time, we would also have a stream of research dedicated to “experimental ABM.” People working on this topic would further explore the connection between models and the empirical world through experiments and validation processes. Of course, the two should not be mutually exclusive, as a researcher (or a piece of research) may still fall into both categories. However, having this distinction may help in giving more space to empirical work.

I believe that the role of experimental ABM could be crucial for developing good interactions between ABM and other communities. Indeed, this type of research could be accepted much more easily by other communities, producing better interactions with ABM. In particular, mentioning experiments and validation could strongly decrease the initial mistrust that many people show when discussing ABM. Furthermore, as ABM develops stronger connections with another field, and our methods and standards become more familiar, we would probably also observe more people from the other community starting to look into more theoretical ABM approaches and what-if scenarios (see Figure 2).

References

Krishnaiah, P. R. (Ed.) (1980). A Handbook of Statistics (Vol. 1). Motilal Banarsidass Publishers.

Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143-186.

Edmonds, B. and Moss, S. (2005) From KISS to KIDS – an ‘anti-simplistic’ modelling approach. In P. Davidsson et al. (Eds.): Multi Agent Based Simulation 2004. Springer, Lecture Notes in Artificial Intelligence, 3415:130–144.


Carpentras, D. (2021) Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Benefits of Open Research in Social Simulation: An Early-Career Researcher’s Perspective

By Hyesop Shin

Research Associate at the School of Geographical and Earth Sciences, University of Glasgow, UK

In March 2017, in the first year of my PhD, I attended a talk at the Microsoft Research Lab in Cambridge, UK. It was about the importance of reproducibility and replicability in science. Inspired by the talk, I moved my research beyond my word processor and hard disk to open repositories and social media. In my experience, there have been some challenges in learning from other people’s work and replicating it in my own project, but I found it more beneficial to share my problems and solutions for other people who may encounter the same issues.

Having spoken to many early career researchers (ECRs) about the need for open science, specifically whether sharing code is essential, the consensus was that it was not an essential component of their degree. A few answered that they were too embarrassed to share their code online because it was not well written enough. I somewhat empathised with their opinions, but at the same time would insist that open research brings more benefit than embarrassment.

I wrote this short piece to openly discuss the benefits of conducting open research and to suggest some points that ECRs should keep in mind. Throughout, some screenshots are taken from my PhD work (Shin, 2021). I conclude by inviting personal experiences or other thoughts that might give more insight to the audience.

Benefits of Aiming for an Open Project

I argue here that being transparent and honest about your model development strengthens the credibility of the research. To this end, my thesis shared the original data, the scripts with annotations that are downloadable and executable, and wiki pages summarising the outcomes and interpretations (see Figure 1 for examples). This enables scholars and technicians to visit the repository if they are interested in the source code or outcomes. People can also comment if they identify errors or bugs, report that the model does not execute on their machine, or suggest alternative ways to tackle the same problem. Even during development, many developers share their work via online repositories (e.g. Github, Gitlab) and social media to ask for advice. Agent-based models are mostly uploaded to CoMSeS.net (previously named OpenABM). All of this can improve the quality of research.


Figure 1 A screenshot of a Github page showing how open platforms can help other people to understand the outcomes step by step

More practically, one can learn new ideas by helping each other. If there is a technical issue that cannot be solved, the problem should not be kept hidden, but rather opened up and solved together with experts online and offline. Figure 2 is a pragmatic example of posing questions to a wide range of developers on Stackoverflow – an online community of programmers who share and build code. Providing my NetLogo code, I asked how to send a group of agents from one location to another. An anonymous person, whose ID was JenB, kindly responded with a new set of code, which helped me structure my code more effectively.


Figure 2 Raising a question about sending agents from one location to another in NetLogo

Another example concerns the errors I encountered whilst running NetLogo with the R package “nlrx” (Salecker et al., 2019). Here, R was used as a wrapper to submit iterative NetLogo jobs to an HPC (High Performance Computing) cluster to improve execution speed. However, much to my surprise, I received error messages due to the early termination of failed HPC jobs. Not knowing what to do, I posed a question to the developer of the package (see Figure 3) and luckily got a response explaining that the R ecosystem stores all assigned objects in RAM and that, even with gigabytes of RAM, it struggles to write 96,822 patches over 8,764 ticks to a spreadsheet.

A Stackoverflow answer also kindly informed me that NetLogo has a default memory ceiling of 1GB[i] and keeps each run in memory before it shuts down. Thus, if the model is huge and requires several iterations, it is likely that execution speed will decrease after a few iterations. Before I found this information, I could not understand why the model took 1 hour 20 minutes to finish the first run but struggled to maintain that speed by the twentieth run. Hence, sharing the technical obstacles that occur in the middle of research can save a lot of time for those who are contemplating similar research.


Figure 3 Comments posted on an online repository regarding the memory issue that NetLogo and R encountered
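
One pragmatic workaround for this kind of memory problem (a sketch of my own, not the solution given by the package author) is to avoid accumulating every run in a single R session: run the parameter settings one at a time, write each run’s results straight to disk, and free the memory before the next run.

  # Sketch: write each run's results to disk instead of holding all runs in RAM.
  # run_one_setting() is a hypothetical wrapper around a single NetLogo run
  # (for example built with nlrx); replace it with your own run function.
  param_settings <- expand.grid(infection_rate = c(0.1, 0.2, 0.3),
                                recovery_time  = c(5, 10))

  for (i in seq_len(nrow(param_settings))) {
    result <- run_one_setting(param_settings[i, ])  # hypothetical helper
    write.csv(result, file = sprintf("results_run_%03d.csv", i), row.names = FALSE)
    rm(result)  # drop the object and reclaim memory before the next run
    gc()
  }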

The Future for Open Research

For future quantitative studies in social simulation, this paper suggests that students and researchers in their early careers should acclimatise themselves to using open-source platforms to conduct sustainable research. Just as clarity, conciseness and coherence are the important C’s of good writing, good programming should take the following points into consideration.

First is clarity and conciseness (C&C). Here, clarity means that the scripts should be neatly documented. The computer does not know whether the code is dirty or neat – it only cares whether it is syntactically correct – but it matters when other people attempt to understand the task. If the outcome produces the same results, it is always better to write clearer and simpler code for other people and for future upgrades. Thus, researchers should refer to other people’s work and learn how to code effectively. Another way to maintain clarity in coding is to use descriptive and distinctive names for new variables. This might seem to contradict the conciseness point, but it is important: one of the common mistakes users make is to assign variables abstract names such as LP1, LP2…LP10, which seem clear and concise to the model builder but are much harder for others reviewing the code. The quote often attributed to Einstein, “Everything should be made as simple as possible, but not simpler,” is an appropriate phrase for model builders to keep in mind. Hence, instead of coding LP9, names such as LandPriceIncreaseRate2009 (camel case) or landprice_incrate_2009 (snake case) can be more effective for reviewers trying to understand the model.

Second is reproducibility and replicability (R&R). To be reproducible and replicable, no errors should occur when others execute the script, and any known errors or bugs should be reported. It is also useful to document the libraries and dependencies required. This is quite important, as different OSs (operating systems) behave differently when installing packages. For instance, the sf package in R installs slightly differently across OSs: on Windows and MacOSX it can be installed from a binary package, while on Linux GDAL (to read and write vector and raster data), PROJ (which deals with projections) and GEOS (which provides geospatial functions) need to be installed separately before the package installation. Finally, it is very helpful if unit testing is included with the model. While R and Python provide splendid examples in their vignettes, NetLogo offers its library of models but goes no further than that. Offering unit testing examples can give a better understanding when the whole model is too complicated for others to comprehend. It can also give the impression that the modeller has full control of the model, because without unit tests the verification process becomes error-prone. The good news is that NetLogo has recently released the Beginner’s Interactive Dictionary, with friendly explanations, videos and code examples[ii].
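
As a small illustration of what unit testing around part of a model might look like, here is a hedged sketch using the testthat package in R; the helper function is a made-up toy, not code from the thesis.

  # Sketch of a unit test with the 'testthat' package.
  # update_land_price() is a toy helper invented for this example.
  library(testthat)

  update_land_price <- function(price, increase_rate) {
    stopifnot(price >= 0, increase_rate >= -1)
    price * (1 + increase_rate)
  }

  test_that("land price update behaves as intended", {
    expect_equal(update_land_price(100, 0.1), 110)
    expect_equal(update_land_price(100, 0), 100)
    expect_error(update_land_price(-5, 0.1))  # negative prices are rejected
  })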

Third is to maintain version control. In terms of sustainability, researchers should be aware of software maintenance. Much software relies on libraries and packages that are built against a particular version of the underlying platform. If the platform is upgraded and no longer accepts code written for previous versions, then the package developers need to keep updating their packages to run on the new version. For example, NetLogo 6.0 introduced a significant change compared with versions 5.x: the biggest change was the replacement of tasks[iii] by anonymous procedures (Wilensky, 1999). Tasks are no longer primitives, and the old ? placeholder syntax was converted to arrow syntax. For instance, to show each element of a list in turn, NetLogo 5.x used the task syntax foreach [1 2 3] [ show ? ], whereas NetLogo 6 does the same job with an anonymous procedure: foreach [1 2 3] [ x -> show x ]. Models that have not been converted to the new syntax can still be opened as read-only but cannot be executed. Geospatial packages in R, such as rgdal and sf, have also struggled whenever a major update was made to the packages themselves or to the R version, because of their many dependencies. Even ArcGIS, a GUI (graphical user interface) package, had issues when it was upgraded from version 9.3 to 10: projects scripted in VBA under 9.3 broke because the new, Python-based version no longer recognised them. This is another example of why backward compatibility and deprecation mechanisms are important.
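To make the version-control point concrete, one way (among several) of pinning dependencies in R is the renv package, sketched below; renv is an assumption of this example and is not mentioned above:

  # renv creates a project-local library and a lockfile (renv.lock)
  # recording the exact package versions the analysis was run with
  install.packages("renv")    # once per machine
  renv::init()                # initialise the project library and lockfile
  install.packages("sf")      # install packages as usual inside the project
  renv::snapshot()            # record the versions actually used
  # a collaborator (or a future you) can later run renv::restore()
  # to reinstall exactly those versions

Pinning versions in this way does not remove the need to document system-level dependencies such as GDAL, PROJ and GEOS, but it does protect the analysis from silent breakage when a package or R itself is upgraded.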

Lastly, for more advanced users, it is also recommended to use a collaborative platform that re-executes the code and reproduces every result with the exact versions used. One such platform is Code Ocean, which the Nature research team has recently adopted for peer-reviewing code (Perkel, 2019). The Nature editors and peer reviewers argue that coding has become the norm across many disciplines, and have therefore asserted that the quality of data, conciseness, reproducibility, and documentation of the model should be treated as requirements. Although the training can be difficult at first, it will lead researchers to conduct their work with more responsibility.

Looking for Opinions

With the advent of big data and data science, where people collaborate online and a ‘sharing is caring’ ethos has become the norm (Arribas-Bel et al., 2021; Lovelace, 2021), I insist that open research should no longer be optional. However, one may counter that, although open research is an excellent model that can benefit many of today’s projects, it carries certain risks that might concern ECRs, such as intellectual property issues, code quality and technical security. So, if you have a different opinion on this issue, or simply want to add your own experiences from a PhD in social simulation, please add your thoughts via a thread.

Notes

[i] http://ccl.northwestern.edu/netlogo/docs/faq.html#how-big-can-my-model-be-how-many-turtles-patches-procedures-buttons-and-so-on-can-my-model-contain

[ii] https://ccl.northwestern.edu/netlogo/bind/

[iii] Tasks could be, for example, expressions such as x + y or operations over a list such as [1 2 3 4 5]

References

Arribas-Bel, D., Alvanides, S., Batty, M., Crooks, A., See, L., & Wolf, L. (2021). Urban data/code: A new EP-B section. Environment and Planning B: Urban Analytics and City Science, 23998083211059670. https://doi.org/10.1177/23998083211059670

Lovelace, R. (2021). Open source tools for geographic analysis in transport planning. Journal of Geographical Systems, 23(4), 547–578. https://doi.org/10.1007/s10109-020-00342-2

Perkel, J. M. (2019). Make code accessible with these cloud services. Nature, 575(7781), 247. https://doi.org/10.1038/d41586-019-03366-x

Salecker, J., Sciaini, M., Meyer, K. M., & Wiegand, K. (2019). The nlrx r package: A next-generation framework for reproducible NetLogo model analyses. Methods in Ecology and Evolution, 10(11), 1854–1863. https://doi.org/10.1111/2041-210X.13286

Shin, H. (2021). Assessing Health Vulnerability to Air Pollution in Seoul Using an Agent-Based Simulation. University of Cambridge. https://doi.org/10.17863/CAM.65615

Wilensky, U. (1999). NetLogo. Northwestern University: Evanston, IL, USA. https://ccl.northwestern.edu/netlogo/


Shin, H. (2021) Benefits of Open Research in Social Simulation: An Early-Career Researcher’s Perspective. Review of Artificial Societies and Social Simulation, 24th Nov 2021. https://rofasss.org/2021/11/23/benefits-open-research/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Reply to Frank Dignum

By Edmund Chattoe-Brown

This is a reply to Frank Dignum’s reply (about Edmund Chattoe-Brown’s review of Frank’s book)

As my academic career continues, I have become more and more interested in the way that people justify their modelling choices. For example, almost every Agent-Based Modeller makes approving noises about validation (in the sense of comparing real and simulated data) but only a handful actually try to do it (Chattoe-Brown 2020). Thus I think two specific statements that Frank makes in his response should be considered carefully:

  1. “… we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises.” Firstly, negative claims (“This is not a banana”) are not generally helpful in argument. Secondly, readers want to know (or should want to know) what is being claimed and, importantly, how they would decide if it is true “objectively”. Given how many models sprang up under COVID it is clear that what is described here cannot be the only way to do it but the question is how do we know you did it “better”? This was also my point about institutionalisation. For me, the big lesson from COVID was how much the automatic response of the ABM community seems to be to go in all directions and build yet more models in a tearing hurry rather than synthesise them, challenge them or test them empirically. I foresee a problem both with this response and our possible unwillingness to be self-aware about it. Governments will not want a million “interesting” models to choose from but one where they have externally checkable reasons to trust it and that involves us changing our mindset (to be more like climate modellers for example, Bithell & Edmonds 2020). For example, colleagues and I developed a comparison methodology that allowed for the practical difficulties of direct replication (Chattoe-Brown et al. 2021).
  2. The second quotation which amplifies this point is: “But we do think it is an extensive foundation from which others can start, either picking up some bits and pieces, deviating from it in specific ways or extending it in specific ways.” Again, here one has to ask the right question for progress in modelling. On what scientific grounds should people do this? On what grounds should someone reuse this model rather than start their own? Why isn’t the Dignum et al. model built on another “market leader” to set a good example? (My point about programming languages was purely practical not scientific. Frank is right that the model is no less valid because the programming language was changed but a version that is now unsupported seems less useful as a basis for the kind of further development advocated here.)

I am not totally sure I have understood Frank’s point about data so I don’t want to press it but my concern was that, generally, the book did not seem to “tap into” relevant empirical research (and this is a wider problem that models mostly talk about other models). It is true that parameter values can be adjusted arbitrarily in sensitivity analysis but that does not get us any closer to empirically justified parameter values (which would then allow us to attempt validation by the “generative methodology”). Surely it is better to build a model that says something about the data that exists (however imperfect or approximate) than to rely on future data collection or educated guesses. I don’t really have the space to enumerate the times the book said “we did this for simplicity”, “we assumed that” etc. but the cumulative effect is quite noticeable. Again, we need to be aware of the models which use real data in whatever aspects and “take forward” those inputs so they become modelling standards. This has to be a collective and not an individualistic enterprise.

References

Bithell, M. and Edmonds, B. (2020) The Systematic Comparison of Agent-Based Policy Models – It’s time we got our act together!. Review of Artificial Societies and Social Simulation, 11th May 2021. https://rofasss.org/2021/05/11/SystComp/

Chattoe-Brown, E. (2020) A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation. Review of Artificial Societies and Social Simulation, 12th June 2020. https://rofasss.org/2020/06/12/abm-validation-bib/

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation. 24(4). https://www.jasss.org/24/4/reviews/1.html

Chattoe-Brown, E., Gilbert, N., Robertson, D. A., & Watts, C. J. (2021). Reproduction as a Means of Evaluating Policy Models: A Case Study of a COVID-19 Simulation. medRxiv 2021.01.29.21250743; DOI: https://doi.org/10.1101/2021.01.29.21250743

Dignum, F. (2020) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8


Chattoe-Brown, E. (2021) Reply to Frank Dignum. Review of Artificial Societies and Social Simulation, 10th November 2021. https://rofasss.org/2021/11/10/reply-to-dignum/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”

By Frank Dignum

This is a reply to a review in JASSS (Chattoe-Brown 2021) of (Dignum 2021).

Before responding to some of the specific concerns of Edmund I would like to thank him for the thorough review. I am especially happy with his conclusion that the book is solid enough to make it a valuable contribution to scientific progress in modelling crises. That was the main aim of the book and it seems that is achieved. I want to reiterate what we already remarked in the book; we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises. Nor do we claim that our simulations were without limitations. But we do think it is an extensive foundation from which others can start, either picking up some bits and pieces, deviating from it in specific ways or extending it in specific ways.

The concerns that are expressed by Edmund are certainly valid. I agree with some of them, but will nuance some others. First of all, there is the concern that we seem to abandon the NetLogo implementation and move to Repast. This fact does not make the ABM itself any less valid! In itself it is also an important finding. It is not possible to scale such a complex model in NetLogo beyond around two thousand agents. This is not just a limitation of our particular implementation, but a more general limitation of the platform. It leads to the important challenge of getting more computer scientists involved in developing platforms for social simulations that both support the modelers adequately and provide efficient and scalable implementations.

It is completely true that the sheer size of the model and of its results makes it difficult to trace back the importance and validity of every factor for the results. We have tried our best to highlight the most important aspects every time. But this leaves questions as to whether we made the right selection of highlighted aspects. As an illustration of this, we spent two months justifying our results of the simulations of the effectiveness of the track and tracing apps. We basically concluded that we need much better integrated analysis tools in the simulation platform. NetLogo is geared towards creating one simulation scenario, running the simulation and analyzing the results based on a few parameters. This is no longer sufficient when we have a model with which we can create many scenarios and that has many parameters that influence a result. We used R to interpret the flood of data that was produced with every scenario. But R is not really the most user-friendly tool, and it is also not specifically meant for analyzing the data from social simulations.

Let me jump to the third concern of Edmund and link it to the analysis of the results as well. While we tried to justify the results of our simulation on the effectiveness of the track and tracing app, we compared our simulation with an epidemiologically based model. This is described in chapter 12 of the book. Here we encountered the difference in the assumed number of contacts per day a person has with other persons. One can take the results, as quoted by Edmund as well, of 8 or 13 from empirical work and use them in the model. However, the dispute is not about the number of contacts a person has per day, but about what counts as a contact! For the COVID-19 simulations standing next to a person in the queue in a supermarket for five minutes can count as a contact, while such a contact is not a meaningful contact in the cited literature. Thus, we see that what we take as empirically validated numbers might not at all be the right ones for our purpose. We have tried to justify all the values of parameters and outcomes in the context for which the simulations were created. We have also done quite some sensitivity analyses, which we did not all report on, just to keep the volume of the book to a reasonable size. Although we think we did a proper job in justifying all results, that does not mean that one cannot have different opinions on the value that some parameters should have. It would be very good to check the influence on the results of changes in these parameters. This would also advance scientific insight into the usefulness of complex models like the one we made!

I really think that an ABM crisis response should be institutional. That does not mean that one institution determines the best ABM, but rather that the ABM that is put forward by that institution is the result of a continuous debate among scientists working on ABMs for that type of crisis. For us, one of the more important outcomes of the ASSOCC project is that we really need much better tools to support the types of simulations that are needed for a crisis situation. However, it is very difficult to develop these tools as a single group. A lot of the effort needed is not publishable and thus not valued in an academic environment. I really think that the efforts that have been put into platforms such as NetLogo and Repast are laudable. They have been made possible by some generous grants and institutional support. We argue that this continuous support is also needed in order to be well equipped for the next crisis. But we do not argue that an institution would by definition have the last word on which is the best ABM. In an ideal case it would accumulate all academic efforts, as is done in the climate models, but even more restricted models would still be better than just having a thousand individuals all claiming to have a usable ABM while governments have to react quickly to a crisis.

The final concern of Edmund is about the empirical scale of our simulations. This is completely true! Given the scale and details of what we can incorporate, we can only simulate some phenomena and certainly not everything around the COVID-19 crisis. We tried to be clear about this limitation. We had discussions about the Unity interface concerning this as well. It is in principle not very difficult to show people walking in the street, taking a car or a bus, etc. However, we decided to show a more abstract representation just to make clear that our model is not a complete model of a small town functioning in all aspects. We have very carefully chosen the scenarios that we can realistically simulate and from which we can give some insight into reality. Maybe we should also have discussed more explicitly all the scenarios that we did not run, with the reasons why they would be difficult or unrealistic in our ABM. One never likes to discuss all the limitations of one’s labor, but it definitely can be very insightful. I have made up for this a little bit by submitting an article to a special issue on predictions with ABMs, in which I explain in more detail what the considerations should be when using a particular ABM to try to predict some state of affairs. Anyone interested in learning more about this can contact me.

To conclude this response to the review, I again express my gratitude for the good and thorough work done. The concerns that were raised are all very valuable to consider. What I tried to do in this response is to highlight that these concerns should be taken as a call to arms to put effort into social simulation platforms that give better support for creating simulations for a crisis.

References

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation. 24(4). https://www.jasss.org/24/4/reviews/1.html


Dignum, F. (2020) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Where Now For Experiments In Agent-Based Modelling? Report of a Round Table at SSC2021, held on 22 September 2021


By Dino Carpentras1, Edmund Chattoe-Brown2*, Bruce Edmonds3, Cesar García-Diaz4, Christian Kammler5, Anna Pagani6 and Nanda Wijermans7

*Corresponding author, 1Centre for Social Issues Research, University of Limerick, 2School of Media, Communication and Sociology, University of Leicester, 3Centre for Policy Modelling, Manchester Metropolitan University, 4Department of Business Administration, Pontificia Universidad Javeriana, 5Department of Computing Science, Umeå University, 6Laboratory on Human-Environment Relations in Urban Systems (HERUS), École Polytechnique Fédérale de Lausanne (EPFL), 7Stockholm Resilience Centre, Stockholm University.

Introduction

This round table was convened to advance and improve the use of experimental methods in Agent-Based Modelling, in the hope that both existing and potential users of the method would be able to identify steps towards this aim[i]. The session began with a presentation by Bruce Edmonds (http://cfpm.org/slides/experiments%20and%20ABM.pptx) whose main argument was that the traditional idea of experimentation (controlling extensively for the environment and manipulating variables) was too simplistic to add much to the understanding of the sort of complex systems modelled by ABMs and that we should therefore aim to enhance experiments (for example using richer experimental settings, richer measures of those settings and richer data – like discussions between participants as well as their behaviour). What follows is a summary of the main ideas discussed organised into themed sections.

What Experiments Are

Defining the field of experiments proved to be challenging on two counts. The first was that there are a number of labels for potentially relevant approaches (experiments themselves – for example, Boero et al. 2010, gaming – for example, Tykhonov et al. 2008, serious games – for example Taillandier et al. 2019, companion/participatory modelling – for example, Ramanath and Gilbert 2004 and web based gaming – for example, Basole et al. 2013) whose actual content overlap is unclear. Is it the case that a gaming approach is generally more in line with the argument proposed by Edmonds? How can we systematically distinguish the experimental content of a serious game approach from a gaming approach? This seems to be a problem in immature fields where the labels are invented first (often on the basis of a few rather divergent instances) and the methodology has to grow into them. It would be ludicrous if we couldn’t be sure whether a piece of research was survey based or interview based (and this would radically devalue the associated labels if it were so.)

The second challenge, which is also more general in Agent-Based Modelling, is that the same labels are used differently by different researchers. It is not productive to argue about which uses are correct but it is important that the concepts behind the different uses are clear so a common scheme of labelling might ultimately be agreed. So, for example, experiment can be used (and different round table participants had different perspectives on the uses they expected) to mean laboratory experiments (simplified settings with human subjects – again see, for example, Boero et al. 2010), experiments with ABMs (formal experimentation with a model that doesn’t necessarily have any empirical content – for example, Doran 1998) and natural experiments (choice of cases in the real world to, for example, test a theory – see Dinesen 2013).

One approach that may help with this diversity is to start developing possible dimensions of experimentation. One might be degree of control (all the way from very stripped down behavioural laboratory experiments to natural situations where the only control is to select the cases). Another might be data diversity: From pure analysis of ABMs (which need not involve data at all), through laboratory experiments that record only behaviour to ethnographic collection and analysis of diverse data in rich experiments (like companion modelling exercises.) But it is important for progress that the field develops robust concepts that allow meaningful distinctions and does not get distracted into pointless arguments about labelling. Furthermore, we must consider the possible scientific implications of experimentation carried out at different points in the dimension space: For example, what are the relative strengths and limitations of experiments that are more or less controlled or more or less data diverse? Is there a “sweet spot” where the benefit of experiments is greatest to Agent-Based Modelling? If so, what is it and why?

The Philosophy of Experiment

A further challenge is the different beliefs (often associated with different disciplines) about the philosophical underpinnings of experiment, such as what we might mean by a cause. In an economic experiment, for example, the objective may be to confirm a universal theory of decision making through displayed behaviour only. (It is decisions described by this theory which are presumed to cause the pattern of observed behaviour.) This will probably not allow the researcher to discover that their basic theory is wrong (people are adaptive not rational after all) or not universal (agents have diverse strategies), or that some respondents simply didn’t understand the experiment (deviations caused by these phenomena may be labelled noise relative to the theory being tested but in fact they are not.)

By contrast qualitative sociologists believe that subjective accounts (including accounts of participation in the experiment itself) can be made reliable and that they may offer direct accounts of certain kinds of cause: If I say I did something for a certain reason then it is at least possible that I actually did (and that the reason I did it is therefore its cause). It is no more likely that agreement will be reached on these matters in the context of experiments than it has been elsewhere. But Agent-Based Modelling should keep its reputation for open mindedness by seeing what happens when qualitative data is also collected and not just rejecting that approach out of hand as something that is “not done”. There is no need for Agent-Based Modelling blindly to follow the methodology of any one existing discipline in which experiments are conducted (and these disciplines often disagree vigorously on issues like payment and deception with no evidence on either side which should also make us cautious about their self-evident correctness.)

Finally, there is a further complication in understanding experiments using analogies with the physical sciences. In understanding the evolution of a river system, for example, one can control/intervene, one can base theories on testable micro mechanisms (like percolation) and one can observe. But there is no equivalent to asking the river what it intends (whether we can do this effectively in social science or not).[ii] It is not totally clear how different kinds of data collection like these might relate to each other in the social sciences, for example, data from subjective accounts, behavioural experiments (which may show different things from what respondents claim) and, for example, brain scans (which side step the social altogether.) This relationship between different kinds of data currently seems incompletely explored and conceptualised. (There is a tendency just to look at easy cases like surveys versus interviews.)

The Challenge of Experiments as Practical Research

This is an important area where the actual and potential users of experiments participating in the round table diverged. Potential users wanted clear guidance on the resources, skills and practices involved in doing experimental work (and see similar issues in the behavioural strategy literature, for example, Reypens and Levine 2018). At the most basic level, when does a researcher need to do an experiment (rather than a survey, interviews or observation), what are the resource requirements in terms of time, facilities and money (laboratory experiments are unusual in often needing specific funding to pay respondents rather than substituting the researcher working for free) what design decisions need to be made (paying subjects, online or offline, can subjects be deceived?), how should the data be analysed (how should an ABM be validated against experimental data?) and so on.[iii] (There are also pros and cons to specific bits of potentially supporting technology like Amazon Mechanical Turk, Qualtrics and Prolific, which have not yet been documented and systematically compared for the novice with a background in Agent-Based Modelling.) There is much discussion about these matters in the traditional literatures of social sciences that do experiments (see, for example, Kagel and Roth 1995, Levine and Parkinson 1994 and Zelditch 2014) but this has not been summarised and tuned specifically for the needs of Agent-Based Modellers (or published where they are likely to see it).

However, it should not be forgotten that not all research efforts need this integration within the same project, so thinking about the problems that really need it is critical. Nonetheless, triangulation is indeed necessary within research programmes. For instance, in subfields such as strategic management and organisational design, it is uncommon to see an ABM integrated with an experiment as part of the same project (though there are exceptions, such as Vuculescu 2017). Instead, ABMs are typically used to explore “what if” scenarios, build process theories and illuminate potential empirical studies. In this approach, knowledge is accumulated instead through the triangulation of different methodologies in different projects (see Burton and Obel 2018). Additionally, modelling and experimental efforts are usually led by different specialists – for example, there is a Theoretical Organisational Models Society whose focus is the development of standards for theoretical organisation science.

In a relatively new and small area, all we often have is some examples of good practice (or more contentiously bad practice) of which not everyone is even aware. A preliminary step is thus to see to what extent people know of good practice and are able to agree that it is good (and perhaps why it is good).

Finally, there was a slightly separate discussion about the perspectives of experimental participants themselves. It may be that a general problem with unreal activity is that you know it is unreal (which may lead to problems with ecological validity – Bornstein 1999.) On the other hand, building on the enrichment argument put forward by Edmonds (above), there is at least anecdotal observational evidence that richer and more realistic settings may cause people to get “caught up” and perhaps participate more as they would in reality. Nonetheless, there are practical steps we can take to learn more about these phenomena by augmenting experimental designs. For example we might conduct interviews (or even group discussions) before and after experiments. This could make the initial biases of participants explicit and allow them to self-evaluate retrospectively the extent to which they got engaged (or perhaps even over-engaged) during the game. The first such questionnaire could be available before attending the experiment, whilst another could be administered right after the game (and perhaps even a third a week later). In addition to practical design solutions, there are also relevant existing literatures that experimental researchers should probably draw on in this area, for example that on systemic design and the associated concept of worldviews. But it is fair to say that we do not yet fully understand the issues here but that they clearly matter to the value of experimental data for Agent-Based Modelling.[iv]

Design of Experiments

Something that came across strongly in the round table discussion as argued by existing users of experimental methods was the desirability of either designing experiments directly based on a specific ABM structure (rather than trying to use a stripped down – purely behavioural – experiment) or mixing real and simulated participants in richer experimental settings. In line with the enrichment argument put forward by Edmonds, nobody seemed to be using stripped down experiments to specify, calibrate or validate ABM elements piecemeal. In the examples provided by round table participants, experiments corresponding closely to the ABM (and mixing real and simulated participants) seemed particularly valuable in tackling subjects that existing theory had not yet really nailed down or where it was clear that very little of the data needed for a particular ABM was available. But there was no sense that there is a clearly defined set of research designs with associated purposes on which the potential user can draw. (The possible role of experiments in supporting policy was also mentioned but no conclusions were drawn.)

Extracting Rich Data from Experiments

Traditional experiments are time consuming to do, so they are frequently optimised to obtain the maximum power and discrimination between factors of interest. In such situations they will often limit their data collection to what is strictly necessary for testing their hypotheses. Furthermore, it seems to be a hangover from behaviourist psychology that one does not use self-reporting on the grounds that it might be biased or simply involve false reconstruction (rationalisation). From the point of view of building or assessing ABMs this approach involves a wasted opportunity. Due to the flexible nature of ABMs there is a need for as many empirical constraints upon modelling as possible. These constraints can come from theory, evidence or abstract principles (such as simplicity) but should not hinder the design of an ABM but rather act as a check on its outcomes. Game-like situations can provide rich data about what is happening, simultaneously capturing decisions on action, the position and state of players, global game outcomes/scores and what players say to each other (see, for example, Janssen et al. 2010, Lindahl et al. 2021). Often, in social science one might have a survey with one set of participants, interviews with others and longitudinal data from yet others – even if these, in fact, involve the same people, the data will usually not indicate this through consistent IDs. When collecting data from a game (and especially from online games) there is a possibility for collecting linked data with consistent IDs – including interviews – that allows for a whole new level of ABM development and checking.
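As a trivial sketch (in R, with invented column names and values) of why consistent participant IDs matter: behavioural game logs, chat records and post-game interviews can then be joined record by record rather than treated as three unrelated datasets.

  # Three data sources from the same (hypothetical) game session,
  # linked by a consistent participant id
  decisions  <- data.frame(id = c(1, 2, 3), final_score = c(140, 95, 120))
  chat       <- data.frame(id = c(1, 2, 3), messages_sent = c(12, 3, 7))
  interviews <- data.frame(id = c(1, 2, 3), reported_strategy = c("cooperate", "defect", "mixed"))

  # Because the ids are consistent, the sources can be merged per participant
  linked <- Reduce(function(x, y) merge(x, y, by = "id"), list(decisions, chat, interviews))
  print(linked)

Nothing in this sketch is specific to any particular experiment; the point is simply that linked data of this kind allows an ABM to be checked against several facets of the same participant’s behaviour at once.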

Standards and Institutional Bootstrapping

This is also a wider problem in newer methods like Agent-Based Modelling. How can we foster agreement about what we are doing (which has to build on clear concepts) and institutionalise those agreements into standards for a field (particularly when there is academic competition and pressure to publish).[v] If certain journals will not publish experiments (or experiments done in certain ways) what can we do about that? JASSS was started because it was so hard to publish ABMs. It has certainly made that easier but is there a cost through less publication in other journals? See, for example, Squazzoni and Casnici (2013). Would it have been better for the rigour and wider acceptance of Agent-Based Modelling if we had met the standards of other fields rather than setting our own? This strategy, harder in the short term, may also have promoted communication and collaboration better in the long term. If reviewing is arbitrary (reviewers do not seem to have a common view of what makes an experiment legitimate) then can that situation be improved (and in particular how do we best go about that with limited resources?) To some extent, normal individualised academic work may achieve progress here (researchers make proposals, dispute and refine them and their resulting quality ensures at least some individualised adoption by other researchers) but there is often an observable gap in performance: Even though most modellers will endorse the value of data for modelling in principle most models are still non-empirical in practice (Angus and Hassani-Mahmooei 2015, Figure 9). The jury is still out on the best way to improve reviewer consistency, use the power of peer review to impose better standards (and thus resolve a collective action problem under academic competition[vi]) and so on but recognising and trying to address these issues is clearly important to the health of experimental methods in Agent-Based Modelling. Since running experiments in association with ABMs is already challenging, adding the problem of arbitrary reviewer standards makes the publication process even harder. This discourages scientists from following this path and therefore retards this kind of research generally. Again, here, useful resources (like the Psychological Science Accelerator, which facilitates greater experimental rigour by various means) were suggested in discussion as raw material for our own improvements to experiments in Agent-Based Modelling.

Another issue with newer methods such as Agent-Based Modelling is the path to legitimation before the wider scientific community. The need to integrate ABMs with experiments does not necessarily imply that the legitimation of the former is achieved by the latter. Experimental economists, for instance, may still argue that (in the investigation of behaviour and its implications for policy issues), experiments and data analysis alone suffice. They may rightly ask: What is the additional usefulness of an ABM? If an ABM always needs to be justified by an experiment and then validated by a statistical model of its output, then the method might not be essential at all. Orthodox economists skip the Agent-Based Modelling part: They build behavioural experiments, gather (rich) data, run econometric models and make predictions, without the need (at least as they see it) to build any computational representation. Of course, the usefulness of models lies in the premise that they may tell us something that experiments alone cannot (see Knudsen et al. 2019). But progress needs to be made in understanding (and perhaps reconciling) these divergent positions. The social simulation community therefore needs to be clearer about exactly what ABMs can contribute beyond the limitations of an experiment, especially when addressing audiences of non-modellers (Ballard et al. 2021). Not only is a model valuable when rigorously validated against data, but also whenever it makes sense of the data in ways that traditional methods cannot.

Where Now?

Researchers usually have more enthusiasm than they have time. In order to make things happen in an academic context it is not enough to have good ideas, people need to sign up and run with them. There are many things that stand a reasonable chance of improving the profile and practice of experiments in Agent-Based Modelling (regular sessions at SSC, systematic reviews, practical guidelines and evaluated case studies, discussion groups, books or journal special issues, training and funding applications that build networks and teams) but to a great extent, what happens will be decided by those who make it happen. The organisers of this round table (Nanda Wijermans and Edmund Chattoe-Brown) are very keen to support and coordinate further activity and this summary of discussions is the first step to promote that. We hope to hear from you.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16, <http://jasss.soc.surrey.ac.uk/18/4/16.html>. doi:10.18564/jasss.2952

Ballard, Timothy, Palada, Hector, Griffin, Mark and Neal, Andrew (2021) ‘An Integrated Approach to Testing Dynamic, Multilevel Theory: Using Computational Models to Connect Theory, Model, and Data’, Organizational Research Methods, 24(2), April, pp. 251-284. doi: 10.1177/1094428119881209

Basole, Rahul C., Bodner, Douglas A. and Rouse, William B. (2013) ‘Healthcare Management Through Organizational Simulation’, Decision Support Systems, 55(2), May, pp. 552-563. doi:10.1016/j.dss.2012.10.012

Boero, Riccardo, Bravo, Giangiacomo, Castellani, Marco and Squazzoni, Flaminio (2010) ‘Why Bother with What Others Tell You? An Experimental Data-Driven Agent-Based Model’, Journal of Artificial Societies and Social Simulation, 13(3), June, article 6, <https://www.jasss.org/13/3/6.html>. doi:10.18564/jasss.1620

Bornstein, Brian H. (1999) ‘The Ecological Validity of Jury Simulations: Is the Jury Still Out?’ Law and Human Behavior, 23(1), February, pp. 75-91. doi:10.1023/A:1022326807441

Burton, Richard M. and Obel, Børge (2018) ‘The Science of Organizational Design: Fit Between Structure and Coordination’, Journal of Organization Design, 7(1), December, article 5. doi:10.1186/s41469-018-0029-2

Derbyshire, James (2020) ‘Answers to Questions on Uncertainty in Geography: Old Lessons and New Scenario Tools’, Environment and Planning A: Economy and Space, 52(4), June, pp. 710-727. doi:10.1177/0308518X19877885

Dinesen, Peter Thisted (2013) ‘Where You Come From or Where You Live? Examining the Cultural and Institutional Explanation of Generalized Trust Using Migration as a Natural Experiment’, European Sociological Review, 29(1), February, pp. 114-128. doi:10.1093/esr/jcr044

Doran, Jim (1998) ‘Simulating Collective Misbelief’, Journal of Artificial Societies and Social Simulation, 1(1), January, article 1, <https://www.jasss.org/1/1/3.html>.

Janssen, Marco A., Holahan, Robert, Lee, Allen and Ostrom, Elinor (2010) ‘Lab Experiments for the Study of Social-Ecological Systems’, Science, 328(5978), 30 April, pp. 613-617. doi:10.1126/science.1183532

Kagel, John H. and Roth, Alvin E. (eds.) (1995) The Handbook of Experimental Economics (Princeton, NJ: Princeton University Press).

Knudsen, Thorbjørn, Levinthal, Daniel A. and Puranam, Phanish (2019) ‘Editorial: A Model is a Model’, Strategy Science, 4(1), March, pp. 1-3. doi:10.1287/stsc.2019.0077

Levine, Gustav and Parkinson, Stanley (1994) Experimental Methods in Psychology (Hillsdale, NJ: Lawrence Erlbaum Associates).

Lindahl, Therese, Janssen, Marco A. and Schill, Caroline (2021) ‘Controlled Behavioural Experiments’, in Biggs, Reinette, de Vos, Alta, Preiser, Rika, Clements, Hayley, Maciejewski, Kristine and Schlüter, Maja (eds.) The Routledge Handbook of Research Methods for Social-Ecological Systems (London: Routledge), pp. 295-306. doi:10.4324/9781003021339-25

Ramanath, Ana Maria and Gilbert, Nigel (2004) ‘The Design of Participatory Agent-Based Social Simulations’, Journal of Artificial Societies and Social Simulation, 7(4), October, article 1, <https://www.jasss.org/7/4/1.html>.

Reypens, Charlotte and Levine, Sheen S. (2018) ‘Behavior in Behavioral Strategy: Capturing, Measuring, Analyzing’, in Behavioral Strategy in Perspective, Advances in Strategic Management Volume 39 (Bingley: Emerald Publishing), pp. 221-246. doi:10.1108/S0742-332220180000039016

Squazzoni, Flaminio and Casnici, Niccolò (2013) ‘Is Social Simulation a Social Science Outstation? A Bibliometric Analysis of the Impact of JASSS’, Journal of Artificial Societies and Social Simulation, 16(1), January, article 10, <http://jasss.soc.surrey.ac.uk/16/1/10.html>. doi:10.18564/jasss.2192

Taillandier, Patrick, Grignard, Arnaud, Marilleau, Nicolas, Philippon, Damien, Huynh, Quang-Nghi, Gaudou, Benoit and Drogoul, Alexis (2019) ‘Participatory Modeling and Simulation with the GAMA Platform’, Journal of Artificial Societies and Social Simulation, 22(2), March, article 3, <https://www.jasss.org/22/2/3.html>. doi:10.18564/jasss.3964

Tykhonov, Dmytro, Jonker, Catholijn, Meijer, Sebastiaan and Verwaart, Tim (2008) ‘Agent-Based Simulation of the Trust and Tracing Game for Supply Chains and Networks’, Journal of Artificial Societies and Social Simulation, 11(3), June, article 1, <https://www.jasss.org/11/3/1.html>.

Vuculescu, Oana (2017) ‘Searching Far Away from the Lamp-Post: An Agent-Based Model’, Strategic Organization, 15(2), May, pp. 242-263. doi:10.1177/1476127016669869

Zelditch, Morris Junior (2007) ‘Laboratory Experiments in Sociology’, in Webster, Murray Junior and Sell, Jane (eds.) Laboratory Experiments in the Social Sciences (New York, NY: Elsevier), pp. 183-197.


Notes

[i] This event was organised (and the resulting article was written) as part of “Towards Realistic Computational Models of Social Influence Dynamics” a project funded through ESRC (ES/S015159/1) by ORA Round 5 and involving Bruce Edmonds (PI) and Edmund Chattoe-Brown (CoI). More about SSC2021 (Social Simulation Conference 2021) can be found at https://ssc2021.uek.krakow.pl

[ii] This issue is actually very challenging for social science more generally. When considering interventions in social systems, knowing and acting might be so deeply intertwined (Derbyshire 2020) that interventions may modify the same behaviours that an experiment is aiming to understand.

[iii] In addition, experiments often require institutional ethics approval (but so do interviews, gaming activities and other sorts of empirical research, of course), something with which non-empirical Agent-Based Modellers may have little experience.

[iv] Chattoe-Brown had interesting personal experience of this. He took part in a simple team gaming exercise about running a computer firm. The team quickly worked out that the game assumed an infinite return to advertising (so you could have a computer magazine consisting entirely of adverts) independent of the actual quality of the product. They thus simultaneously performed very well in the game from the perspective of an external observer but remained deeply sceptical that this was a good lesson to impart about running an actual firm. But since the coordinators never asked the team members for their subjective view, they may have assumed that the simulation was also a success in its didactic mission.

[v] We should also not assume it is best to set our own standards from scratch. It may be valuable to attempt integration with existing approaches, like qualitative validity (https://conjointly.com/kb/qualitative-validity/) particularly when these are already attempting to be multidisciplinary and/or to bridge the gap between, for example, qualitative and quantitative data.

[vi] Although journals also face such a collective action problem at a different level. If they are too exacting relative to their status and existing practice, researchers will simply publish elsewhere.


Dino Carpentras, Edmund Chattoe-Brown, Bruce Edmonds, Cesar García-Diaz, Christian Kammler, Anna Pagani and Nanda Wijermans (2021) Where Now For Experiments In Agent-Based Modelling? Report of a Round Table as Part of SSC2021. Review of Artificial Societies and Social Simulation, 2nd November 2021. https://rofasss.org/2021/11/02/round-table-ssc2021-experiments/


Does It Take Two (And A Creaky Search Engine) To Make An Outstation? Hunting Highly Cited Opinion Dynamics Articles in the Journal of Artificial Societies and Social Simulation (JASSS)

By Edmund Chattoe-Brown

In an important article, Squazzoni and Casnici (2013) raise the issue of how social simulation (as manifested in the Journal of Artificial Societies and Social Simulation – hereafter JASSS – the journal that has probably published the most of this kind of research for longest) cites and is cited in the wider scientific community. They discuss this in terms of social simulation being a potential “outstation” of social science (but better integrated into physical science and computing). This short note considers the same argument in reverse. As an important site of social simulation research, is it the case that JASSS is effectively representing research done more widely across the sciences?

The method used to investigate this was extremely simple (and could thus easily be extended and replicated). On 28.08.21, using the search term “opinion dynamics” in “all fields”, all sources from Web of Science (www.webofknowledge.com, hereafter WOS) that were flagged as “highly cited” were selected as a sample. For each article (only articles turned out to be highly cited), the title was searched in JASSS and the number of hits recorded. Common sense was applied in this search process to maximise the chances of success. So if a title had two sub-clauses, these were searched jointly as quotations (to avoid the “hits” being very sensitive to the reproduction of punctuation linking clauses.) In addition, the title of the journal in which the article appeared was searched to give a wider sense of how well the relevant journal is known in JASSS.

However, now we come to the issue of the creaky search engine (as well as other limitations of quick and dirty searches). Obviously searching for the exact title will not find variants of that title with spelling mistakes or attempts to standardise spelling (i. e. changing behavior to behaviour). Further, it turns out that the Google search engine (which JASSS uses) does not promise the consistency that often seems to be assumed for it (http://jdebp.uk/FGA/google-result-counts-are-a-meaningless-metric.html). For example, when I searched for “SIAM Review” I mostly got 77 hits, rather often 37 hits and very rarely 0 or 1 hits. (PDFs are available for three of these outcomes from the author but the fourth could not be reproduced to be recorded in the time available.) This result occurred when another search took place seconds after the first so it is not, for example, a result of substantive changes to the content of JASSS. To deal with this problem I tried to confirm the presence of a particular article by searching jointly for all its co-authors. Mostly this approach gave a similar result (but where it does not it is noted in the table below). In addition, wherever there were a relatively large number of hits for a specific search, some of these were usually not the ones intended. (For example no hit on the term “global challenges” actually turned out to be for the journal Global Challenges.) In addition, JASSS often gives an oddly inconsistent number of hits for a specific article: It may appear as PDF and HTML as well as in multiple indices or may occur just once. (This discouraged attempts to go from hits to the specific number of unique articles citing these WOS sources. As it turns out, this additional detail would have added little to the headline result.)

The term “opinion dynamics” was chosen somewhat arbitrarily (for reasons connected with other research) and it is not claimed that this term is even close to a definitive way of capturing any models connected with opinion/attitude change. Nonetheless, it is clear that the number of hits and the type of articles reported on WOS (which is curated and quality controlled) are sufficient (and sufficiently relevant) for this to be a serviceable search term to identify a solid field of research in JASSS (and elsewhere). I shall return to this issue.

The results, shown in the table below, are striking on several counts. (All these sources are fully cited in the references at the end of this article.) Most noticeably, JASSS is barely citing a significant number of articles that are very widely cited elsewhere. Because these are highly cited in WOS this cannot be because they are too new or too inaccessible. The second point is the huge discrepancy in citation for the one article on the WOS list that appears in JASSS itself (Flache et al. 2017). Thirdly, although some of these articles appear in journals that JASSS otherwise does not cite (like Global Challenges and Dynamic Games and Applications) others appear in journals that are known to JASSS and generally cited (like SIAM Review).

Reference | WOS Citations | Article Title Hits in JASSS | Journal Title Hits in JASSS
Acemoglu and Ozdaglar (2011) | 301 | 0 (1 based on joint authors) | 2
Motsch and Tadmor (2014) | 214 | 0 | 77
Van Der Linden et al. (2017) | 191 | 0 | 6 (but none for the journal)
Acemoğlu et al. (2013) | 186 | 1 | 2 (but 1 article)
Proskurnikov et al. (2016) | 165 | 0 | 9
Dong et al. (2017) | 147 | 0 | 48 (but rather few for the journal)
Jia et al. (2015) | 118 | 0 | 77
Dong et al. (2018) | 117 | 0 (1 based on joint authors) | 48 (but rather few for the journal)
Flache et al. (2017) | 86 | 58 (17 based on joint authors) | N/A
Urena et al. (2019) | 72 | 0 | 6
Bu et al. (2020) | 56 | 0 | 5
Zhang et al. (2020) | 55 | 0 | 33 (but only some of these are for the journal)
Xiong et al. (2020) | 28 | 0 | 1
Carrillo et al. (2020) | 13 | 0 | 0

One possible interpretation of this result is simply that none of the most highly cited articles in WOS featuring the term “opinion dynamics” happen to be more than incidentally relevant to the scientific interests of JASSS. On consideration, however, this seems a rather improbable coincidence. Firstly, these articles were chosen exactly because they are highly cited so we would have to explain how they could be perceived as so useful generally but specifically not in JASSS. Secondly, the same term (“opinion dynamics”) consistently generates 254 hits in JASSS, suggesting that the problem isn’t a lack of overlap in terminology or research interests.

This situation, however, creates a problem for more conclusive explanation. The state of affairs here is not that these articles are being cited and then rejected on scientific grounds given the interests of JASSS (thus providing arguments I could examine). It is that they are barely being cited at all. Unfortunately, it is almost impossible to establish why something is not happening. Perhaps JASSS authors are not aware of these articles to begin with. Perhaps they are aware but do not see the wider scientific value of critiquing them or attempting to engage with their irrelevance in print.

But, given that the problem is non citation, my concern can be made more persuasive (perhaps as persuasive as it can be given problems of convincingly explaining an absence) by investigating the articles themselves. (My thanks are due to Bruce Edmonds for encouraging me to strengthen the argument in this way.) There are definitely some recurring patterns in this sample. Firstly, a significant proportion of the articles are highly mathematical and, therefore (as Agent-Based Modelling often criticises) rely on extreme simplifying assumptions and toy examples. Even here, however, it is not self-evident that such articles should not be cited in JASSS merely because they are mathematical. JASSS has itself published relatively mathematical articles and, if an article contains a mathematical model that could be “agentised” (thus relaxing its extreme assumptions) which is no less empirical than similar models in JASSS (or has particularly interesting behaviours) then it is hard to see why this should not be discussed by at least a few JASSS authors. A clear example of this is provided by Acemoğlu et al. (2013) which argues that existing opinion dynamics models fail to produce the ongoing fluctuations of opinion observed in real data (see, for example, Figures 1-3 in Chattoe-Brown 2014 which also raises concerns about the face validity of popular social simulations of opinion dynamics). In fact, the assumptions of this model could easily be questioned (and real data involves turning points and not just fluctuations) but the point is that JASSS articles are not citing it and rejecting it based on argument but simply not citing it. A model capable of generating ongoing opinion fluctuations (however imperfect) is simply too important to the current state of opinion dynamics research in social simulation not to be considered at all. Another (though less conclusive) example is Motsch and Tadmor (2014) which presents a model suggesting (counter intuitively) that interaction based on heterophily can better achieve consensus than interaction based on homophily. Of course one can reject such an assumption on empirical grounds but JASSS is not currently doing that (and in fact the term heterophily is unknown in the journal except for the title of a cited article.)

Secondly, there are also a number of articles which, while not providing important results, seem no less plausible or novel than typical OD articles that are published in JASSS. For example, Jia et al. (2015) add self-appraisal and social power to a standard OD model. Between debates, agents amend the efficacy they believe that they and others have in terms of swaying the outcome and take that into account going forward. Proskurnikov et al. (2016) present the results of a model in which agents can have negative ties with each other (as well as the more usual positive ones) and thus consider the coevolution of positive/negative sentiments and influence (describing what they call hostile camps, i.e. groups with positive ties to each other and negative ties to other groups). This is distinct from the common repulsive effect in OD models where agents do not like the opinions of others (rather than disliking the others themselves.)

Finally, both Dong et al. (2017) and Zhang et al. (2020) reach for the idea (through modelling) that experts and leaders in OD models may not just be randomly scattered through the population as types but may exist because of formal organisations or accidents of social structure: This particular agent is either deliberately appointed to have more influence or happens to have it because of their network position.

On a completely different tack, two articles (Dong et al. 2018 and Acemoglu and Ozdaglar 2011) are literature reviews or syntheses on relevant topics and it is hard to see how such broad ranging articles could have so little value to OD research in JASSS.

It will be admitted that some of the articles in the sample are hard to evaluate with certainty. Mathematical approaches often seem to be more interested in generating mathematics than in justifying its likely value. This is particularly problematic when combined with a suggestion that the product of the research may be instrumental algorithms (designed to get things done) rather than descriptive ones (designed to understand social behaviour). An example of this is several articles which talk about achieving consensus without really explaining whether this is a technical goal (for example in a neural network) or a social phenomenon and, if the latter, whether this places constraints on what is legitimate: You can reach consensus by debate but not by shooting dissenters!

But as well as specific ideas in specific models, this sample of articles also suggests a different emphasis from that currently found within JASSS OD research. For example, there is much more interest in deliberately achieving consensus (and in the corresponding hazards of manipulation or misinformation impeding it). Reading these articles collectively gives a sense that JASSS OD models are very much liberal democratic: Agents honestly express their views (or at most are somewhat reticent in order to protect themselves). They decently expect the will of the people to prevail. They do not lie strategically to sway the influential, spread rumours to discredit the opinions of opponents or flood the debate with bots. Again, this darker vision is no more right a priori than the liberal democratic one, but JASSS should at least be engaging with articles modelling (or providing data on – see Van Der Linden et al. 2017) such phenomena in an OD context. (Although misinformation is mentioned in some OD articles in JASSS it does not seem to be modelled. There also seems to be another surprising glitch in the search engine, which considers the term “fake news” to be a hit for misinformation!) This also puts a new slant on an ongoing challenge in OD research: identifying a plausible relationship between fact and opinion. Is misinformation a different field of research (on the grounds that opinions can never be factually wrong) or is it possible for the misinformed to develop mis-opinions (those that they would change if what they knew changed)? Are Brexiteers, for example, really completely indifferent to the economic consequences as they reveal themselves, or did they simply hold mistaken beliefs about how high those costs might turn out to be, beliefs which will cause them to regret their decision at some later stage?

Thus, to sum up, while some of the articles in the sample can be dismissed as either irrelevant to JASSS or as having a potential relevance that is hard to establish, the majority cannot reasonably be regarded in this way (and a few are clearly important to the existing state of OD research). While we cannot explain why these articles are not in fact cited, we can call into question one possible (Panglossian) explanation for the observed pattern: that they are not cited because they have nothing to contribute.

Apart from the striking nature of the result and its obvious implication (if social simulators want to be cited more widely, they need to make sure they are also citing the work of others appropriately), this study has two wider (related) implications for practice.

Firstly, systematic literature reviewing (see, for example, Hansen et al. 2019 – not published in JASSS) needs to be better enforced in social simulation: “Systematic literature review” gets just 7 hits in JASSS. It is not enough to cite just what you happen to have read or models that resemble your own; you need to be citing what the community might otherwise not be aware of or what challenges your own model assumptions. (Although, in my judgement, key assumptions of Acemoğlu et al. 2013 are implausible, I don’t think that I could justify non-subjectively that they are any more implausible than those of the Zaller-Deffuant model – Malarz et al. 2011 – given the huge awareness discrepancy which the two models manifest in social simulation.)

Secondly, we need to rethink the nature of literature reviewing as part of progressive research. I have used “opinion dynamics” here not because it is the perfect term to identify all models of opinion and attitude change but because it throws up enough hits to show that the term is widely used in social simulation. Because I have clearly stated my search term, others can critique it and extend my analysis using other relevant terms like “opinion change” or “consensus formation”. A literature review that is just a bunch of arbitrary stuff cannot be critiqued or improved systematically (as opposed to being nit-picked for specific omissions, as reviewers often do – and even then the critique cannot tell what should have been included if there are no clearly stated search criteria). It should not be possible for JASSS (and the social simulation community it represents) simply to disregard an article as potentially important in its implications for OD as Acemoğlu et al. (2013) appears to be. Even if this article turned out to be completely wrong-headed, we need to have enough awareness of it to be able to say why before setting it aside. (Interestingly, the one citation it does receive in JASSS can be summarised as “there are some other models broadly like this” with no detailed discussion at all – and thus no clear statement of how the model presented in the citing article adds to previous models – but uninformative citation is a separate problem.)

Acknowledgements

This article is part of “Towards Realistic Computational Models of Social Influence Dynamics”, a project funded through the ESRC (ES/S015159/1) by ORA Round 5.

References

Acemoğlu, Daron and Ozdaglar, Asuman (2011) ‘Opinion Dynamics and Learning in Social Networks’, Dynamic Games and Applications, 1(1), March, pp. 3-49. doi:10.1007/s13235-010-0004-1

Acemoğlu, Daron, Como, Giacomo, Fagnani, Fabio and Ozdaglar, Asuman (2013) ‘Opinion Fluctuations and Disagreement in Social Networks’, Mathematics of Operations Research, 38(1), February, pp. 1-27. doi:10.1287/moor.1120.0570

Bu, Zhan, Li, Hui-Jia, Zhang, Chengcui, Cao, Jie, Li, Aihua and Shi, Yong (2020) ‘Graph K-Means Based on Leader Identification, Dynamic Game, and Opinion Dynamics’, IEEE Transactions on Knowledge and Data Engineering, 32(7), July, pp. 1348-1361. doi:10.1109/TKDE.2019.2903712

Carrillo, J. A., Gvalani, R. S., Pavliotis, G. A. and Schlichting, A. (2020) ‘Long-Time Behaviour and Phase Transitions for the Mckean–Vlasov Equation on the Torus’, Archive for Rational Mechanics and Analysis, 235(1), January, pp. 635-690. doi:10.1007/s00205-019-01430-4

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1), February, article 16, <http://www.socresonline.org.uk/19/1/16.html>. doi:10.5153/sro.3315

Dong, Yucheng, Ding, Zhaogang, Martínez, Luis and Herrera, Francisco (2017) ‘Managing Consensus Based on Leadership in Opinion Dynamics’, Information Sciences, 397-398, August, pp. 187-205. doi:10.1016/j.ins.2017.02.052

Dong, Yucheng, Zhan, Min, Kou, Gang, Ding, Zhaogang and Liang, Haiming (2018) ‘A Survey on the Fusion Process in Opinion Dynamics’, Information Fusion, 43, September, pp. 57-65. doi:10.1016/j.inffus.2017.11.009

Flache, Andreas, Mäs, Michael, Feliciani, Thomas, Chattoe-Brown, Edmund, Deffuant, Guillaume, Huet, Sylvie and Lorenz, Jan (2017) ‘Models of Social Influence: Towards the Next Frontiers’, Journal of Artificial Societies and Social Simulation, 20(4), October, article 2, <http://jasss.soc.surrey.ac.uk/20/4/2.html>. doi:10.18564/jasss.3521

Hansen, Paula, Liu, Xin and Morrison, Gregory M. (2019) ‘Agent-Based Modelling and Socio-Technical Energy Transitions: A Systematic Literature Review’, Energy Research and Social Science, 49, March, pp. 41-52. doi:10.1016/j.erss.2018.10.021

Jia, Peng, MirTabatabaei, Anahita, Friedkin, Noah E. and Bullo, Francesco (2015) ‘Opinion Dynamics and the Evolution of Social Power in Influence Networks’, SIAM Review, 57(3), pp. 367-397. doi:10.1137/130913250

Malarz, Krzysztof, Gronek, Piotr and Kulakowski, Krzysztof (2011) ‘Zaller-Deffuant Model of Mass Opinion’, Journal of Artificial Societies and Social Simulation, 14(1), 2, <https://www.jasss.org/14/1/2.html>. doi:10.18564/jasss.1719

Motsch, Sebastien and Tadmor, Eitan (2014) ‘Heterophilious Dynamics Enhances Consensus’, SIAM Review, 56(4), pp. 577-621. doi:10.1137/120901866

Proskurnikov, Anton V., Matveev, Alexey S. and Cao, Ming (2016) ‘Opinion Dynamics in Social Networks With Hostile Camps: Consensus vs. Polarization’, IEEE Transactions on Automatic Control, 61(6), June, pp. 1524-1536. doi:10.1109/TAC.2015.2471655

Squazzoni, Flaminio and Casnici, Niccolò (2013) ‘Is Social Simulation a Social Science Outstation? A Bibliometric Analysis of the Impact of JASSS’, Journal of Artificial Societies and Social Simulation, 16(1), 10, <http://jasss.soc.surrey.ac.uk/16/1/10.html>. doi:10.18564/jasss.2192

Ureña, Raquel, Chiclana, Francisco, Melançon, Guy and Herrera-Viedma, Enrique (2019) ‘A Social Network Based Approach for Consensus Achievement in Multiperson Decision Making’, Information Fusion, 47, May, pp. 72-87. doi:10.1016/j.inffus.2018.07.006

Van Der Linden, Sander, Leiserowitz, Anthony, Rosenthal, Seth and Maibach, Edward (2017) ‘Inoculating the Public against Misinformation about Climate Change’, Global Challenges, 1(2), 27 February, article 1600008. doi:10.1002/gch2.201600008

Xiong, Fei, Wang, Ximeng, Pan, Shirui, Yang, Hong, Wang, Haishuai and Zhang, Chengqi (2020) ‘Social Recommendation With Evolutionary Opinion Dynamics’, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50(10), October, pp. 3804-3816. doi:10.1109/TSMC.2018.2854000

Zhang, Zhen, Gao, Yuan and Li, Zhuolin (2020) ‘Consensus Reaching for Social Network Group Decision Making by Considering Leadership and Bounded Confidence’, Knowledge-Based Systems, 204, 27 September, article 106240. doi:10.1016/j.knosys.2020.106240


Chattoe-Brown, E. (2021) Does It Take Two (And A Creaky Search Engine) To Make An Outstation? Hunting Highly Cited Opinion Dynamics Articles in the Journal of Artificial Societies and Social Simulation (JASSS). Review of Artificial Societies and Social Simulation, 19th August 2021. https://rofasss.org/2021/08/19/outstation/


 

The Systematic Comparison of Agent-Based Policy Models – It’s time we got our act together!

By Mike Bithell and Bruce Edmonds

Model Intercomparison

The recent Covid crisis has led to a surge of new model development and a renewed interest in the use of models as policy tools. While this is in some senses welcome, the sudden appearance of many new models presents a problem in terms of their assessment, the appropriateness of their application and the reconciliation of any differences in outcome. Even if they appear similar, their underlying assumptions may differ, their initial data might not be the same, policy options may be applied in different ways, stochastic effects explored to a varying extent, and model outputs presented in any number of different forms. As a result, it can be unclear which aspects of the variation in output between models result from mechanistic, parameter or data differences. Any comparison between models is made tricky by differences in experimental design and selection of output measures.

If we wish to do better, we suggest that a more formal approach to making comparisons between models would be helpful. However, it appears that this is not commonly undertaken in most fields in a systematic and persistent way, with the exception of climate change and closely related fields such as pollution transport or economic impact modelling (although efforts are underway to extend such systematic comparison to ecosystem models – Wei et al., 2014; Tittensor et al., 2018). Examining the way in which this is done for climate models may therefore prove instructive.

Model Intercomparison Projects (MIP) in the Climate Community

Formal intercomparison of atmospheric models goes back at least to 1989 (Gates et al., 1999) with the first Atmospheric Model Intercomparison Project (AMIP), initiated by the World Climate Research Programme. By 1999 this had contributions from all significant atmospheric modelling groups, providing standardised time-series of over 30 model variables for one particular historical decade of simulation, with a standard experimental setup. Comparisons of model mean values with available data helped to reveal overall model strengths and weaknesses: no single model was best at simulating all aspects of the atmosphere, with accuracy varying greatly between simulations. The model outputs also formed a reference base for further inter-comparison experiments, including targets for model improvement and reduction of systematic errors, as well as a starting point for improved experimental design, software and data management standards, and protocols for communication and model intercomparison. This led to AMIP II and, subsequently, to the series of Coupled Model Intercomparison Projects (CMIP), beginning with CMIP I in 1996. The latest iteration (CMIP6) is a collection of 23 separate model intercomparison experiments covering atmosphere, ocean, land surface, geo-engineering and the paleoclimate. This collection is aimed at the upcoming 2021 IPCC process (AR6). Participating projects go through an endorsement process for inclusion (a process agreed with the modelling groups), based on 10 criteria designed to ensure some degree of coherence between the various models; a further 18 MIPs are also listed as currently active (https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip6). Groups contribute to a central set of common experiments covering the period 1850 to the near-present. An overview of the whole process can be found in Eyring et al. (2016).

The current structure includes a set of three overarching questions covering the dynamics of the earth system, systematic model biases and the understanding of possible future change under uncertainty. Individual MIPs may build on this to address one or more of a set of 7 “grand science challenges” associated with the climate. Modelling groups agree to provide outputs in a standard form, obtained from a specified set of experiments under the same design, and to provide standardised documentation to go with their models. Originally (up to CMIP5), outputs were then added to a central public repository for further analysis; however, the output grew so large under CMIP6 that the data is now held in a distributed set of repositories maintained by separate groups.

Other Examples

Two further, more recent, examples of collective model development may also be helpful to consider.

Firstly, an informal network collating models from more than 50 research groups has already emerged as a result of the COVID crisis – the Covid Forecast Hub (https://covid19forecasthub.org). This is run by a small number of research groups collaborating with the US Centers for Disease Control and is strongly focussed on the epidemiology. Participants are encouraged to submit weekly forecasts, and these are integrated into a data repository and can be visualised on the website – viewers can look at forward projections, along with associated confidence intervals and model evaluation scores, including those for an ensemble of all models. The focus on forecasts in this case arises out of the strong policy drivers for the current crisis, but the main point is that it is possible to immediately view measures of model performance and to compare the different model types: one clear message that rapidly becomes apparent is that many of the forward projections have 95% (and at times, even 50%) confidence intervals for incident deaths that more than span the full range of the past historic data. The benefit of comparing many different models in this case is apparent, as many of the historic single-model projections diverge strongly from the data (and the models most in error are not consistently the same ones over time), although the ensemble mean tends to be better.
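
The kind of check described above is easy to picture with a small, purely hypothetical example: given several models' interval forecasts for the same target, one can form a simple ensemble and ask whether the observation falls inside each interval and whether that interval is wider than the entire historic range. All names and numbers below are invented for illustration.

```python
import statistics

# Hypothetical numbers, for illustration only -- the point is the kind of check
# the Covid Forecast Hub makes possible: put several models' interval forecasts
# for the same target side by side, form a simple ensemble, and ask whether the
# observed value falls inside each model's interval and how wide that interval
# is relative to the range of the historic data.

forecasts = {                       # (lower 95%, median, upper 95%) incident deaths
    "model_A": (200, 900, 2600),
    "model_B": (450, 700, 1100),
    "model_C": (50, 1500, 5200),    # an interval wider than the whole past range
}
observed = 820
historic_range = (300, 1400)        # min and max of the past observed series

ensemble_median = statistics.median(f[1] for f in forecasts.values())
print("ensemble median forecast:", ensemble_median)

for name, (lo, mid, hi) in forecasts.items():
    covers = lo <= observed <= hi
    too_wide = (hi - lo) > (historic_range[1] - historic_range[0])
    print(f"{name}: covers observation={covers}, "
          f"wider than full historic range={too_wide}")
```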

As a second example, one could consider the Psychological Science Accelerator (PSA: Moshontz et al. 2018, https://psysciacc.org/). This is a collaborative network set up with the aim of addressing the “replication crisis” in psychology: many previously published results in psychology have proved problematic to replicate as a result of small or non-representative sampling, or the use of experimental designs that do not generalise well or have not been used consistently either within or across studies. The PSA seeks to ensure the accumulation of reliable and generalisable evidence in psychological science, based on principles of inclusion, decentralisation, openness, transparency and rigour. The existence of this network has, for example, enabled the reinvestigation of previous experiments but with much larger and less nationally biased samples (e.g. Jones et al. 2021).

The Benefits of the Intercomparison Exercises and Collaborative Model Building

More specifically, long-term intercomparison projects help to do the following.

  • Build on past effort. Rather than modellers re-inventing the wheel (or building a new framework) with each new model project, libraries of well-tested and documented models, with data archives, including code and experimental design, would allow researchers to more efficiently work on new problems, building on previous coding effort
  • Aid replication. Focussed long term intercomparison projects centred on model results with consistent standardised data formats would allow new versions of code to be quickly tested against historical archives to check whether expected results could be recovered and where differences might arise, particularly if different modelling languages were being used
  • Help to formalize. While informal code archives can help to illustrate the methods or theoretical foundations of a model, intercomparison projects help to understand which kinds of formal model might be good for particular applications, and which can be expected to produce helpful results for given desired output measures
  • Build credibility. A continuously updated set of model implementations and assessment of their areas of competence and lack thereof (as compared with available datasets) would help to demonstrate the usefulness (or otherwise) of ABM as a way to represent social systems
  • Influence Policy (where appropriate). Formal international policy organisations such as the IPCC or the more recently formed IPBES are effective partly through an underpinning of well tested and consistently updated models. As yet it is difficult to see whether such a body would be appropriate or effective for social systems, as we lack the background of demonstrable accumulated and well tested model results.

Lessons for ABM?

What might we be able to learn from the above, if we attempted to use a similar process to compare ABM policy models?

In the first place, the projects started small and grew over time: it would not be necessary, for example, to cover all possible ABM applications at the outset. On the other hand, the latest CMIP iterations include a wide range of different types of model covering many different aspects of the earth system, so that the breadth of possible model types need not be seen as a barrier.

Secondly, the climate inter-comparison project has been persistent for some 30 years – over this time many models have come and gone, but the history of inter-comparisons allows for an overview of how well these models have performed over time. Data from the original AMIP I models is still available on request, supporting assessments of long-term model improvement.

Thirdly, although climate models are complex – implementing a variety of different mechanisms in different ways – they can still be compared by use of standardised outputs, and at least some (although not necessarily all) have been capable of direct comparison with empirical data.

Finally, an agreed experimental design and a public archive for documentation and output that are stable over time are needed. These should be established through collective agreement among the modelling groups involved, so as to ensure long-term buy-in from the community as a whole and a consistent basis for long-term model development that builds on past experience.

The need for aligning or reproducing ABMs has long been recognised within the community (Axtell et al. 1996; Edmonds & Hales 2003), but this has mostly been done on a one-to-one basis to verify the specification of a model against its implementation, although Hales et al. (2003) discuss a range of possibilities. This is far from a situation in which many different models of basically the same phenomena are systematically compared – that would be a larger-scale collaboration lasting over a longer time span.
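
In the spirit of those alignment exercises, here is a minimal sketch of how such a check might be scripted: run two supposedly equivalent implementations many times and test whether the distributions of a chosen output measure differ. The two "implementations" below are toy stand-ins, and the Kolmogorov–Smirnov test is just one reasonable choice of comparison.

```python
import random
from scipy.stats import ks_2samp

# A minimal sketch of one way an alignment check might be scripted: run two
# supposedly equivalent implementations of the same model many times and test
# whether the distributions of a chosen output measure are distinguishable.
# The two "implementations" below are toy stand-ins (the second contains a
# small deliberate discrepancy); in practice each would wrap a real model run.

def implementation_a(seed):
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100))        # some scalar output

def implementation_b(seed):
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100)) + 1.0  # subtle mismatch

runs_a = [implementation_a(s) for s in range(200)]
runs_b = [implementation_b(s + 10_000) for s in range(200)]   # independent seeds

stat, p_value = ks_2samp(runs_a, runs_b)
print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("Output distributions differ: the implementations are not aligned.")
else:
    print("No evidence of misalignment on this output measure.")
```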

The community has already established a standardised form of documentation in the ODD protocol. Sharing of model code is also becoming routine and can easily be achieved through COMSES, GitHub or similar. The sharing of data in a long-term archive may require more investigation. As a starting point, COVID-19 provides an ideal opportunity for setting up such a model inter-comparison project: multiple groups already have running examples, and a shared set of outputs and experiments should be straightforward to agree on. This would potentially form a basis for forward-looking experiments designed to assist with possible future pandemic problems, and a basis on which to build further features into the existing disease-focussed modelling, such as the effects of economic, social and psychological issues.

Additional Challenges for ABMs of Social Phenomena

Nobody supposes that modelling social phenomena is going to have the same set of challenges that climate change models face. Some of the differences include:

  • The availability of good data. Social science is bedevilled by a paucity of the right kind of data. Although an increasing amount of relevant data is being produced, there are commercial, ethical and data protection barriers to accessing it and the data rarely concerns the same set of actors or events.
  • The understanding of micro-level behaviour. Whilst the micro-level understanding of our atmosphere is very well established, that of the behaviour of the most important actors (humans) is not. However, it may be that better data might partially substitute for a generic behavioural model of decision-making.
  • Agreement upon the goals of modelling. Although there will always be considerable variation in terms of what is wanted from a model of any particular social phenomenon, a common core of agreed objectives would help focus any comparison and give confidence via ensembles of projections. Although the MIPs and the Covid Forecast Hub are focussed on prediction, empirical explanation may be more important in other areas.
  • The available resources. ABM projects tend to be add-ons to larger endeavours and based around short-term grant funding. The funding for big ABM projects is yet to be established, not having the equivalent of weather forecasting to piggy-back on.
  • Persistence of modelling teams/projects. ABM work tends to be quite short-term, with a new model being developed for each new project. This has made it hard to keep good modelling teams together.
  • Deep uncertainty. Whilst the set of possible factors and processes involved in a climate change model is well established, which social mechanisms need to be included in any model of a particular social phenomenon is unknown. For this reason, there is deep disagreement about the assumptions to be made in such models, as well as the possibility of sharp divergences in outcome caused by mechanisms that operate in reality but are not included in a model. Whilst uncertainty in known mechanisms can be quantified, assessing the impact of such deep uncertainty is much harder.
  • The sensitivity of the political context. Even in the case of Climate Change, where the assumptions made are relatively well understood and made on objective bases, the modelling exercise and its outcomes can be politically contested. In other areas, where the representation of people’s behaviour might be key to model outcomes, this will need even more care (Aodha & Edmonds 2017).

However, some of these problems were solved in the case of Climate Change as a result of the CMIP exercises and the reports they ultimately fed into. Over time the development of the models also allowed for a broadening and updating of modelling goals, starting from a relatively narrow initial set of experiments. Ensuring the persistence of individual modelling teams is easier in the context of an internationally recognised comparison project, because resources may be easier to obtain and there is a consistent central focus. The modelling projects became longer-term as individual researchers could establish a career doing just climate change modelling and the importance of the work became increasingly recognised. An ABM model comparison project might help solve some of these problems as the importance of its work is established.

Towards an Initial Proposal

The topic chosen for this project should be one for which: (a) there is enough public interest to justify the effort, and (b) a number of models with a similar purpose in mind are being developed. At the current stage, this suggests dynamic models of COVID spread, but there are other possibilities, including transport models (where people go and who they meet) or criminological models (where and when crimes happen).

Whichever ensemble of models is focussed upon, these models should be compared on a core of standard settings, with the same (an illustrative sketch of what such an agreed specification might look like follows this list):

  • Start and end dates (but not necessarily the same temporal granularity)
  • Set of regions or cases covered
  • Population data (though possibly enhanced with extra data and maybe scaled population sizes)
  • Initial conditions in terms of the population
  • Core of agreed output measures (though others may be reported as well)
  • Core set of agreed validation cases and data sets against which the models are checked
  • Standard reporting format (though with a discussion section for further/other observations)
  • Standard of documentation, with code that is open access
  • Minimum number of runs with different random seeds
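
As flagged above, here is an illustrative sketch of how such an agreed core might be written down as a machine-readable record that every participating team's model runner reads; all field names, file names and values are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

# A sketch of what such an agreed core might look like if written down as a
# machine-readable record read by every participating team's model runner.
# All field names, file names and values here are invented for illustration.

@dataclass
class ComparisonProtocol:
    start_date: str = "2020-02-01"
    end_date: str = "2021-02-01"
    regions: List[str] = field(default_factory=lambda: ["regionA", "regionB"])
    population_dataset: str = "shared_population_v1.csv"   # same input data for all
    initial_conditions: str = "shared_seed_cases_v1.csv"
    core_outputs: List[str] = field(default_factory=lambda: [
        "daily_incident_cases", "daily_incident_deaths"])
    validation_cases: List[str] = field(default_factory=lambda: ["wave1_regionA"])
    report_template: str = "standard_report_v1.md"
    min_random_seeds: int = 30       # minimum replicate runs per scenario
    code_access: str = "open"        # code must be open access and documented

protocol = ComparisonProtocol()
print(protocol.core_outputs, protocol.min_random_seeds)
```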

Any modeller/team that had a suitable model and was willing to adhere to the rules would be welcome to participate (commercial, government or academic), and these teams would collectively decide the rules, guide the development of the exercise and write any reports on the comparisons. Other interested stakeholder groups could be involved, including professional/academic associations, NGOs and government departments, but in a consultative role providing wider critique – it is important that the terms of and reports from the exercise be independent of any particular interest or authority.

Conclusion

We call upon those who think ABMs have the potential to usefully inform policy decisions to work together, in order that the transparency and rigour of our modelling matches our ambition. Whilst model comparison exercises of the kind described are important for any simulation work, particular care needs to be taken when the outcomes can affect people’s lives.

References

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822. (A version is at http://cfpm.org/discussionpapers/236)

Axtell, R., Axelrod, R., Epstein, J. M., & Cohen, M. D. (1996). Aligning simulation models: A case study and results. Computational & Mathematical Organization Theory, 1(2), 123-141. https://link.springer.com/article/10.1007%2FBF01299065

Edmonds, B., & Hales, D. (2003). Replication, replication and replication: Some hard lessons from model alignment. Journal of Artificial Societies and Social Simulation, 6(4), 11. http://jasss.soc.surrey.ac.uk/6/4/11.html

Eyring, V., Bony, S., Meehl, G. A., Senior, C. A., Stevens, B., Stouffer, R. J., & Taylor, K. E. (2016). Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geoscientific Model Development, 9(5), 1937–1958. https://doi.org/10.5194/gmd-9-1937-2016

Gates, W. L., Boyle, J. S., Covey, C., Dease, C. G., Doutriaux, C. M., Drach, R. S., Fiorino, M., Gleckler, P. J., Hnilo, J. J., Marlais, S. M., Phillips, T. J., Potter, G. L., Santer, B. D., Sperber, K. R., Taylor, K. E., & Williams, D. N. (1999). An overview of the results of the Atmospheric Model Intercomparison Project (AMIP I). Bulletin of the American Meteorological Society, 80(1), 29–55. https://doi.org/10.1175/1520-0477(1999)080<0029:AOOTRO>2.0.CO;2

Hales, D., Rouchier, J., & Edmonds, B. (2003). Model-to-model analysis. Journal of Artificial Societies and Social Simulation, 6(4), 5. http://jasss.soc.surrey.ac.uk/6/4/5.html

Jones, B. C., DeBruine, L. M., Flake, J. K., et al. (2021). To which world regions does the valence–dominance model of social perception apply? Nature Human Behaviour, 5, 159–169. https://doi.org/10.1038/s41562-020-01007-2

Moshontz, H., + 85 others (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607

Tittensor, D. P., Eddy, T. D., Lotze, H. K., Galbraith, E. D., Cheung, W., Barange, M., Blanchard, J. L., Bopp, L., Bryndum-Buchholz, A., Büchner, M., Bulman, C., Carozza, D. A., Christensen, V., Coll, M., Dunne, J. P., Fernandes, J. A., Fulton, E. A., Hobday, A. J., Huber, V., … Walker, N. D. (2018). A protocol for the intercomparison of marine fishery and ecosystem models: Fish-MIP v1.0. Geoscientific Model Development, 11(4), 1421–1442. https://doi.org/10.5194/gmd-11-1421-2018

Wei, Y., Liu, S., Huntzinger, D. N., Michalak, A. M., Viovy, N., Post, W. M., Schwalm, C. R., Schaefer, K., Jacobson, A. R., Lu, C., Tian, H., Ricciuto, D. M., Cook, R. B., Mao, J., & Shi, X. (2014). The north american carbon program multi-scale synthesis and terrestrial model intercomparison project – Part 2: Environmental driver data. Geoscientific Model Development, 7(6), 2875–2893. https://doi.org/10.5194/gmd-7-2875-2014


Bithell, M. and Edmonds, B. (2021) The Systematic Comparison of Agent-Based Policy Models - It’s time we got our act together! Review of Artificial Societies and Social Simulation, 11th May 2021. https://rofasss.org/2021/05/11/SystComp/