Tag Archives: bruceedmonds

An Institute for Crisis Modelling (ICM) – Towards a resilience center for sustained crisis modeling capability

By Fabian Lorig1*, Bart de Bruin2, Melania Borit3, Frank Dignum4, Bruce Edmonds5, Sinéad M. Madden6, Mario Paolucci7, Nicolas Payette8, Loïs Vanhée4

*Corresponding author
1 Internet of Things and People Research Center, Malmö University, Sweden
2 Delft University of Technology, Netherlands
3 CRAFT Lab, Arctic University of Norway, Tromsø, Norway
4 Department of Computing Science, Umeå University, Sweden
5 Centre for Policy Modelling, Manchester Metropolitan University Business School, UK
6 School of Engineering, University of Limerick, Ireland
7 Laboratory of Agent Based Social Simulation, ISTC/CNR, Italy
8 Complex Human-Environmental Systems Simulation Laboratory, University of Oxford, UK

The Need for an ICM

Most crises and disasters occur suddenly and hit society while it is unprepared. This makes it particularly challenging to react quickly to their occurrence, to adapt to the resulting new situation, to minimize the societal impact, and to recover from the disturbance. A recent example was the Covid-19 crisis, which revealed weak points in our crisis preparedness. Governments tried to put restrictions in place to limit the spread of the virus while ensuring the well-being of the population and, at the same time, preserving economic stability. It quickly became clear that interventions which worked well in some countries did not have the intended effect in others; the reason is that the success of interventions depends to a great extent on individual human behavior.

Agent-based Social Simulations (ABSS) explicitly model the behavior of individuals and their interactions in a population, allowing us to better understand social phenomena. Thus, ABSS are well suited for investigating how our society might be affected by different crisis scenarios and how policies might affect the societal impact and consequences of these disturbances. Particularly during the Covid-19 crisis, a great number of ABSS were developed to inform policy making around the globe (e.g., Dignum et al. 2020, Blakely et al. 2021, Lorig et al. 2021). However, weaknesses in creating useful and explainable simulations in a short time also became apparent, and there is still a lack of consistency in being better prepared for the next crisis (Squazzoni et al. 2020). In particular, ABSS development approaches are currently geared towards simulating one particular situation and validating the simulation using data from that situation. To be prepared for a crisis, one instead needs to simulate many different scenarios for which data might not yet be available. Such simulations also typically need a more interactive interface where stakeholders can experiment with different settings, policies, etc.

For ABSS to become an established, reliable, and well-esteemed method for supporting crisis management, we need to organize and consolidate the available competences and resources. It is not sufficient to react once a crisis occurs but instead, we need to proactively make sure that we are prepared for future disturbances and disasters. For this purpose, we also need to systematically address more fundamental problems of ABSS as a method of inquiry and particularly consider the specific requirements for the use of ABSS to support policy making, which may differ from the use of ABSS in academic research. We therefore see the need for establishing an Institute for Crisis Modelling (ICM), a resilience center to ensure sustained crisis modeling capability.

The vision of starting an Institute for Crisis Modelling was the result of the discussions and working groups at the Lorentz Center workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” that took place in Leiden, Netherlands from 27 February to 3 March 2023**.

Vision of the ICM

“To have tools suitable to support policy actors in situations of great
uncertainty, with large consequences, that depend on human behavior.”

The ICM consists of a taskforce for quickly and efficiently supporting policy actors (e.g., decision makers, policy makers, policy analysts) in situations of great uncertainty, with large consequences, that depend on human behavior. For this purpose, the taskforce consists of a larger (informal) network of associates that contribute their knowledge, skills, models, tools, and networks. The group of associates is composed of a core group of multidisciplinary modeling experts (ranging from social scientists and formal modelers to programmers) as well as partners that can contribute to specific focus areas (such as epidemiology, water management, etc.). The vision of the ICM is to consolidate and institutionalize the use of ABSS as a method for crisis management. Although ABSS competences may be physically distributed over a variety of universities, research centers, and other institutions, the ICM serves as a virtual location that coordinates research developments and provides a basic level of funding and a communication channel for ABSS for crisis management. This not only provides policy actors with a single point of contact, making it easier for them to identify who to reach when simulation expertise is needed and to develop long-term trust relationships; it also enables us to jointly and systematically evolve ABSS into a valuable and established tool for crisis response. The center combines all necessary resources, competences, and tools to quickly develop new models, to adapt existing models, and to efficiently react to new situations.

To achieve this goal and to evolve and establish ABSS as a valuable tool for policy makers in crisis situations, research is needed in different areas. This includes the collection, development, critical analysis, and review of fundamental principles, theories, methods, and tools used in agent-based modeling. This also includes research on data handling (analysis, sharing, access, protection, visualization), data repositories, ontologies, user-interfaces, methodologies, documentation, and ethical principles. Some of these points are concisely described in (Dignum, 2021, Ch. 14 and 15).

The ICM shall provide a wide portfolio of models, methods, techniques, design patterns, and components required to quickly and effectively facilitate the work of policy actors in crisis situations by providing them with adequate simulation models. To be able to provide specialized support, the institute will coordinate the human effort (e.g., the modelers) and maintain specific focus areas for which expertise and models are available: for instance, pandemics, natural disasters, or financial crises. For each of these focus areas, the center will develop different use cases, which ensures and facilitates rapid responses due to the availability of models, knowledge, and networks.

Objectives of the ICM

To achieve this vision, there are a series of objectives that a resilience center for sustained crisis modeling capability needs to address:

1) Coordinate and promote research

Providing quick and appropriate support for policy actors in crisis situations requires not only profound knowledge of existing models, methods, tools, and theories but also the systematic development of new approaches and methodologies. This will advance ABSS so that we are better prepared for future crises, and the ICM will serve as a beacon for organizing ABSS research oriented towards practical applications.

2) Enable trusted connections with policy actors

Sustainable collaborations and interactions with decision-makers and policy analysts, as well as other relevant stakeholders, are a great challenge in ABSS. Getting in contact with the right actors, “speaking the same language”, and having realistic expectations are only some of the common problems that need to be addressed. Thus, the ICM should not only connect to policy actors in times of crisis but have continuous interactions, provide sample simulations, develop use cases, and train policy actors wherever possible.

3) Enable sustainability of the institute itself

Classic funding schemes are unfit for responding to crises, which require fast responses with always-available resources as well as the long-term, continuous build-up of knowledge, skills, networks, and technology. Sustainable funding is needed to enable such continuity, for which the ICM provides a demarcated, unifying frame.

4) Actively maintain the network of associates

Maintaining a network of experts is challenging because it requires different competences and experiences. PhD candidates, for instance, might have great practical experience in using different simulation frameworks; however, after graduation, some might leave academia and others might move to positions where they do not have the opportunity to use their simulation expertise. Thus, new experts need to be recruited continuously to form a resilient and balanced network.

5) Inform policy actors

Even the most advanced and profound models cannot do any good in crisis situations if there is no demand from policy actors. Many modelers perceive a certain hesitation from policy actors regarding the use of ABSS, which might be due to unfamiliarity with the potential benefits and use cases of ABSS, a lack of trust in the method itself, or simply a lack of awareness that ABSS exists. Hence, the center needs to educate policy makers, raise awareness, and improve trust in ABSS.

6) Train the next generation of experts

Quickly developing suitable ABSS models in critical situations requires a variety of expertise. In addition to objective 4, the acquisition of associates, it is also of great importance to educate and train the next generation of experts. ABSS research is still a niche and is not taught as an inherent part of the spectrum of methods of most disciplines. The center shall promote and strengthen ABSS education to ensure the training of the next generation of experts.

7) Engage the general public

Finally, the success of ABSS depends not only on the trust of policy actors but also on how it is perceived by the general public. When developing interventions and giving recommendations during the Covid-19 crisis, trust in the method was a crucial success factor. Moreover, developing realistic models requires the active participation of the general public.

Next steps

For ABSS to become a valuable and established tool for supporting policy actors in crisis situations, we are convinced that our efforts need to be institutionalized. This allows us to consolidate available competences, models, and tools as well as to coordinate research endeavors and the development of new approaches required to ensure a sustained crisis modeling capability.

To further pursue this vision, a Special Interest Group (SIG) on Building ResilienCe with Social Simulations (BRICSS) was established at the European Social Simulation Association (ESSA). Moreover, Special Tracks will be organized at the 2023 Social Simulation Conference (SSC) to bring together interested experts.

However, for this vision to become reality, the next steps towards establishing an Institute for Crisis Modelling consist of bringing together ambitious and competent associates as well as identifying core funding opportunities for the center. Readers who feel motivated to contribute in any way to this topic are encouraged to contact Frank Dignum, Umeå University, Sweden, or any of the authors of this article.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise. The final report of the workshop as well as more information can be found on the webpage of the Lorentz Center: https://www.lorentzcenter.nl/agent-based-simulations-for-societal-resilience-in-crisis-situations.html

References

Blakely, T., Thompson, J., Bablani, L., Andersen, P., Ouakrim, D. A., Carvalho, N., Abraham, P., Boujaoude, M.A., Katar, A., Akpan, E., Wilson, N. & Stevenson, M. (2021). Determining the optimal COVID-19 policy response using agent-based modelling linked to health and cost modelling: Case study for Victoria, Australia. Medrxiv, 2021-01.

Dignum, F., Dignum, V., Davidsson, P., Ghorbani, A., van der Hurk, M., Jensen, M., Kammler C., Lorig, F., Ludescher, L.G., Melchior, A., Mellema, R., Pastrav, C., Vanhee, L. & Verhagen, H. (2020). Analysing the combined health, social and economic impacts of the coronavirus pandemic using agent-based social simulation. Minds and Machines, 30, 177-194. doi: 10.1007/s11023-020-09527-6

Dignum, F. (ed.). (2021) Social Simulation for a Crisis; Results and Lessons from Simulating the COVID-19 Crisis. Springer.

Lorig, F., Johansson, E. & Davidsson, P. (2021) ‘Agent-Based Social Simulation of the Covid-19 Pandemic: A Systematic Review’ Journal of Artificial Societies and Social Simulation 24(3), 5. http://jasss.soc.surrey.ac.uk/24/3/5.html. doi: 10.18564/jasss.4601

Squazzoni, F. et al. (2020) ‘Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action’ Journal of Artificial Societies and Social Simulation 23(2), 10. http://jasss.soc.surrey.ac.uk/23/2/10.html. doi: 10.18564/jasss.4298


Lorig, F., de Bruin, B., Borit, M., Dignum, F., Edmonds, B., Madden, S.M., Paolucci, M., Payette, N. and Vanhée, L. (2023) An Institute for Crisis Modelling (ICM) –
Towards a resilience center for sustained crisis modeling capability. Review of Artificial Societies and Social Simulation, 22 May 2023. https://rofasss.org/2023/05/22/icm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Making Models FAIR: An educational initiative to build good ABM practices

By Marco A. Janssen1, Kelly Claborn1, Bruce Edmonds2, Mohsen Shahbaznezhadfard1 and Manuela Vanegas-Ferro1

  1. Arizona State University, USA
  2. Manchester Metropolitan University, UK

Imagine a world where models are available to build upon: you do not have to build from scratch or painstakingly try to figure out how published papers arrived at their published results. To achieve this utopian world, models have to be findable, accessible, interoperable, and reusable (FAIR). With the “Making Models FAIR” initiative, we seek to contribute to moving towards this world.

The initiative – Making Models FAIR – aims to provide capacity building opportunities to improve the skills, practices, and protocols to make computational models findable, accessible, interoperable and reusable (FAIR). You can find detailed information about the project on the website (tobefair.org), but here we will present the motivations behind the initiative and a brief outline of the activities.

There is increasing interest in making data and model code FAIR, and there is quite a lot of discussion on standards (https://www.openmodelingfoundation.org/). What is lacking are opportunities to gain the skills to do this in practice. We have selected a list of highly cited publications from different domains and developed a protocol for making those models FAIR. The protocol may be adapted over time as we learn what works well.

This list of model publications provides opportunities to learn the skills needed to make models FAIR. The current list is a starting point, and you can suggest alternative model publications as desired. The main goal is to provide the modeling community with a place to build capacity in making models FAIR. How do you use GitHub, code a model in a language or platform of your choice, and write good model documentation? These are necessary skills for collaborating on and developing FAIR models. A suggested way of participating is for an instructor to have student groups take part in this activity, selecting a model publication that is of interest to their research.

To make a model FAIR, we focus on five activities:

  1. If the code is not available with the publication, find out whether the code is available (contact the authors) or replicate the model based on the model documentation. It might also happen that the code is available in programming language X, but you want to have it available in another language.
  2. If the code does not have a license, make sure an appropriate license is selected to make it available.
  3. Get a DOI, which is a permanent link to the model code and documentation. You could use comses.net or zenodo.org or similar services.
  4. Can you improve the model documentation? There is typically a form of documentation in a publication, in the article or an appendix, but is this detailed enough to understand how and why certain model choices have been made? Could you replicate the model from the information provided in the model documentation?
  5. What is the state of the model code? We know that most of us are not professional programmers and might be hesitant to share our code. Good practice includes commenting on what different procedures do, defining variables, and not leaving all kinds of wild ideas commented out in the code base.

Most of the models listed do not have code available with the publication, which will require participants to contact the original authors to obtain the code and/or to reproduce the code from the model documentation.

We are eager to learn what challenges people experience in making models FAIR. This could help us improve the protocols we provide. We also hope that those who have made a model FAIR publish a contribution in RofASSS or relevant modeling journals. For journal contributions, it would be interesting to use a FAIR model to explore the robustness of the model results, especially for models that were published many years ago, when fewer computational resources were available.

The tobefair.org website contains a lot of detailed information and educational opportunities. Below is a diagram from the site that aims to illustrate the road map of making models FAIR, so you can easily find the relevant information. Learn more by navigating to the About page and clicking through the diagram.

Making simulation models findable, accessible, interoperable and reusable is an important part of good scientific practice for simulation research. If important models fail to reach this standard, then this makes it hard for others to reproduce, check and extend them. If you want to be involved – to improve the listed models, or to learn the skills to make models FAIR – we hope you will participate in the project by going to tobefair.org and contributing.


Janssen, M.A., Claborn, K., Edmonds, B., Shahbaznezhadfard, M. and Vanegas-Ferro, M. (2023) Making Models FAIR: An educational initiative to build good ABM practices. Review of Artificial Societies and Social Simulation, 8 May 2023. https://rofasss.org/2023/05/11/fair/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

The inevitable “layering” of models to extend the reach of our understanding

By Bruce Edmonds

“Just as physical tools and machines extend our physical abilities, models extend our mental abilities, enabling us to understand and control systems beyond our direct intellectual reach” (Calder & al. 2018)

Motivation

There is a modelling norm that one should be able to completely understand one’s own model. Whilst acknowledging there is a trade-off between a model’s representational adequacy and its simplicity of formulation, this tradition assumes there will be a “sweet spot” where the model is just tractable but also good enough to be usefully informative about the target of modelling – in the words attributed to Einstein, “Everything should be made as simple as possible, but no simpler1. But what do we do about all the phenomena where to get an adequate model2 one has to settle for a complex one (where by “complex” I mean a model that we do not completely understand)? Despite the tradition in Physics to the contrary, it would be an incredibly strong assumption that there are no such phenomena, i.e. that an adequate simple model is always possible (Edmonds 2013).

There are three options in these difficult cases.

  • Do not model the phenomena at all until we can find an adequate model we can fully understand. Given the complexity of much around us, this would mean not modelling these phenomena for the foreseeable future, and maybe never.
  • Accept inadequate simpler models and simply hope that these are somehow approximately right3. This option would allow us to get answers but with no idea of whether they are at all reliable. There are many cases of overly simplistic models leading policy astray (Adoha & Edmonds 2017; Thompson 2022), so this is dangerous if such models influence decisions with real consequences.
  • Use models that are good for our purpose but that we only partially understand. This is the option examined in this paper.

When the purpose is empirical the last option is equivalent to preferring empirical grounding over model simplicity (Edmonds & Moss 2005).

Partially Understood Models

In practice this argument has already been won – we do not completely understand many of the computer simulations that we use and rely on. For example, due to the chaotic nature of the dynamics of the weather, forecasting models are run multiple times with slightly randomised inputs and the “ensemble” of forecasts inspected to get an idea of the range of different outcomes that could result (some of which might be qualitatively different from the others)4. Working out the outcomes in each case requires the computational tracking of a huge number of entities in a way that is far beyond what the human mind can do5. In fact, the whole of “Complexity Science” can be seen as different ways to get some understanding of systems for which there is no analytic solution6.

Of course, this raises the question of what is meant by “understand” a model, for this is not something that is formally defined. This could involve many things, including the following.

  1. That the micro-level – the individual calculations or actions done by the model each time step – is understood. This is equivalent to understanding each line of the computer code.
  2. That some of the macro-level outcomes that result from the computation of the whole model are understood in terms of partial theories or “rules of thumb”.
  3. That all the relevant macro-level outcomes can be determined to a high degree of accuracy without simulating the model (e.g. by a mathematical model).

Clearly, level (1) is necessary for most modelling purposes in order to know the model is behaving as intended. The specification of this micro-level is usually how such models are made, so if this differs from what was intended then this would be a bug. Thus this level would be expected of most models7. However, this does not necessarily mean understanding at the finest level of detail possible – for example, we usually do not bother about how random number generators work, but simply rely on their operation; in this case we have a very good level (3) understanding of these sub-routines.

At the other extreme, a level (3) understanding is quite rare outside the realm of physics. In a sense, having this level of understanding makes the model redundant, so would probably not be the case for most working models (those used regularly)8. As discussed above, there will be many kinds of phenomena for which this level of understanding is not feasible.

Clearly, what many modelers find useful is a combination of levels (1) & (2) – that is, the detailed, micro-level steps that the model takes are well understood and the outcomes understood well enough for the intended task. For example, when using a model to establish a complex explanation9 (of some observed pattern in data using certain mechanisms or structures) one might understand the implementation of the candidate mechanisms and verify that the outcomes fit the target pattern for a range of parameters, but not completely understand the detail of the causation involved. There might well be some understanding, for example of how robust this is to minor variations in the initial conditions or the working of the mechanisms involved (e.g. by adding some noise to the processes). A complete understanding might not be accessible, but this does not stop an explanation being established (although a better understanding is an obvious goal for future research, or an avenue for critiques of the explanation).

Of course, any lack of a complete, formal understanding leaves some room for error. The argument here is not deriding the desirability of formal understanding, but is against prioritising that over model adequacy. Also the lack of a formal, level (3), understanding of a model does not mean we cannot take more pragmatic routes to checking it. For example: performing a series of well-designed simulation experiments that intend to potentially refute the stated conclusions, systematically comparing to other models, doing a thorough sensitivity analysis and independently reproducing models can help ensure their reliability. These can be compared with engineering methods – one may not have a proof that a certain bridge design is solid over all possible dynamics, but practical measures and partial modelling can ensure that any risk is so low as to be negligible. If we had to wait until bridge designs were proven beyond doubt, we would simply have to do without them.
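One of the practical checks mentioned above can be sketched as a toy one-at-a-time sensitivity analysis. Everything here (the "model", its parameters, and their names) is invented for illustration, not taken from any model discussed in the text:

```python
# A sketch of one-at-a-time sensitivity analysis: vary each parameter of a
# toy stochastic model by +10% and see how much the mean outcome shifts.
import random
import statistics

def toy_model(infectivity, contacts, recovery, seed):
    """A stand-in stochastic 'model': a deterministic score plus noise."""
    rng = random.Random(seed)
    return infectivity * contacts / recovery + rng.gauss(0, 0.05)

def mean_outcome(params):
    # The same seeds are reused across parameter settings ("common random
    # numbers"), so the noise cancels out when outcomes are compared.
    return statistics.mean(toy_model(seed=s, **params) for s in range(50))

baseline = {"infectivity": 0.3, "contacts": 10.0, "recovery": 7.0}
ref = mean_outcome(baseline)

effects = {}
for name in baseline:
    varied = dict(baseline)
    varied[name] = baseline[name] * 1.1   # vary one parameter by +10%
    effects[name] = mean_outcome(varied) - ref
    print(f"+10% {name}: outcome shifts by {effects[name]:+.4f}")
```

Reusing the same random seeds across settings is itself a small example of the pragmatic methods the paragraph describes: it makes the comparison between runs far more reliable than comparing independently noisy runs would be.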

Layering Models to Leverage some Understanding

As a modeller, if I do not understand something my instinct is to model it. This instinct does not change if what I do not understand is, itself, a model. The result is a model of the original model – a meta-model. This is, in fact, common practice. I may select certain statistics summarising the outcomes and put these on a graph; I might analyse the networks that have emerged during model runs; I may use maths to approximate or capture some aspect of the dynamics; I might cluster and visualise the outcomes using Machine Learning techniques; I might make a simpler version of the original and compare them. All of these might give me insights into the behaviour of the original model. Many of these are so normal we do not think of this as meta-modelling. Indeed, empirically-based models are already, in a sense, meta-models, since the data that they represent are themselves a kind of descriptive model of reality (gained via measurement processes).
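A minimal sketch of such a meta-model (the "simulation" and all names here are invented for illustration): fit a simple linear model, as an analytic "layer", to summary outputs of a toy stochastic simulation.

```python
# A model of a model: summarise many runs of a (toy) stochastic simulation
# by fitting a simple linear meta-model to its aggregate output.
import random
import statistics

def simulation(p, rng):
    """Toy stand-in for an ABM: fraction of 200 agents that 'adopt',
    where each agent adopts independently with probability p."""
    return sum(rng.random() < p for _ in range(200)) / 200

rng = random.Random(42)
ps = [i / 20 for i in range(1, 20)]                 # parameter sweep
ys = [statistics.mean(simulation(p, rng) for _ in range(30)) for p in ps]

# Least-squares fit of y = a*p + b: the meta-model "layer" above the runs.
mx, my = statistics.mean(ps), statistics.mean(ys)
a = (sum((x - mx) * (y - my) for x, y in zip(ps, ys))
     / sum((x - mx) ** 2 for x in ps))
b = my - a * mx
print(f"meta-model: adoption ~ {a:.2f} * p + {b:.2f}")
```

Here the meta-model recovers (approximately) the slope-one relationship hidden in the noisy runs; with a genuinely complex simulation the fitted layer would instead be a compact, checkable approximation of behaviour we cannot derive directly.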

This meta-modelling strategy can be iterated to produce meta-meta-models etc. resulting in “layers” of models, with each layer modelling some aspect of the one “below” until one reaches the data and then what the data measures. Each layer should be able to be compared and checked with the layer “below”, and analysed by the layer “above”.

An extended example of such layering was built during the SCID (Social Complexity of Immigration and Diversity) project10 and is illustrated in Figure 1. Here, a complicated simulation (Model 1) was built to incorporate some available data and what was known concerning the social and behavioural processes that lead people to bother to vote (or not). This simulation was used as a counter-example to show how assumptions about the chaining effect of interventions might be misplaced (Fieldhouse et al. 2016). A much simpler simulation was then built by theoretical physicists (Model 2), so that it reproduced the same selected outcomes over time and a range of parameter values. This allowed us to show that some of the features in the original (such as dynamic networks) were essential to get the observed dynamics (Lafuerza et al. 2016a). This simpler model was in turn modelled by an even simpler model (Model 3) that was amenable to an analytic model (Model 4), which allowed us to obtain some results concerning the origin of a region of bistability in the dynamics (Lafuerza et al. 2016b).


Figure 1. The Layering of models that were developed in part of the SCID project

Although there are dangers in such layering – each layer could introduce a new weakness – there are also methodological advantages, including the following. (A) Each model in the chain (except Model 4) is compared and checked against both the layer below and that above. Such multiple model comparisons are excellent for revealing hidden assumptions and unanticipated effects. (B) Whilst previously what might have happened was a “heroic” leap of abstraction from evidence and understanding straight to Model 3 or 4, here abstraction happens over a series of more modest steps, each of which is more amenable to checking and analysis. When you stage abstraction, the introduced assumptions are more obvious and easier to analyse.

One can imagine such “layering” developing in many directions to leverage useful (but indirect) understanding, for example the following.

  • Using an AI algorithm to learn patterns in some data (e.g. medical data for disease diagnosis) but then modelling its working to obtain some human-accessible understanding of how it is doing it.
  • Using a machine learning model to automatically identify the different “phase spaces” in model results where qualitatively different model behaviour is exhibited, so one can then try to simplify the model within each phase.
  • Automatically identifying the processes and structures that are common to a given set of models to facilitate the construction of a more general, ‘umbrella’ model that approximates all the outcomes that would have resulted from the set, but within a narrower range of conditions.
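The second of these directions can be sketched with a very small example (the bistable "model" and the one-dimensional two-means step here are invented for illustration; real applications would use richer model outputs and proper ML tooling):

```python
# Using a (very simple) machine-learning step -- 1-D k-means with k=2 -- to
# separate qualitatively different regimes in a toy bistable model's outputs.
import random
import statistics

def bistable_model(p, rng):
    """Toy model with two regimes: outcomes cluster near 0.1 or near 0.9,
    with parameter p controlling how often the high regime occurs."""
    return (0.9 if rng.random() < p else 0.1) + rng.gauss(0, 0.02)

rng = random.Random(0)
# Sweep the parameter and record 20 replicate outcomes per setting.
runs = [(p / 10, bistable_model(p / 10, rng))
        for p in range(11) for _ in range(20)]

# 1-D 2-means: alternate assignment and centroid update until stable.
c_lo, c_hi = 0.0, 1.0
for _ in range(20):
    lo = [y for _, y in runs if abs(y - c_lo) <= abs(y - c_hi)]
    hi = [y for _, y in runs if abs(y - c_lo) > abs(y - c_hi)]
    c_lo, c_hi = statistics.mean(lo), statistics.mean(hi)

print(f"regime centres found: {c_lo:.2f} and {c_hi:.2f}")
```

Having automatically identified the two regimes, one could then attempt a separate, simpler model of the behaviour within each – which is exactly the simplification-per-phase the bullet describes.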

As the quote at the top implies, we are used to settling for partial control of what machines do because it allows us to extend our physical abilities in useful ways. Each time we make their control more indirect, we need to check that this is safe and adequate for purpose. In the cars we drive there are ever more layers of electronic control between us and the physical reality they drive through, and we adjust to this – we are currently adjusting to more self-drive abilities. Of course, the testing and monitoring of these systems is very important, but that will not stop the introduction of layers that make driving safer and more pleasant.

The same is true of our modelling, which we will need to apply in ever more layers in order to leverage useful understanding which would not be accessible otherwise. Yes, we will need to use practical methods to test their fitness for purpose and reliability, and this might include the complete verification of some components (where this is feasible), but we cannot constrain ourselves to only models we completely understand.

Concluding Discussion

If the above seems obvious, then why am I bothering to write this? I think for a few reasons. Firstly, to answer the presumption that understanding one’s model must have priority over all other considerations (such as empirical adequacy) so that sometimes we must accept and use partially understood models. Secondly, to point out that such layering has benefits as well as difficulties – especially if it can stage abstraction into more verifiable steps and thus avoid huge leaps to simple but empirically-isolated models. Thirdly, because such layering will become increasingly common and necessary.

In order to extend our mental reach further, we will need to develop increasingly complicated and layered modelling. To do this we will need to accept that our understanding is leveraged via partially understood models, but also to develop the practical methods to ensure their adequacy for purpose.

Notes

[1] These are a compressed version of his actual words during a 1933 lecture, which were: “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” (Robinson 2018)
[2] Adequate for whatever our purpose for it is (Edmonds & al. 2019).
[3] The weasel words I once heard from a Mathematician excusing an analytic model he knew to be simplistic were that, although he knew it was wrong, it was useful for “capturing core dynamics” (though how he knew that these dynamics were not completely wrong eludes me).
[4] For an introduction to this approach read the European Centre for Medium-Range Weather Forecasts’ fact sheet on “Ensemble weather forecasting” at: https://www.ecmwf.int/en/about/media-centre/focus/2017/fact-sheet-ensemble-weather-forecasting
[5] In principle, a person could do all the calculations involved in a forecast but only with the aid of exterior tools such as pencil and paper to keep track of it all so it is arguable whether the person doing the individual calculations has an “understanding” of the complete picture. Lewis Fry Richardson, who pioneered the idea of numerical forecasting of weather in the 1920s, did a 1-day forecast by hand to illustrate his method (Lynch 2008), but this does not change the argument.
[6] An analytic solution is when one can obtain a closed-form equation that characterises all the outcomes by manipulating the mathematical symbols in a proof. If one has to numerically calculate outcomes for different initial conditions and parameters this is a computational solution.
[7] For purely predictive models, whose purpose is only to anticipate an unknown value to a useful level of accuracy, this is not strictly necessary. For example, how some AI/machine learning models work may not be clear at the micro-level, but as long as the model works (successfully predicts) this does not matter – even if its predictive ability is due to a bug.
[8] Models may still be useful in this case, for example to check the assumptions made in the matching mathematical or other understanding.
[9] For more on this use see (Edmonds et al. 2019).
[10] For more about this project see http://cfpm.org/scid

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, 2019-2023, grant number ES/S015159/1 and was supported as part of the EPSRC-funded “SCID” project 2010-2016, grant number EP/H02171X/1.

References

Calder, M., Craig, C., Culley, D., de Cani, R., Donnelly, C.A., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N. Hargrove, C., Hinds, D., Lane, D.C., Mitchell, D., Pavey, G., Robertson, D., Rosewell, B., Sherwin, S., Walport, M. and Wilson, A. (2018) Computational modelling for decision-making: where, why, what, who and how. Royal Society Open Science, DOI:10.1098/rsos.172096.

Edmonds, B. (2013) Complexity and Context-dependency. Foundations of Science, 18(4):745-755. DOI:10.1007/s10699-012-9303-x

Edmonds, B. and Moss, S. (2005) From KISS to KIDS – an ‘anti-simplistic’ modelling approach. In P. Davidsson et al. (Eds.): Multi Agent Based Simulation 2004. Springer, Lecture Notes in Artificial Intelligence, 3415:130–144. DOI:10.1007/978-3-540-32243-6_11

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. & Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. DOI:10.18564/jasss.3993

Fieldhouse, E., Lessard-Phillips, L. & Edmonds, B. (2016) Cascade or echo chamber? A complex agent-based simulation of voter turnout. Party Politics. 22(2):241-256.  DOI:10.1177/1354068815605671

Lafuerza, LF, Dyson, L, Edmonds, B & McKane, AJ (2016a) Simplification and analysis of a model of social interaction in voting, European Physical Journal B, 89:159. DOI:10.1140/epjb/e2016-70062-2

Lafuerza L.F., Dyson L., Edmonds B., & McKane A.J. (2016b) Staged Models for Interdisciplinary Research. PLoS ONE, 11(6): e0157261. DOI:10.1371/journal.pone.0157261

Lynch, P. (2008). The origins of computer weather prediction and climate modeling. Journal of Computational Physics, 227(7), 3431-3444. DOI:10.1016/j.jcp.2007.02.034

Robinson, A. (2018) Did Einstein really say that? Nature, 557, 30. DOI:10.1038/d41586-018-05004-4

Thompson, E. (2022) Escape from Model Land. Basic Books. ISBN-13: 9781529364873


Edmonds, B. (2023) The inevitable “layering” of models to extend the reach of our understanding. Review of Artificial Societies and Social Simulation, 9 Feb 2023. https://rofasss.org/2023/02/09/layering


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Socio-Cognitive Systems – a position statement

By Frank Dignum1, Bruce Edmonds2 and Dino Carpentras3

1Department of Computing Science, Faculty of Science and Technology, Umeå University, frank.dignum@umu.se
2Centre for Policy Modelling, Manchester Metropolitan University, bruce@edmonds.name
3Department of Psychology, University of Limerick, dino.carpentras@gmail.com

In this position paper we argue for the creation of a new ‘field’: Socio-Cognitive Systems. The point of doing this is to highlight the importance of a multi-levelled approach to understanding those phenomena where the cognitive and the social are inextricably intertwined – understanding them together.

What goes on ‘in the head’ and what goes on ‘in society’ are complex questions. Each of these deserves serious study on its own – motivating whole fields to answer them. However, it is becoming increasingly clear that these two questions are deeply related. Humans are fundamentally social beings, and it is likely that many features of their cognition have evolved because they enable them to live within groups (Herrmann et al. 2007). Whilst some of these social features can be studied separately (e.g. in a laboratory), others only become fully manifest within society at large. On the other hand, it is also clear that how society ‘happens’ is complicated and subtle and that these processes are shaped by the nature of our cognition. In other words, what people ‘think’ matters for understanding how society ‘is’ and vice versa. For many reasons, both of these questions are difficult to answer. As a result of these difficulties, many compromises are necessary in order to make progress on them, but each compromise also implies some limitations. The main two types of compromise consist of limiting the analysis to only one of the two (i.e. either cognition or society)[1]. To take but a few examples:

  1. Neuro-scientists study what happens between systems of neurones to understand how the brain does things and this is so complex that even relatively small ensembles of neurones are at the limits of scientific understanding.
  2. Psychologists see what can be understood of cognition from the outside, usually in the laboratory so that some of the many dimensions can be controlled and isolated. However, what can be reproduced in a laboratory is a limited part of behaviour that might be displayed in a natural social context.
  3. Economists limit themselves to the study of the (largely monetary) exchange of services/things that could occur under assumptions of individual rationality, which is a model of thinking not based upon empirical data at the individual level. Indeed it is known to contradict a lot of the data and may only be a good approximation for average behaviour under very special circumstances.
  4. Ethnomethodologists will enter a social context and describe in detail the social and individual experience there, but not generalise beyond that and not delve into the cognition of those they observe.
  5. Other social scientists will take a broader view, look at a variety of social evidence, and theorise about aspects of that part of society. They (almost always) do not take individual cognition into account in these theories and do not seek to integrate the social and the cognitive levels.

Each of these, in different ways, separates the internal mechanisms of thought from the wider mechanisms of society, or limits its focus to a very specific topic. This is understandable; what each is studying is enough to keep them occupied for many lifetimes. However, this means that each of these fields has developed its own terms, issues, approaches and techniques, which makes relating results between fields difficult (as Kuhn, 1962, pointed out).


Figure 1: Schematic representation of the relationship between the individual and society. Individuals’ cognition is shaped by society; at the same time, society is shaped by individuals’ beliefs and behaviour.

This separation of the cognitive and the social may get in the way of understanding many things that we observe. Some phenomena seem to involve a combination of these aspects in a fundamental way – the individual (and its cognition) being part of society as well as society being part of the individual. Some examples of this are as follows (but please note that this is far from an exhaustive list).

  • Norms. A social norm is a constraint or obligation upon action imposed by society (or perceived as such). One may well be mistaken about a norm (e.g. whether it is ok to casually talk to others at a bus stop), thus it is also a belief – often not told to one explicitly but something one needs to infer from observation. However, for a social norm to hold it also needs to be an observable convention. Decisions to violate social norms require that the norm is an explicit (referable) object in the cognitive model. But the violation also has social consequences. If people react negatively to violations, the norm can be reinforced. But if violations are ignored, the norm might fade away. How new norms come about, or how old ones fade away, is a complex set of interlocking cognitive and social processes. Thus social norms are a phenomenon that essentially involves both the social and the cognitive (Conte et al. 2013).
  • Joint construction of social reality. Many of the constraints on our behaviour come from our perception of social reality. However, we also create this social reality and constantly update it. For example, we can invent a new procedure to select a person as head of department or exit a treaty, and thus have different ways of behaving after this change. However, these changes are not unconstrained in themselves. Sometimes the time is “ripe for change”, while at other times resistance is too great for any change to take place (even though a majority of the people involved would like to change). Thus what is socially real for us depends on what people individually believe is real, but this depends in complex ways on what other people believe and their status. And probably even more important: the “strength” of a social structure depends on the use people make of it. E.g. a head of department becomes important if all decisions in the department are deferred to the head, even though this might not be required by university rules or law.
  • Identity. Our (social) identity determines the way other people perceive us (e.g. a sports person, a nerd, a family man) and therefore creates expectations about our behaviour. We can create our identities ourselves and cultivate them, but at the same time, when we have a social identity, we try to live up to it. Thus, it will partially determine our goals and reactions, and even our feeling of self-esteem when we live up to our identity or fail to do so. As individuals we (at least sometimes) have a choice as to our desired identity, but in practice, this can only be realised with the consent of society. As a runner I might feel the need to run at least three times a week in order for other people to recognize me as a runner. At the same time, a person known as a runner might be excused from a meeting if training for an important event, thus reinforcing the importance of the “runner” identity.
  • Social practices. The concept already indicates that social practices are about the way people habitually interact and, through this interaction, shape social structures. Practices like shaking hands when greeting do not have to be efficient, but they are extremely important socially. For example, different groups, countries and cultures have different practices when greeting, and performing according to the practice shows whether you are part of the in-group or the out-group. However, practices can also change with circumstances and people, as happened, for example, with the practice of shaking hands during the covid-19 pandemic. Thus, they are flexible and adapt to the context. They are used as flexible mechanisms to efficiently fit interactions in groups, connecting persons and group behaviour.
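The interlocking of cognitive and social processes in norm dynamics (the first bullet above) can be illustrated with a minimal toy sketch. Everything here – the class, the parameters and the update rules – is a hypothetical invention for exposition, not a model from the literature cited above: each agent holds a belief about the norm’s strength (the cognitive side), while sanctioned or ignored violations feed back on those beliefs (the social side), so the norm either consolidates or erodes.

```python
import random

random.seed(1)

class Agent:
    """Cognitive side: each agent holds a belief about how strongly the norm holds."""
    def __init__(self, belief):
        self.belief = belief  # perceived strength of the norm, in [0, 1]

    def complies(self):
        # The stronger an agent believes the norm to be, the more likely it complies
        return random.random() < self.belief

def step(agents):
    """Social side: a violation is either sanctioned by an observer
    (reinforcing the norm) or ignored (eroding it)."""
    for agent in agents:
        if not agent.complies():
            observer = random.choice(agents)
            if random.random() < observer.belief:
                # sanction: the violator's belief in the norm is strengthened
                agent.belief = min(1.0, agent.belief + 0.1)
            else:
                # ignored violation: both parties' beliefs in the norm weaken
                observer.belief = max(0.0, observer.belief - 0.1)
                agent.belief = max(0.0, agent.belief - 0.05)

agents = [Agent(random.uniform(0.4, 0.9)) for _ in range(50)]
for _ in range(100):
    step(agents)
mean_belief = sum(a.belief for a in agents) / len(agents)
print(round(mean_belief, 2))  # the norm consolidates or fades, depending on the run
```

Even this crude sketch shows the essential point: the trajectory of the norm cannot be read off either the cognitive rule or the social reaction alone – it emerges from their interaction.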

As a result, this division between the cognitive and the social gets in the way not only of theoretical studies, but also of practical applications such as policy making. For example, interventions aimed at encouraging vaccination (such as compulsory vaccination) may reinforce the (social) identity of the vaccine hesitant. However, this risk and its possible consequences for society cannot be properly understood without a clear grasp of the dynamic evolution of social identity.

Computational models and systems provide a way of trying to understand the cognitive and the social together. For computational modellers, there is no particular reason to confine themselves to only the cognitive or only the social, because agent-based systems can include both within a single framework. In addition, the computational system is a dynamic model that can represent the interactions of the individuals that connect the cognitive models and the social models. Thus the fact that computational models have a natural way to represent actions as an integral and defining part of the socio-cognitive system is of prime importance. Given that actions are an integral part of the model, it is well suited to modelling the dynamics of socio-cognitive systems and tracking changes at both the social and the cognitive level. Therefore, within such systems we can study how cognitive processes may act to produce social phenomena whilst, at the same time, social realities are shaping the cognitive processes. Carley and Newell (1994) discuss what is necessary at the agent level for sociality; Hofstede et al. (2021) talk about how to understand sociality using computational models (including theories of individual action) – we want to understand both together. Thus, we can model the social embeddedness that Granovetter (1985) talked about – going beyond over- or under-socialised representations of human behaviour. It is not that computational models are innately suitable for modelling either the cognitive or the social, but that they can be appropriately structured (e.g. sets of interacting parts bridging micro-, meso- and macro-levels) and include arbitrary levels of complexity.
Lots of models that represent the social have entities that stand for the cognitive, but do not explicitly represent much of that detail – similarly, much cognitive modelling implies the social, in terms of stimuli and responses of an individual that would involve other social entities, but these other entities are not explicitly represented or are simplified away.

Socio-Cognitive Systems (SCS) are: those models and systems where both cognitive and social complexity are represented with a meaningful level of processual detail.

A good example of an application where this was of the greatest importance was the simulations developed for the covid-19 crisis. The spread of the corona virus at the macro level could be given by an epidemiological model, but the actual spread depended crucially on the human behaviour that resulted from individuals’ cognitive model of the situation. In Dignum (2021) it was shown how the socio-cognitive system approach was fundamental to obtaining better insights into the effectiveness of a range of covid-19 restrictions.
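A minimal sketch of this idea follows. It is illustrative only – the class, parameters and update rules below are invented for exposition and are far simpler than the models described in Dignum (2021) – but it shows the structural point: the macro-level spread emerges from micro-level contact decisions that follow each agent’s beliefs, which are in turn updated by what the agent observes socially.

```python
import random

random.seed(2)

class Person:
    def __init__(self):
        self.state = "S"           # epidemiological state: S, I or R
        self.perceived_risk = 0.0  # cognitive state: subjective belief that infection is likely

    def stays_home(self):
        # Behaviour follows from the agent's beliefs, not from the macro model
        return random.random() < self.perceived_risk

def step(pop, contacts=5, p_infect=0.3, p_recover=0.1):
    infectious = [p for p in pop if p.state == "I"]
    for person in pop:
        if person.stays_home():
            continue  # no contacts today
        met = random.sample(pop, contacts)
        # cognitive update: seeing infected contacts raises perceived risk
        seen_infected = sum(1 for m in met if m.state == "I")
        person.perceived_risk = min(1.0, person.perceived_risk + 0.05 * seen_infected)
        # epidemiological update: possible infection from a contact who is out and about
        if person.state == "S" and any(m.state == "I" and not m.stays_home() for m in met):
            if random.random() < p_infect:
                person.state = "I"
    for p in infectious:
        if random.random() < p_recover:
            p.state = "R"

pop = [Person() for _ in range(200)]
for p in random.sample(pop, 5):
    p.state = "I"
for _ in range(60):
    step(pop)
print(sum(1 for p in pop if p.state == "R"))  # total recovered after the run
```

Note that one cannot remove either half: without the `perceived_risk` update the epidemic curve is purely mechanical, and without the contact process there is nothing for the beliefs to respond to.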

Formality here is important. Computational systems are formal in the sense that they can be unambiguously passed around (i.e. unlike language, they are not differently re-interpreted by each individual) and operate according to their own precisely specified and explicit rules. This means that the same system can be examined and experimented on by a wider community of researchers. Sometimes, even when researchers from different fields find it difficult to talk to one another, they can fruitfully cooperate via a computational model (e.g. Lafuerza et al. 2016). Other kinds of formal systems (e.g. logic, maths) are geared towards models that describe an entire system from a bird’s-eye view. Although there are some exceptions, like fibred logics (Gabbay 1996), these are too abstract to be of much use in modelling practical situations. The lack of modularity has been addressed in context logics (Ghidini & Giunchiglia 2001), but the contexts used in this setting are not suitable for generating a more general societal model. The result is that most typical mathematical models use a number of agents which is either one, two or infinite (Miller and Page 2007), while important social phenomena happen in “medium-sized” populations. What all these formalisms miss is a natural way of specifying the dynamics of the system being modelled, while having ways to modularly describe individuals and the society resulting from their interactions. Thus, although much of what is represented in Socio-Cognitive Systems is not computational, the lingua franca for talking about them is.

The ‘double complexity’ of combining the cognitive and the social in the same system will bring its own methodological challenges. Such complexity will mean that many socio-cognitive systems will be, themselves, hard to understand or analyse. In the covid-19 simulations, described in (Dignum 2021), a large part of the work consisted of analysing, combining and representing the results in ways that were understandable. As an example, for one scenario 79 pages of graphs were produced showing different relations between potentially relevant variables. New tools and approaches will need to be developed to deal with this. We only have some hints of these, but it seems likely that secondary stages of analysis – understanding the models – will be necessary, resulting in a staged approach to abstraction (Lafuerza et al. 2016). In other words, we will need to model the socio-cognitive systems, maybe in terms of further (but simpler) socio-cognitive systems, but also maybe with a variety of other tools. We do not have a view on this further analysis, but this could include: machine learning, mathematics, logic, network analysis, statistics, and even qualitative approaches such as discourse analysis.

An interesting input for the methodology of designing and analysing socio-cognitive systems is anthropology and specifically ethnographical methods. Again, for the covid-19 simulations the first layer of the simulation was constructed based on “normal day life patterns”. Different types of persons were distinguished that each have their own pattern of living. These patterns interlock and form a fabric of social interactions that overall should satisfy most of the needs of the agents. Thus we calibrate the simulation based on the stories of types of people and their behaviours. Note that doing the same just based on available data of behaviour would not account for the underlying needs and motives of that behaviour and would not be a good basis for simulating changes. The stories that we used looked very similar to the type of reports ethnographers produce about certain communities. Thus further investigating this connection seems worthwhile.

For representing the output of complex socio-cognitive systems we can also use the analogue of stories. Basically, different stories show the underlying (assumed) causal relations between phenomena that are observed. E.g. an increase in people having lunch with friends can be explained by the fact that a curfew prevents people having dinner with their friends, while they still have a need to socialize; thus the alternative of going for lunch is chosen more often. One can see that the explaining story uses both social and cognitive elements to describe the results. Although in the covid-19 simulations we created a number of these stories, they were all created by hand after (sometimes weeks of) careful analysis of the results. Thus, for this kind of approach to be viable, new tools are required.

Although human society is the archetypal socio-cognitive system, it is not the only one. Both social animals and some artificial systems also come under this category. These may be very different from the human, and in the case of artificial systems completely different. Thus, Socio-Cognitive Systems is not limited to the discussion of observable phenomena, but can include constructed or evolved computational systems, and artificial societies. Examination of these (either theoretically or experimentally) opens up the possibility of finding either contrasts or commonalities between such systems – beyond what happens to exist in the natural world. However, we expect that ideas and theories that were conceived with human socio-cognitive systems in mind might often be an accessible starting point for understanding these other possibilities.

In a way, Socio-Cognitive Systems bring together two different threads in the work of Herbert Simon. Firstly, as in Simon (1948) it seeks to take seriously the complexity of human social behaviour without reducing this to overly simplistic theories of individual behaviour. Secondly, it adopts the approach of explicitly modelling the cognitive in computational models (Newell & Simon 1972). Simon did not bring these together in his lifetime, perhaps due to the limitations and difficulty of deploying the computational tools to do so. Instead, he tried to develop alternative mathematical models of aspects of thought (Simon 1957). However, those models were limited by being mathematical rather than computational.

To conclude, a field of Socio-Cognitive Systems would consider the cognitive and the social in an integrated fashion – understanding them together. We suggest that computational representation or implementation might be necessary to provide concrete reference between the various disciplines that are needed to understand them. We want to encourage research that considers the cognitive and the social in a truly integrated fashion. If labelling a new field achieves this, it will have served its purpose. However, there is the possibility that completely new classes of theory and complexity may be out there to be discovered – phenomena that are missed if either the cognitive or the social is considered alone – a new world of socio-cognitive systems.

Notes

[1] Some economic models claim to bridge between individual behaviour and macro outcomes; however, this bridge is traditionally notional. Many economists admit that their primary cognitive models (varieties of economic rationality) are not valid for individuals but describe what people do on average – i.e. they are macro-level models. In other economic models, whole populations are formalised as a single representative agent. Recently, some agent-based economic models have been emerging, but these are often constrained to agree with traditional models.

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, grant number ES/S015159/1.

References

Carley, K., & Newell, A. (1994). The nature of the social agent. Journal of mathematical sociology, 19(4): 221-262. DOI: 10.1080/0022250X.1994.9990145

Conte R., Andrighetto G. and Campennì M. (eds) (2013) Minding Norms – Mechanisms and dynamics of social order in agent societies. Oxford University Press, Oxford.

Dignum, F. (ed.) (2021) Social Simulation for a Crisis; Results and Lessons from Simulating the COVID-19 Crisis. Springer.

Herrmann E., Call J, Hernández-Lloreda MV, Hare B, Tomasello M (2007) Humans have evolved specialized skills of social cognition: The cultural intelligence hypothesis. Science 317(5843): 1360-1366. DOI: 10.1126/science.1146282

Hofstede, G.J, Frantz, C., Hoey, J., Scholz, G. and Schröder, T. (2021) Artificial Sociality Manifesto. Review of Artificial Societies and Social Simulation, 8th Apr 2021. https://rofasss.org/2021/04/08/artsocmanif/

Gabbay, D. M. (1996). Fibred Semantics and the Weaving of Logics Part 1: Modal and Intuitionistic Logics. The Journal of Symbolic Logic, 61(4), 1057–1120.

Ghidini, C., & Giunchiglia, F. (2001). Local models semantics, or contextual reasoning= locality+ compatibility. Artificial intelligence, 127(2), 221-259. DOI: 10.1016/S0004-3702(01)00064-9

Granovetter, M. (1985) Economic action and social structure: The problem of embeddedness. American Journal of Sociology 91(3): 481-510. DOI: 10.1086/228311

Kuhn, T,S, (1962) The structure of scientific revolutions. University of Chicago Press, Chicago

Lafuerza L.F., Dyson L., Edmonds B., McKane A.J. (2016) Staged Models for Interdisciplinary Research. PLoS ONE 11(6): e0157261, DOI: 10.1371/journal.pone.0157261

Miller, J. H. & Page, S. E. (2007). Complex Adaptive Systems. Princeton University Press.

Newell A, Simon H.A. (1972) Human problem solving. Prentice Hall, Englewood Cliffs, NJ

Simon, H.A. (1948) Administrative behaviour: A study of the decision making processes in administrative organisation. Macmillan, New York

Simon, H.A. (1957) Models of Man: Social and rational. John Wiley, New York


Dignum, F., Edmonds, B. and Carpentras, D. (2022) Socio-Cognitive Systems – A Position Statement. Review of Artificial Societies and Social Simulation, 2nd Apr 2022. https://rofasss.org/2022/04/02/scs


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

The Poverty of Suggestivism – the dangers of “suggests that” modelling

By Bruce Edmonds

Vagueness and refutation

A model[1] is basically composed of two parts (Zeigler 1976, Wartofsky 1979):

  1. A set of entities (such as mathematical equations, logical rules, computer code etc.) which can be used to make some inferences as to the consequences of that set (usually in conjunction with some data and parameter values)
  2. A mapping from this set to what it aims to represent – what the bits mean

Whilst a lot of attention has been paid to the internal rigour of the set of entities and the inferences that are made from them (1), the mapping to what they represent (2) has often been left implicit or incompletely described – sometimes only indicated by the labels given to its parts. The result is a model that vaguely relates to its target, suggesting its properties analogically. There is no well-defined way in which the model applies to anything observed; instead, a new mapping is invented each time it is used to think about a particular case. I call this way of modelling “Suggestivism”, because the model merely “suggests” things about what is being modelled.
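The two-part definition above can be made concrete with a small, hypothetical sketch (the names and the example equation are purely illustrative): the same set of inference-making entities under two different mappings constitutes, by this definition, two different models.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Model:
    # Part 1: entities from which inferences can be made
    infer: Callable[[Dict[str, float]], float]
    # Part 2: the mapping from model terms to what they represent
    mapping: Dict[str, str]

# The same logistic-growth rule as the inference machinery...
logistic = lambda p: p["x"] + p["r"] * p["x"] * (1 - p["x"])

# ...under two different mappings gives two different models:
epidemic = Model(logistic, {"x": "fraction of population infected",
                            "r": "transmission rate"})
adoption = Model(logistic, {"x": "fraction of market adopting a product",
                            "r": "word-of-mouth rate"})

# Identical inferences, different referents – so different empirical content
print(epidemic.infer({"x": 0.1, "r": 0.5}))
print(epidemic.mapping["x"], "|", adoption.mapping["x"])
```

A suggestivist model, in these terms, is one where only the `infer` part is pinned down while the `mapping` part is left to be improvised for each case.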

This is partly a recapitulation of Popper’s critique of vague theories in his book “The Poverty of Historicism” (1957). He characterised such theories as “irrefutable”, because whatever the facts, these theories could be made to fit them. Irrefutability is an indicator of a lack of precise mapping to reality – such vagueness makes refutation very hard. However, it is only an indicator; there may be other reasons than vagueness for it not being possible to test a theory – it is their disconnection from well-defined empirical reference that is the issue here.

Some might go as far as suggesting that any model or theory that is not refutable is “unscientific”, but this goes too far, implying a very restricted definition of what ‘science’ is. We need analogies to think about what we are doing and to gain insight into what we are studying (e.g. Hartman 1997) – for humans they are unavoidable, ‘baked’ into the way language works (Lakoff 1987). A model might make a set of ideas clear and help map out the consequences of a set of assumptions/structures/processes. Many of these suggestivist models relate to a set of ideas, and it is the ideas that relate to what is observed (albeit informally) (Edmonds 2001). However, such models do not capture anything reliable about what they refer to, and in that sense are not part of the set of established statements and theories that is at the core of science (Arnold 2014).

The dangers of suggestivist modelling

As above, there are valid uses of abstract or theoretical modelling where this is explicitly acknowledged and where no conclusions about observed phenomena are made. So what are the dangers of suggestivist modelling – why am I making such a fuss about it?

Firstly, people often seem to confuse a model that is an analogy – a way of thinking about stuff – with a model that tells us reliably about what we are studying. Thus they give undue weight to the analyses of abstract models that are, in fact, just thought experiments. Making models is a very intimate way of theorising – one spends an extended period of time interacting with one’s model: developing, checking, analysing etc. The result is a particularly strong version of “Kuhnian Spectacles” (Kuhn 1962), causing us to see the world through our model for weeks after. Under this strong influence it is natural to confuse what we can reliably infer about the world with how we are currently perceiving/thinking about it. Good scientists should then pause and wait for this effect to wear off so that they can effectively critique what they have done, its limitations and its implications. However, in the rush to get their work out, modellers often do not do this, resulting in a sloppy set of suggestive interpretations of their modelling.

Secondly, empirical modelling is hard. It is far easier (and, frankly, more fun) to play with non-empirical models. A scientific culture that treats suggestivist modelling as substantial progress and significantly rewards modellers that do it, will effectively divert a lot of modelling effort in this direction. Chattoe-Brown (2018) displayed evidence of this in his survey of opinion dynamics models – abstract, suggestivist modelling got far more reward (in terms of citations) than those that tried to relate their model to empirical data in a direct manner. Abstract modelling has a role in science, but if it is easier and more rewarding then the field will become unbalanced. It may give the impression of progress but not deliver on this impression. In a more mature science, researchers working on measurement methods (steps from observation to models) and collecting good data are as important as the theorists (Moss 1998).

Thirdly, it is hard to judge suggestivist models. Given that their connection to the modelling target is vague, there cannot be any decisive test of their success. Good modellers should declare the exact purpose of their model, e.g. that it is analogical or merely explores the consequences of theory (Edmonds et al. 2019), but then accept the consequences of this choice – namely, that it excludes drawing conclusions about the observed world. If it is a theoretical exploration then the comprehensiveness of the exploration, its scope and the applicability of the model can be judged, but if the model is analogical or illustrative then this is harder. Whilst one model may suggest X, another may suggest the opposite. It is quite easy to fix a model to get the outcomes one wants. Clearly, if a model makes startling suggestions – illustrating totally new ideas or providing a counter-example to widely held assumptions – then it helps science by widening the pool of theories or hypotheses that are considered. However, most suggestivist modelling does not do this.

Fourthly, their sheer flexibility of application causes problems – if one works hard enough, one can invent mappings to a wide range of cases; the limits are only those of our imagination. In effect, having a vague mapping from model to what it models adds huge flexibility, in a similar way to having a large number of free (non-empirical) parameters. This flexibility gives an impression of generality, and many desire simple and general models for complex phenomena. However, this generality is illusory because a different mapping is needed to make the model apply to each case. Given the above (1)+(2) definition of a model, this means that it is in fact a different model for each case – what a model refers to is part of the model. The same flexibility makes such models impossible to refute, since one can always adjust the mapping to save them. The apparent generality and immunity to refutation mean that such models hang around in the literature, due to their surface attractiveness.

Finally, these kinds of model are hugely influential beyond the community of modellers, reaching the wider public including policy actors. Narratives that start in abstract models make their way out and can be very influential (Vranckx 1999). Despite the lack of a rigorous mapping from model to reality, suggestivist models look impressive and scientific. For example, very abstract models from the Neo-Classical ‘Chicago School’ of economists supported narratives about the optimal efficiency of markets, leading to a reluctance to regulate them (Krugman 2009). This lack of regulation seems to have been one of the factors behind the 2007/8 economic crash (Baily et al. 2008). Modellers may understand that other modellers get over-enthusiastic and over-interpret their models, but others may not. It is the duty of modellers to give an accurate impression of the reliability of any modelling results and not to over-hype them.

How to recognise a suggestivist model

It can be hard to untangle how empirically vague a model is, because many descriptions of modelling work do not focus on making the mapping to what it represents precise. The reasons for this are various: the modeller might be conflating reality and what is in the model in their mind; the researcher might be new to modelling and not yet have decided what the purpose of their model is; the modeller might be over-keen to establish the importance of their work and so be hyping the motivation and conclusions; they might simply not have got around to thinking enough about the relationship between their model and what it might represent; or they might not have bothered to make the relationship explicit in their description. Whatever the reason, the reader of any description of such work is often left with an archaeological problem: trying to unearth what the relationship might be, based on indirect clues only. The only way to know for certain is to take a case one knows about and try to apply the model to it, but this is a time-consuming process and relies upon having a case with suitable data available. However, there are some indicators, albeit fallible ones, including the following.

  • A relatively simple model is interpreted as explaining a wide range of observed, complex phenomena
  • No data from an observed case study is compared to data from the model (often no data is brought in at all, merely abstract observations) – despite this, conclusions about some observed phenomena are made
  • The purpose of the model is not explicitly declared
  • The language of the paper seems to conflate talking about the model with what is being modelled
  • In the paper there are sudden abstraction ‘jumps’ between the motivation and the description of the model and back again to the interpretation of the results in terms of that motivation. The abstraction jumps involved are large and justified by some a priori theory or modelling precedents rather than evidence.

How to avoid suggestivist modelling

The ways to avoid the dangers of suggestivist modelling should be clear from the above discussion, but I will make them explicit here.

  • Be clear about the model purpose – that is, what does the model aim to achieve? This indicates how it should be judged by others (Edmonds et al. 2019)
  • Do not make any conclusions about the real world if you have not related the model to any data
  • Do not make any policy conclusions – things that might affect other people’s lives – without at least some independent validation of the model outcomes
  • Document how a model relates (or should relate) to data, the nature of that data and maybe even the process whereby that data should be obtained (Achter et al 2019)
  • Be as explicit as possible about what kinds of phenomena the model applies to – the limits of its scope
  • Keep the language about the model and what is being modelled distinct – for any statement it should be clear whether it is talking about the model or what it models (Edmonds 2020)
  • Highlight any bold assumptions in the specification of the model or describe what empirical foundation there is for them – be honest about these
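The checklist above can be supported in practice by attaching an explicit, machine-readable declaration of purpose, scope and assumptions to a model's code. The following is a minimal sketch in Python; the class and field names are hypothetical illustrations, not part of any established standard.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDeclaration:
    """A hypothetical explicit statement of a model's purpose and scope."""
    purpose: str                   # e.g. "theoretical exposition", "explanation", "prediction"
    scope: str                     # the kinds of phenomena the model is claimed to apply to
    empirical_data: list = field(default_factory=list)    # datasets the model has been related to
    bold_assumptions: list = field(default_factory=list)  # assumptions lacking empirical support

    def may_conclude_about_real_world(self) -> bool:
        # Following the guidance above: no real-world conclusions
        # unless the model has been related to some data.
        return len(self.empirical_data) > 0

decl = ModelDeclaration(
    purpose="theoretical exposition",
    scope="abstract opinion dynamics; no specific observed case",
    bold_assumptions=["agents hold one-dimensional continuous opinions"],
)
print(decl.may_conclude_about_real_world())  # a purely theoretical model: False
```

Even if never executed, writing such a declaration forces the modeller to state the purpose and its consequences up front, where readers (and reviewers) can check claims against it.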

Conclusion

Models can serve many different purposes (Epstein 2008). This is fine as long as the purpose of a model is always made clear, and model results are not interpreted further than their established purpose allows. Research which gives the impression that analogical, illustrative or theoretical modelling can tell us anything reliable about observed complex phenomena is not only sloppy science, but can have a deleterious impact – giving an impression of progress whilst diverting attention from empirically reliable work. Like a bad investment: if it looks too good and too easy to be true, it probably isn’t.

Notes

[1] We often use the word “model” in a lazy way to indicate (1) rather than (1)+(2) in this definition, but a set of entities without any meaning or mapping to anything else is not a model, as it does not represent anything. For example, a random set of equations or program instructions does not make a model.

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, grant number ES/S015159/1.

References

Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. & Siebers, P.-O. (2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/

Arnold, E. (2014). What’s wrong with social simulations?. The Monist, 97(3), 359-377. DOI:10.5840/monist201497323

Baily, M. N., Litan, R. E., & Johnson, M. S. (2008). The origins of the financial crisis. Fixing Finance Series – Paper 3, The Brookings Institution. https://www.brookings.edu/wp-content/uploads/2016/06/11_origins_crisis_baily_litan.pdf

Chattoe-Brown, E. (2018) What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table? Review of Artificial Societies and Social Simulation, 11th June 2018. https://rofasss.org/2018/06/11/ecb/

Edmonds, B. (2001) The Use of Models – making MABS actually work. In. Moss, S. and Davidsson, P. (eds.), Multi Agent Based Simulation, Lecture Notes in Artificial Intelligence, 1979:15-32. http://cfpm.org/cpmrep74.html

Edmonds, B. (2020) Basic Modelling Hygiene – keep descriptions about models and what they model clearly distinct. Review of Artificial Societies and Social Simulation, 22nd May 2020. https://rofasss.org/2020/05/22/modelling-hygiene/

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. & Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Epstein, J. M. (2008). Why model?. Journal of artificial societies and social simulation, 11(4), 12. https://jasss.soc.surrey.ac.uk/11/4/12.html

Hartmann, S. (1997): Modelling and the Aims of Science. In: Weingartner, P. et al (ed.) : The Role of Pragmatics in Contemporary Philosophy: Contributions of the Austrian Ludwig Wittgenstein Society. Vol. 5. Wien und Kirchberg: Digi-Buch. pp. 380-385. https://epub.ub.uni-muenchen.de/25393/

Krugman, P. (2009) How Did Economists Get It So Wrong? New York Times, Sept. 2nd 2009. https://www.nytimes.com/2009/09/06/magazine/06Economic-t.html

Kuhn, T.S. (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Lakoff, G. (1987) Women, fire, and dangerous things. University of Chicago Press, Chicago.

Morgan, M. S., & Morrison, M. (1999). Models as mediators. Cambridge: Cambridge University Press.

Moss, S. (1998) Social Simulation Models and Reality: Three Approaches. Centre for Policy Modelling Discussion Paper: CPM-98-35, http://cfpm.org/cpmrep35.html

Popper, K. (1957). The poverty of historicism. Routledge.

Vranckx, An. (1999) Science, Fiction & the Appeal of Complexity. In Aerts, Diederik, Serge Gutwirth, Sonja Smets, and Luk Van Langehove, (eds.) Science, Technology, and Social Change: The Orange Book of “Einstein Meets Magritte.” Brussels: Vrije Universiteit Brussel; Dordrecht: Kluwer., pp. 283–301.

Wartofsky, M. W. (1979). The model muddle: Proposals for an immodest realism. In Models (pp. 1-11). Springer, Dordrecht.

Zeigler, B. P. (1976). Theory of Modeling and Simulation. Wiley Interscience, New York.


Edmonds, B. (2022) The Poverty of Suggestivism – the dangers of "suggests that" modelling. Review of Artificial Societies and Social Simulation, 28th Feb 2022. https://rofasss.org/2022/02/28/poverty-suggestivism


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Where Now For Experiments In Agent-Based Modelling? Report of a Round Table at SSC2021, held on 22 September 2021


By Dino Carpentras1, Edmund Chattoe-Brown2*, Bruce Edmonds3, Cesar García-Diaz4, Christian Kammler5, Anna Pagani6 and Nanda Wijermans7

*Corresponding author, 1Centre for Social Issues Research, University of Limerick, 2School of Media, Communication and Sociology, University of Leicester, 3Centre for Policy Modelling, Manchester Metropolitan University, 4Department of Business Administration, Pontificia Universidad Javeriana, 5Department of Computing Science, Umeå University, 6Laboratory on Human-Environment Relations in Urban Systems (HERUS), École Polytechnique Fédérale de Lausanne (EPFL), 7Stockholm Resilience Centre, Stockholm University.

Introduction

This round table was convened to advance and improve the use of experimental methods in Agent-Based Modelling, in the hope that both existing and potential users of the method would be able to identify steps towards this aim[i]. The session began with a presentation by Bruce Edmonds (http://cfpm.org/slides/experiments%20and%20ABM.pptx) whose main argument was that the traditional idea of experimentation (controlling extensively for the environment and manipulating variables) was too simplistic to add much to the understanding of the sort of complex systems modelled by ABMs and that we should therefore aim to enhance experiments (for example using richer experimental settings, richer measures of those settings and richer data – like discussions between participants as well as their behaviour). What follows is a summary of the main ideas discussed organised into themed sections.

What Experiments Are

Defining the field of experiments proved to be challenging on two counts. The first was that there are a number of labels for potentially relevant approaches (experiments themselves – for example, Boero et al. 2010, gaming – for example, Tykhonov et al. 2008, serious games – for example, Taillandier et al. 2019, companion/participatory modelling – for example, Ramanath and Gilbert 2004 and web-based gaming – for example, Basole et al. 2013) whose actual content overlap is unclear. Is it the case that a gaming approach is generally more in line with the argument proposed by Edmonds? How can we systematically distinguish the experimental content of a serious game approach from a gaming approach? This seems to be a problem in immature fields where the labels are invented first (often on the basis of a few rather divergent instances) and the methodology has to grow into them. It would be ludicrous if we couldn’t be sure whether a piece of research was survey-based or interview-based (and this would radically devalue the associated labels if it were so).

The second challenge, which is also more general in Agent-Based Modelling, is that the same labels are used differently by different researchers. It is not productive to argue about which uses are correct, but it is important that the concepts behind the different uses are clear so that a common scheme of labelling might ultimately be agreed. So, for example, ‘experiment’ can be used (and different round table participants had different perspectives on the uses they expected) to mean laboratory experiments (simplified settings with human subjects – again see, for example, Boero et al. 2010), experiments with ABMs (formal experimentation with a model that doesn’t necessarily have any empirical content – for example, Doran 1998) and natural experiments (choice of cases in the real world to, for example, test a theory – see Dinesen 2013).

One approach that may help with this diversity is to start developing possible dimensions of experimentation. One might be degree of control (all the way from very stripped down behavioural laboratory experiments to natural situations where the only control is to select the cases). Another might be data diversity: From pure analysis of ABMs (which need not involve data at all), through laboratory experiments that record only behaviour to ethnographic collection and analysis of diverse data in rich experiments (like companion modelling exercises.) But it is important for progress that the field develops robust concepts that allow meaningful distinctions and does not get distracted into pointless arguments about labelling. Furthermore, we must consider the possible scientific implications of experimentation carried out at different points in the dimension space: For example, what are the relative strengths and limitations of experiments that are more or less controlled or more or less data diverse? Is there a “sweet spot” where the benefit of experiments is greatest to Agent-Based Modelling? If so, what is it and why?

The Philosophy of Experiment

A further challenge is the different beliefs (often associated with different disciplines) about the philosophical underpinnings of experiment, such as what we might mean by a cause. In an economic experiment, for example, the objective may be to confirm a universal theory of decision making through displayed behaviour only. (It is decisions described by this theory which are presumed to cause the pattern of observed behaviour.) This will probably not allow the researcher to discover that their basic theory is wrong (people are adaptive, not rational, after all) or not universal (agents have diverse strategies), or that some respondents simply didn’t understand the experiment (deviations caused by these phenomena may be labelled noise relative to the theory being tested, but in fact they are not).

By contrast, qualitative sociologists believe that subjective accounts (including accounts of participation in the experiment itself) can be made reliable and that they may offer direct accounts of certain kinds of cause: If I say I did something for a certain reason then it is at least possible that I actually did (and that the reason I did it is therefore its cause). It is no more likely that agreement will be reached on these matters in the context of experiments than it has been elsewhere. But Agent-Based Modelling should keep its reputation for open-mindedness by seeing what happens when qualitative data is also collected, rather than rejecting that approach out of hand as something that is “not done”. There is no need for Agent-Based Modelling blindly to follow the methodology of any one existing discipline in which experiments are conducted (and these disciplines often disagree vigorously on issues like payment and deception, with no evidence on either side, which should also make us cautious about their self-evident correctness).

Finally, there is a further complication in understanding experiments using analogies with the physical sciences. In understanding the evolution of a river system, for example, one can control/intervene, one can base theories on testable micro mechanisms (like percolation) and one can observe. But there is no equivalent to asking the river what it intends (whether we can do this effectively in social science or not).[ii] It is not totally clear how different kinds of data collection like these might relate to each other in the social sciences, for example, data from subjective accounts, behavioural experiments (which may show different things from what respondents claim) and, say, brain scans (which sidestep the social altogether). This relationship between different kinds of data currently seems incompletely explored and conceptualised. (There is a tendency just to look at easy cases like surveys versus interviews.)

The Challenge of Experiments as Practical Research

This is an important area where the actual and potential users of experiments participating in the round table diverged. Potential users wanted clear guidance on the resources, skills and practices involved in doing experimental work (and see similar issues in the behavioural strategy literature, for example, Reypens and Levine 2018). At the most basic level: when does a researcher need to do an experiment (rather than a survey, interviews or observation)? What are the resource requirements in terms of time, facilities and money (laboratory experiments are unusual in often needing specific funding to pay respondents rather than substituting the researcher working for free)? What design decisions need to be made (paying subjects, online or offline, can subjects be deceived?) How should the data be analysed (how should an ABM be validated against experimental data?) And so on.[iii] (There are also pros and cons to specific bits of potentially supporting technology like Amazon Mechanical Turk, Qualtrics and Prolific, which have not yet been documented and systematically compared for the novice with a background in Agent-Based Modelling.) There is much discussion about these matters in the traditional literatures of social sciences that do experiments (see, for example, Kagel and Roth 1995, Levine and Parkinson 1994 and Zelditch 2014) but this has not been summarised and tuned specifically for the needs of Agent-Based Modellers (or published where they are likely to see it).

However, it should not be forgotten that not all research efforts need this integration within the same project, so thinking about the problems that really need it is critical. Nonetheless, triangulation is indeed necessary within research programmes. For instance, in subfields such as strategic management and organisational design, it is uncommon to see an ABM integrated with an experiment as part of the same project (though there are exceptions, such as Vuculescu 2017). Instead, ABMs are typically used to explore “what if” scenarios, build process theories and illuminate potential empirical studies. In this approach, knowledge is accumulated instead through the triangulation of different methodologies in different projects (see Burton and Obel 2018). Additionally, modelling and experimental efforts are usually led by different specialists – for example, there is a Theoretical Organisational Models Society whose focus is the development of standards for theoretical organisation science.

In a relatively new and small area, all we often have is some examples of good practice (or more contentiously bad practice) of which not everyone is even aware. A preliminary step is thus to see to what extent people know of good practice and are able to agree that it is good (and perhaps why it is good).

Finally, there was a slightly separate discussion about the perspectives of experimental participants themselves. It may be that a general problem with unreal activity is that you know it is unreal (which may lead to problems with ecological validity – Bornstein 1999.) On the other hand, building on the enrichment argument put forward by Edmonds (above), there is at least anecdotal observational evidence that richer and more realistic settings may cause people to get “caught up” and perhaps participate more as they would in reality. Nonetheless, there are practical steps we can take to learn more about these phenomena by augmenting experimental designs. For example we might conduct interviews (or even group discussions) before and after experiments. This could make the initial biases of participants explicit and allow them to self-evaluate retrospectively the extent to which they got engaged (or perhaps even over-engaged) during the game. The first such questionnaire could be available before attending the experiment, whilst another could be administered right after the game (and perhaps even a third a week later). In addition to practical design solutions, there are also relevant existing literatures that experimental researchers should probably draw on in this area, for example that on systemic design and the associated concept of worldviews. But it is fair to say that we do not yet fully understand the issues here but that they clearly matter to the value of experimental data for Agent-Based Modelling.[iv]

Design of Experiments

Something that came across strongly in the round table discussion as argued by existing users of experimental methods was the desirability of either designing experiments directly based on a specific ABM structure (rather than trying to use a stripped down – purely behavioural – experiment) or mixing real and simulated participants in richer experimental settings. In line with the enrichment argument put forward by Edmonds, nobody seemed to be using stripped down experiments to specify, calibrate or validate ABM elements piecemeal. In the examples provided by round table participants, experiments corresponding closely to the ABM (and mixing real and simulated participants) seemed particularly valuable in tackling subjects that existing theory had not yet really nailed down or where it was clear that very little of the data needed for a particular ABM was available. But there was no sense that there is a clearly defined set of research designs with associated purposes on which the potential user can draw. (The possible role of experiments in supporting policy was also mentioned but no conclusions were drawn.)

Extracting Rich Data from Experiments

Traditional experiments are time-consuming to do, so they are frequently optimised to obtain the maximum power and discrimination between factors of interest. In such situations they will often limit their data collection to what is strictly necessary for testing their hypotheses. Furthermore, it seems to be a hangover from behaviourist psychology that one does not use self-reporting, on the grounds that it might be biased or simply involve false reconstruction (rationalisation). From the point of view of building or assessing ABMs, this approach wastes an opportunity. Due to the flexible nature of ABMs there is a need for as many empirical constraints upon modelling as possible. These constraints can come from theory, evidence or abstract principles (such as simplicity) but should not hinder the design of an ABM; rather they act as a check on its outcomes. Game-like situations can provide rich data about what is happening, simultaneously capturing decisions on action, the position and state of players, global game outcomes/scores and what players say to each other (see, for example, Janssen et al. 2010, Lindahl et al. 2021). Often, in social science one might have a survey with one set of participants, interviews with others and longitudinal data from yet others – even if these, in fact, involve the same people, the data will usually not indicate this through consistent IDs. When collecting data from a game (and especially from online games) there is a possibility of collecting linked data with consistent IDs – including interviews – that allows for a whole new level of ABM development and checking.
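The idea of linked, consistently identified data can be illustrated with a small sketch. The record formats, participant IDs and field names below are invented for illustration only; any real game platform would have its own schema.

```python
from collections import defaultdict

# Hypothetical streams of game data, each record carrying the same participant ID.
decisions  = [("p01", "round 1", "harvest 4"), ("p02", "round 1", "harvest 2")]
chat       = [("p01", "round 1", "let's not over-harvest"), ("p02", "round 1", "agreed")]
interviews = [("p01", "post-game", "I tried to cooperate after the chat")]

# Link everything by participant ID so each person's decisions, talk and
# reflections can be analysed together rather than as disconnected datasets.
linked = defaultdict(lambda: {"decisions": [], "chat": [], "interviews": []})
for pid, when, what in decisions:
    linked[pid]["decisions"].append((when, what))
for pid, when, what in chat:
    linked[pid]["chat"].append((when, what))
for pid, when, what in interviews:
    linked[pid]["interviews"].append((when, what))

print(linked["p01"]["chat"])  # p01's talk, joined to the same ID as their actions
```

The point is simply that a shared key across behavioural, conversational and interview data makes cross-checking an ABM against all three possible, which separately collected datasets do not allow.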

Standards and Institutional Bootstrapping

This is also a wider problem in newer methods like Agent-Based Modelling. How can we foster agreement about what we are doing (which has to build on clear concepts) and institutionalise those agreements into standards for a field (particularly when there is academic competition and pressure to publish)?[v] If certain journals will not publish experiments (or experiments done in certain ways), what can we do about that? JASSS was started because it was so hard to publish ABMs. It has certainly made that easier, but is there a cost through less publication in other journals? See, for example, Squazzoni and Casnici (2013). Would it have been better for the rigour and wider acceptance of Agent-Based Modelling if we had met the standards of other fields rather than setting our own? This strategy, harder in the short term, may also have promoted communication and collaboration better in the long term. If reviewing is arbitrary (reviewers do not seem to have a common view of what makes an experiment legitimate) then can that situation be improved (and, in particular, how do we best go about that with limited resources)? To some extent, normal individualised academic work may achieve progress here (researchers make proposals, dispute and refine them, and their resulting quality ensures at least some individualised adoption by other researchers) but there is often an observable gap in performance: Even though most modellers will endorse the value of data for modelling in principle, most models are still non-empirical in practice (Angus and Hassani-Mahmooei 2015, Figure 9). The jury is still out on the best way to improve reviewer consistency, use the power of peer review to impose better standards (and thus resolve a collective action problem under academic competition[vi]) and so on, but recognising and trying to address these issues is clearly important to the health of experimental methods in Agent-Based Modelling.
Since running experiments in association with ABMs is already challenging, adding the problem of arbitrary reviewer standards makes the publication process even harder. This discourages scientists from following this path and therefore retards this kind of research generally. Again, here, useful resources (like the Psychological Science Accelerator, which facilitates greater experimental rigour by various means) were suggested in discussion as raw material for our own improvements to experiments in Agent-Based Modelling.

Another issue with newer methods such as Agent-Based Modelling is the path to legitimation before the wider scientific community. The need to integrate ABMs with experiments does not necessarily imply that the legitimation of the former is achieved by the latter. Experimental economists, for instance, may still argue that (in the investigation of behaviour and its implications for policy issues), experiments and data analysis alone suffice. They may rightly ask: What is the additional usefulness of an ABM? If an ABM always needs to be justified by an experiment and then validated by a statistical model of its output, then the method might not be essential at all. Orthodox economists skip the Agent-Based Modelling part: They build behavioural experiments, gather (rich) data, run econometric models and make predictions, without the need (at least as they see it) to build any computational representation. Of course, the usefulness of models lies in the premise that they may tell us something that experiments alone cannot (see Knudsen et al. 2019). But progress needs to be made in understanding (and perhaps reconciling) these divergent positions. The social simulation community therefore needs to be clearer about exactly what ABMs can contribute beyond the limitations of an experiment, especially when addressing audiences of non-modellers (Ballard et al. 2021). Not only is a model valuable when rigorously validated against data, but also whenever it makes sense of the data in ways that traditional methods cannot.

Where Now?

Researchers usually have more enthusiasm than they have time. In order to make things happen in an academic context it is not enough to have good ideas, people need to sign up and run with them. There are many things that stand a reasonable chance of improving the profile and practice of experiments in Agent-Based Modelling (regular sessions at SSC, systematic reviews, practical guidelines and evaluated case studies, discussion groups, books or journal special issues, training and funding applications that build networks and teams) but to a great extent, what happens will be decided by those who make it happen. The organisers of this round table (Nanda Wijermans and Edmund Chattoe-Brown) are very keen to support and coordinate further activity and this summary of discussions is the first step to promote that. We hope to hear from you.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16, <http://jasss.soc.surrey.ac.uk/18/4/16.html>. doi:10.18564/jasss.2952

Ballard, Timothy, Palada, Hector, Griffin, Mark and Neal, Andrew (2021) ‘An Integrated Approach to Testing Dynamic, Multilevel Theory: Using Computational Models to Connect Theory, Model, and Data’, Organizational Research Methods, 24(2), April, pp. 251-284. doi: 10.1177/1094428119881209

Basole, Rahul C., Bodner, Douglas A. and Rouse, William B. (2013) ‘Healthcare Management Through Organizational Simulation’, Decision Support Systems, 55(2), May, pp. 552-563. doi:10.1016/j.dss.2012.10.012

Boero, Riccardo, Bravo, Giangiacomo, Castellani, Marco and Squazzoni, Flaminio (2010) ‘Why Bother with What Others Tell You? An Experimental Data-Driven Agent-Based Model’, Journal of Artificial Societies and Social Simulation, 13(3), June, article 6, <https://www.jasss.org/13/3/6.html>. doi:10.18564/jasss.1620

Bornstein, Brian H. (1999) ‘The Ecological Validity of Jury Simulations: Is the Jury Still Out?’ Law and Human Behavior, 23(1), February, pp. 75-91. doi:10.1023/A:1022326807441

Burton, Richard M. and Obel, Børge (2018) ‘The Science of Organizational Design: Fit Between Structure and Coordination’, Journal of Organization Design, 7(1), December, article 5. doi:10.1186/s41469-018-0029-2

Derbyshire, James (2020) ‘Answers to Questions on Uncertainty in Geography: Old Lessons and New Scenario Tools’, Environment and Planning A: Economy and Space, 52(4), June, pp. 710-727. doi:10.1177/0308518X19877885

Dinesen, Peter Thisted (2013) ‘Where You Come From or Where You Live? Examining the Cultural and Institutional Explanation of Generalized Trust Using Migration as a Natural Experiment’, European Sociological Review, 29(1), February, pp. 114-128. doi:10.1093/esr/jcr044

Doran, Jim (1998) ‘Simulating Collective Misbelief’, Journal of Artificial Societies and Social Simulation, 1(1), January, article 1, <https://www.jasss.org/1/1/3.html>.

Janssen, Marco A., Holahan, Robert, Lee, Allen and Ostrom, Elinor (2010) ‘Lab Experiments for the Study of Social-Ecological Systems’, Science, 328(5978), 30 April, pp. 613-617. doi:10.1126/science.1183532

Kagel, John H. and Roth, Alvin E. (eds.) (1995) The Handbook of Experimental Economics (Princeton, NJ: Princeton University Press).

Knudsen, Thorbjørn, Levinthal, Daniel A. and Puranam, Phanish (2019) ‘Editorial: A Model is a Model’, Strategy Science, 4(1), March, pp. 1-3. doi:10.1287/stsc.2019.0077

Levine, Gustav and Parkinson, Stanley (1994) Experimental Methods in Psychology (Hillsdale, NJ: Lawrence Erlbaum Associates).

Lindahl, Therese, Janssen, Marco A. and Schill, Caroline (2021) ‘Controlled Behavioural Experiments’, in Biggs, Reinette, de Vos, Alta, Preiser, Rika, Clements, Hayley, Maciejewski, Kristine and Schlüter, Maja (eds.) The Routledge Handbook of Research Methods for Social-Ecological Systems (London: Routledge), pp. 295-306. doi:10.4324/9781003021339-25

Ramanath, Ana Maria and Gilbert, Nigel (2004) ‘The Design of Participatory Agent-Based Social Simulations’, Journal of Artificial Societies and Social Simulation, 7(4), October, article 1, <https://www.jasss.org/7/4/1.html>.

Reypens, Charlotte and Levine, Sheen S. (2018) ‘Behavior in Behavioral Strategy: Capturing, Measuring, Analyzing’, in Behavioral Strategy in Perspective, Advances in Strategic Management Volume 39 (Bingley: Emerald Publishing), pp. 221-246. doi:10.1108/S0742-332220180000039016

Squazzoni, Flaminio and Casnici, Niccolò (2013) ‘Is Social Simulation a Social Science Outstation? A Bibliometric Analysis of the Impact of JASSS’, Journal of Artificial Societies and Social Simulation, 16(1), January, article 10, <https://www.jasss.org/16/1/10.html>. doi:10.18564/jasss.2192

Taillandier, Patrick, Grignard, Arnaud, Marilleau, Nicolas, Philippon, Damien, Huynh, Quang-Nghi, Gaudou, Benoit and Drogoul, Alexis (2019) ‘Participatory Modeling and Simulation with the GAMA Platform’, Journal of Artificial Societies and Social Simulation, 22(2), March, article 3, <https://www.jasss.org/22/2/3.html>. doi:10.18564/jasss.3964

Tykhonov, Dmytro, Jonker, Catholijn, Meijer, Sebastiaan and Verwaart, Tim (2008) ‘Agent-Based Simulation of the Trust and Tracing Game for Supply Chains and Networks’, Journal of Artificial Societies and Social Simulation, 11(3), June, article 1, <https://www.jasss.org/11/3/1.html>.

Vuculescu, Oana (2017) ‘Searching Far Away from the Lamp-Post: An Agent-Based Model’, Strategic Organization, 15(2), May, pp. 242-263. doi:10.1177/1476127016669869

Zelditch, Morris Junior (2007) ‘Laboratory Experiments in Sociology’, in Webster, Murray Junior and Sell, Jane (eds.) Laboratory Experiments in the Social Sciences (New York, NY: Elsevier), pp. 183-197.


Notes

[i] This event was organised (and the resulting article was written) as part of “Towards Realistic Computational Models of Social Influence Dynamics”, a project funded through ESRC (ES/S015159/1) by ORA Round 5 and involving Bruce Edmonds (PI) and Edmund Chattoe-Brown (CoI). More about SSC2021 (Social Simulation Conference 2021) can be found at https://ssc2021.uek.krakow.pl

[ii] This issue is actually very challenging for social science more generally. When considering interventions in social systems, knowing and acting might be so deeply intertwined (Derbyshire 2020) that interventions may modify the same behaviours that an experiment is aiming to understand.

[iii] In addition, experiments often require institutional ethics approval (but so do interviews, gaming activities and other sorts of empirical research, of course), something with which non-empirical Agent-Based Modellers may have little experience.

[iv] Chattoe-Brown had interesting personal experience of this. He took part in a simple team gaming exercise about running a computer firm. The team quickly worked out that the game assumed an infinite return to advertising (so you could have a computer magazine consisting entirely of adverts) independent of the actual quality of the product. They thus simultaneously performed very well in the game from the perspective of an external observer but remained deeply sceptical that this was a good lesson to impart about running an actual firm. But since the coordinators never asked the team members for their subjective view, they may have assumed that the simulation was also a success in its didactic mission.

[v] We should also not assume it is best to set our own standards from scratch. It may be valuable to attempt integration with existing approaches, like qualitative validity (https://conjointly.com/kb/qualitative-validity/) particularly when these are already attempting to be multidisciplinary and/or to bridge the gap between, for example, qualitative and quantitative data.

[vi] Although journals also face such a collective action problem at a different level. If they are too exacting relative to their status and existing practice, researchers will simply publish elsewhere.


Dino Carpentras, Edmund Chattoe-Brown, Bruce Edmonds, Cesar García-Diaz, Christian Kammler, Anna Pagani and Nanda Wijermans (2021) Where Now For Experiments In Agent-Based Modelling? Report of a Round Table as Part of SSC2021. Review of Artificial Societies and Social Simulation, 2nd November 2021. https://rofasss.org/2021/11/02/round-table-ssc2021-experiments/


The Systematic Comparison of Agent-Based Policy Models – It’s time we got our act together!

By Mike Bithell and Bruce Edmonds

Model Intercomparison

The recent Covid crisis has led to a surge of new model development and a renewed interest in the use of models as policy tools. While this is in some senses welcome, the sudden appearance of many new models presents a problem in terms of their assessment, the appropriateness of their application and the reconciling of any differences in outcome. Even if models appear similar, their underlying assumptions may differ, their initial data might not be the same, policy options may be applied in different ways, stochastic effects may be explored to a varying extent, and model outputs may be presented in any number of different forms. As a result, it can be unclear which variations in output between models result from differences in mechanisms, parameters or data. Any comparison between models is made tricky by differences in experimental design and in the selection of output measures.

If we wish to do better, we suggest that a more formal approach to making comparisons between models would be helpful. However, it appears that this is not commonly undertaken in most fields in a systematic and persistent way, except in climate change and closely related fields such as pollution transport or economic impact modelling (although efforts are underway to extend such systematic comparison to ecosystem models – Wei et al., 2014; Tittensor et al., 2018). Examining the way in which this is done for climate models may therefore prove instructive.

Model Intercomparison Projects (MIP) in the Climate Community

Formal intercomparison of atmospheric models goes back at least to 1989 (Gates et al., 1999) with the first Atmospheric Model Intercomparison Project (AMIP), initiated by the World Climate Research Programme. By 1999 this had contributions from all significant atmospheric modelling groups, providing standardised time-series of over 30 model variables for one particular historical decade of simulation, with a standard experimental setup. Comparisons of model mean values with available data helped to reveal overall model strengths and weaknesses: no single model was best at simulating all aspects of the atmosphere, with accuracy varying greatly between simulations. The model outputs also formed a reference base for further intercomparison experiments, including targets for model improvement and reduction of systematic errors, as well as a starting point for improved experimental design, software and data management standards, and protocols for communication and model intercomparison. This led to AMIP II and, subsequently, to a series of Coupled Model Intercomparison Projects (CMIP), beginning with CMIP I in 1996. The latest iteration (CMIP6) is a collection of 23 separate model intercomparison experiments covering atmosphere, ocean, land surface, geo-engineering, and the paleoclimate. This collection is aimed at the 2021 IPCC process (AR6). Participating projects go through an endorsement process for inclusion (a process agreed with modelling groups), based on 10 criteria designed to ensure some degree of coherence between the various models – a further 18 MIPs are also listed as currently active (https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip6). Groups contribute to a central set of common experiments covering the period 1850 to the near-present. An overview of the whole process can be found in Eyring et al. (2016).

The current structure includes a set of three overarching questions covering the dynamics of the earth system, model systematic biases, and understanding possible future change under uncertainty. Individual MIPs may build on this to address one or more of a set of seven “grand science challenges” associated with the climate. Modelling groups agree to provide outputs in a standard form, obtained from a specified set of experiments under the same design, and to provide standardised documentation to go with their models. Originally (up to CMIP5), outputs were added to a central public repository for further analysis; however, the output grew so large under CMIP6 that the data is now held dispersed across repositories maintained by separate groups.

Other Examples

Two further, more recent examples of collective model development may also be helpful to consider.

Firstly, an informal network collating models across more than 50 research groups has already been generated as a result of the COVID crisis – the Covid Forecast Hub (https://covid19forecasthub.org). This is run by a small number of research groups collaborating with the US Centers for Disease Control and Prevention, and is strongly focussed on the epidemiology. Participants are encouraged to submit weekly forecasts, and these are integrated into a data repository and can be visualized on the website – viewers can look at forward projections, along with associated confidence intervals and model evaluation scores, including those for an ensemble of all models. The focus on forecasts in this case arises out of the strong policy drivers for the current crisis, but the main point is that it is possible to immediately view measures of model performance and to compare the different model types: one clear message that rapidly becomes apparent is that many of the forward projections have 95% (and sometimes even 50%) confidence intervals for incident deaths that more than span the full range of the past historic data. The benefit of comparing many different models in this case is apparent, as many of the historic single-model projections diverge strongly from the data (and the models most in error are not consistently the same ones over time), although the ensemble mean tends to be better.
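
The core comparisons described above – an ensemble mean across models and a check of how often observations fall inside a model's stated intervals – can be sketched in a few lines. This is purely illustrative (the function names and data are our own invention, not the Hub's actual pipeline):

```python
# Minimal sketch: ensemble mean over several models' point forecasts, and the
# fraction of observations falling inside a forecast interval. Hypothetical
# names and toy numbers; a real scoring pipeline is far more elaborate.
from statistics import mean

def ensemble_mean(forecasts):
    """forecasts: dict mapping model name -> list of point forecasts per step."""
    runs = list(forecasts.values())
    return [mean(step_values) for step_values in zip(*runs)]

def interval_coverage(lower, upper, observed):
    """Fraction of observations y with lower <= y <= upper at each time step."""
    hits = sum(1 for lo, hi, y in zip(lower, upper, observed) if lo <= y <= hi)
    return hits / len(observed)

forecasts = {"modelA": [100, 120, 150],
             "modelB": [90, 140, 170],
             "modelC": [110, 130, 160]}
print(ensemble_mean(forecasts))  # [100, 130, 160]
```

A wide interval that "more than spans the historic data" would show up here as coverage near 1.0 even for trivially wrong point forecasts, which is why both measures are needed together.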

As a second example, one could consider the Psychological Science Accelerator (PSA: Moshontz et al 2018, https://psysciacc.org/). This is a collaborative network set up with the aim of addressing the “replication crisis” in psychology: many previously published results in psychology have proved problematic to replicate as a result of small or non-representative sampling or use of experimental designs that do not generalize well or have not been used consistently either within or across studies. The PSA seeks to ensure accumulation of reliable and generalizable evidence in psychological science, based on principles of inclusion, decentralization, openness, transparency and rigour. The existence of this network has, for example, enabled the reinvestigation of previous experiments but with much larger and less nationally biased samples (e.g. Jones et al 2021).

The Benefits of the Intercomparison Exercises and Collaborative Model Building

More specifically, long-term intercomparison projects help to do the following.

  • Build on past effort. Rather than modellers re-inventing the wheel (or building a new framework) with each new model project, libraries of well-tested and documented models, with data archives, including code and experimental design, would allow researchers to more efficiently work on new problems, building on previous coding effort
  • Aid replication. Focussed long term intercomparison projects centred on model results with consistent standardised data formats would allow new versions of code to be quickly tested against historical archives to check whether expected results could be recovered and where differences might arise, particularly if different modelling languages were being used
  • Help to formalize. While informal code archives can help to illustrate the methods or theoretical foundations of a model, intercomparison projects help to understand which kinds of formal model might be good for particular applications, and which can be expected to produce helpful results for given desired output measures
  • Build credibility. A continuously updated set of model implementations and assessment of their areas of competence and lack thereof (as compared with available datasets) would help to demonstrate the usefulness (or otherwise) of ABM as a way to represent social systems
  • Influence Policy (where appropriate). Formal international policy organisations such as the IPCC or the more recently formed IPBES are effective partly through an underpinning of well tested and consistently updated models. As yet it is difficult to see whether such a body would be appropriate or effective for social systems, as we lack the background of demonstrable accumulated and well tested model results.
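
The replication benefit above depends on being able to re-run a model and check it against archived reference results. As a minimal sketch of what such a check could look like (the archive format, field names and tolerance here are hypothetical, not an agreed standard):

```python
# Illustrative sketch: compare a re-run model's summary measures against an
# archived reference (a JSON file of measure -> value), flagging any measure
# that drifts beyond an agreed relative tolerance.
import json
import math

def check_against_archive(new_results, archive_path, rel_tol=0.01):
    """Return a list of (measure, reference, new) tuples that disagree;
    an empty list means replication within tolerance."""
    with open(archive_path) as f:
        reference = json.load(f)
    failures = []
    for measure, ref_value in reference.items():
        new_value = new_results.get(measure)
        if new_value is None or not math.isclose(new_value, ref_value,
                                                 rel_tol=rel_tol):
            failures.append((measure, ref_value, new_value))
    return failures
```

Run routinely (e.g. in continuous integration), such a check would catch cases where a new code version, or a port to a different modelling language, no longer recovers the archived results.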

Lessons for ABM?

What might we be able to learn from the above, if we attempted to use a similar process to compare ABM policy models?

In the first place, the projects started small and grew over time: it would not be necessary, for example, to cover all possible ABM applications at the outset. On the other hand, the latest CMIP iterations include a wide range of different types of model covering many different aspects of the earth system, so that the breadth of possible model types need not be seen as a barrier.

Secondly, the climate inter-comparison project has been persistent for some 30 years – over this time many models have come and gone, but the history of inter-comparisons allows for an overview of how well these models have performed over time – data from the original AMIP I models is still available on request, supporting assessments concerning long-term model improvement.

Thirdly, although climate models are complex – implementing a variety of different mechanisms in different ways – they can still be compared by use of standardised outputs, and at least some (although not necessarily all) have been capable of direct comparison with empirical data.

Finally, an agreed experimental design and public archive for documentation and output that is stable over time is needed; this needs to be done via a collective agreement among the modelling groups involved so as to ensure a long-term buy-in from the community as a whole, so that there is a consistent basis for long-term model development, building on past experience.

The need for aligning or reproducing ABMs has long been recognised within the community (Axtell et al. 1996; Edmonds & Hales 2003), but usually on a one-to-one basis for verifying the specification of models against their implementation (although Hales et al. 2003 discusses a range of possibilities). However, this is far from a situation where many different models of basically the same phenomena are systematically compared – that would be a larger-scale collaboration lasting over a longer time span.

The community has already established a standardised form of documentation in the ODD protocol. Sharing of model code is also becoming routine, and can be easily achieved through CoMSES, GitHub or similar. The sharing of data in a long-term archive may require more investigation. As a starting project, COVID-19 provides an ideal opportunity for setting up such a model inter-comparison project – multiple groups already have running examples, and a shared set of outputs and experiments should be straightforward to agree on. This would potentially form a basis for forward-looking experiments designed to assist with possible future pandemic problems, and a basis on which to build further features into the existing disease-focussed modelling, such as the effects of economic, social and psychological issues.

Additional Challenges for ABMs of Social Phenomena

Nobody supposes that modelling social phenomena is going to have the same set of challenges that climate change models face. Some of the differences include:

  • The availability of good data. Social science is bedevilled by a paucity of the right kind of data. Although an increasing amount of relevant data is being produced, there are commercial, ethical and data protection barriers to accessing it and the data rarely concerns the same set of actors or events.
  • The understanding of micro-level behaviour. Whilst the micro-level understanding of our atmosphere is very well established, that of the behaviour of the most important actors (humans) is not. However, better data might partially substitute for a generic behavioural model of decision-making.
  • Agreement upon the goals of modelling. Although there will always be considerable variation in terms of what is wanted from a model of any particular social phenomenon, a common core of agreed objectives would help focus any comparison and give confidence via ensembles of projections. Although the MIPs and the Covid Forecast Hub are focussed on prediction, empirical explanation may be more important in other areas.
  • The available resources. ABM projects tend to be add-ons to larger endeavours and based around short-term grant funding. The funding for big ABM projects is yet to be established, not having the equivalent of weather forecasting to piggy-back on.
  • Persistence of modelling teams/projects. ABM tends to be quite short-term with each project developing a new model for a new project. This has made it hard to keep good modelling teams together.
  • Deep uncertainty. Whilst the set of possible factors and processes involved in a climate change model is well established, which social mechanisms need to be included in a model of any particular social phenomenon is unknown. For this reason, there is deep disagreement about the assumptions to be made in such models, as well as sharp divergence in outcome when a relevant mechanism operates in reality but is not included in a model. Whilst uncertainty in known mechanisms can be quantified, assessing the impact of such deep uncertainty is much harder.
  • The sensitivity of the political context. Even in the case of Climate Change, where the assumptions made are relatively well understood and made on objective bases, the modelling exercise and its outcomes can be politically contested. In other areas, where the representation of people’s behaviour might be key to model outcomes, this will need even more care (Aodha & Edmonds 2017).

However, some of these problems were solved in the case of Climate Change as a result of the CMIP exercises and the reports they ultimately resulted in. Over time the development of the models also allowed for a broadening and updating of modelling goals, starting from a relatively narrow initial set of experiments. Ensuring the persistence of individual modelling teams is easier in the context of an internationally recognised comparison project, because resources may be easier to obtain and there is a consistent central focus. The modelling projects became longer-term as individual researchers could establish a career doing just climate change modelling and the importance of the work was increasingly recognised. An ABM model comparison project might help solve some of these problems as the importance of its work is established.

Towards an Initial Proposal

The topic chosen for this project should be one where: (a) there is enough public interest to justify the effort, and (b) a number of models with a similar purpose in mind are being developed. At the current stage, this suggests dynamic models of COVID spread, but there are other possibilities, including transport models (where people go and who they meet) or criminological models (where and when crimes happen).

Whichever ensemble of models is focussed upon, these models should be compared against a common standard, with the same:

  • Start and end dates (but not necessarily the same temporal granularity)
  • Set of regions or cases covered
  • Population data (though possibly enhanced with extra data and maybe scaled population sizes)
  • Initial conditions in terms of the population
  • Core of agreed output measures (but maybe others as well)
  • Core set of cases, with agreed data sets, against which agreement is checked
  • Standard reporting format (though with a discussion section for further/other observations)
  • Requirement for full documentation and open-access code
  • Minimum number of runs with different random seeds
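
A checklist like the above could be captured in a machine-readable run specification that every participating team fills in. The sketch below is purely hypothetical – the field names are our own invention, not an agreed standard – but it illustrates how such a core could be made checkable:

```python
# Hypothetical run specification for a model inter-comparison submission.
# Field names are illustrative only; an actual standard would be agreed
# collectively by the participating modelling teams.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RunSpecification:
    model_name: str
    start_date: str                 # shared start date, e.g. "2020-03-01"
    end_date: str                   # shared end date
    regions: List[str]              # same set of regions/cases for all models
    population_source: str          # common population data set
    random_seeds: List[int]         # distinct seeds actually run
    core_measures: Dict[str, List[float]] = field(default_factory=dict)
    code_url: str = ""              # open-access code and documentation

    def is_complete(self, required_measures, min_seeds=10):
        """Check the agreed minimum: enough seeds, all core measures present."""
        return (len(self.random_seeds) >= min_seeds
                and all(m in self.core_measures for m in required_measures))
```

A submission could then be validated automatically on upload, so that incomplete runs (too few seeds, missing core measures) are caught before any comparison is attempted.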

Any modeller/team that had a suitable model and was willing to adhere to the rules would be welcome to participate (commercial, government or academic), and these teams would collectively decide the rules, the development and any reports on the comparisons. Other interested stakeholder groups could be involved, including professional/academic associations, NGOs and government departments, but in a consultative role providing wider critique – it is important that the terms and reports from the exercise be independent of any particular interest or authority.

Conclusion

We call upon those who think ABMs have the potential to usefully inform policy decisions to work together, in order that the transparency and rigour of our modelling matches our ambition. Whilst model comparison exercises of the kind described are important for any simulation work, particular care needs to be taken when the outcomes can affect people’s lives.

References

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822. (A version is at http://cfpm.org/discussionpapers/236)

Axtell, R., Axelrod, R., Epstein, J. M., & Cohen, M. D. (1996). Aligning simulation models: A case study and results. Computational & Mathematical Organization Theory, 1(2), 123-141. https://link.springer.com/article/10.1007%2FBF01299065

Edmonds, B., & Hales, D. (2003). Replication, replication and replication: Some hard lessons from model alignment. Journal of Artificial Societies and Social Simulation, 6(4), 11. http://jasss.soc.surrey.ac.uk/6/4/11.html

Eyring, V., Bony, S., Meehl, G. A., Senior, C. A., Stevens, B., Stouffer, R. J., & Taylor, K. E. (2016). Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geoscientific Model Development, 9(5), 1937–1958. https://doi.org/10.5194/gmd-9-1937-2016

Gates, W. L., Boyle, J. S., Covey, C., Dease, C. G., Doutriaux, C. M., Drach, R. S., Fiorino, M., Gleckler, P. J., Hnilo, J. J., Marlais, S. M., Phillips, T. J., Potter, G. L., Santer, B. D., Sperber, K. R., Taylor, K. E., & Williams, D. N. (1999). An Overview of the Results of the Atmospheric Model Intercomparison Project (AMIP I). In Bulletin of the American Meteorological Society (Vol. 80, Issue 1, pp. 29–55). American Meteorological Society. https://doi.org/10.1175/1520-0477(1999)080<0029:AOOTRO>2.0.CO;2

Hales, D., Rouchier, J., & Edmonds, B. (2003). Model-to-model analysis. Journal of Artificial Societies and Social Simulation, 6(4), 5. http://jasss.soc.surrey.ac.uk/6/4/5.html

Jones, B.C., DeBruine, L.M., Flake, J.K. et al. To which world regions does the valence–dominance model of social perception apply?. Nat Hum Behav 5, 159–169 (2021). https://doi.org/10.1038/s41562-020-01007-2

Moshontz, H. + 85 others (2018) The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network. Advances in Methods and Practices in Psychological Science, 1(4), 501-515. https://doi.org/10.1177/2515245918797607

Tittensor, D. P., Eddy, T. D., Lotze, H. K., Galbraith, E. D., Cheung, W., Barange, M., Blanchard, J. L., Bopp, L., Bryndum-Buchholz, A., Büchner, M., Bulman, C., Carozza, D. A., Christensen, V., Coll, M., Dunne, J. P., Fernandes, J. A., Fulton, E. A., Hobday, A. J., Huber, V., … Walker, N. D. (2018). A protocol for the intercomparison of marine fishery and ecosystem models: Fish-MIP v1.0. Geoscientific Model Development, 11(4), 1421–1442. https://doi.org/10.5194/gmd-11-1421-2018

Wei, Y., Liu, S., Huntzinger, D. N., Michalak, A. M., Viovy, N., Post, W. M., Schwalm, C. R., Schaefer, K., Jacobson, A. R., Lu, C., Tian, H., Ricciuto, D. M., Cook, R. B., Mao, J., & Shi, X. (2014). The north american carbon program multi-scale synthesis and terrestrial model intercomparison project – Part 2: Environmental driver data. Geoscientific Model Development, 7(6), 2875–2893. https://doi.org/10.5194/gmd-7-2875-2014


Bithell, M. and Edmonds, B. (2021) The Systematic Comparison of Agent-Based Policy Models - It’s time we got our act together!. Review of Artificial Societies and Social Simulation, 11th May 2021. https://rofasss.org/2021/05/11/SystComp/



Basic Modelling Hygiene – keep descriptions about models and what they model clearly distinct

By Bruce Edmonds

The essence of a model is that it relates to something else – what it models – even if this is only a vague or implicit mapping. Otherwise a model would be indistinguishable from any other computer code, set of equations etc (Hesse 1964; Wartofsky 1966). The centrality of this essence makes it unsurprising that many modellers seem to conflate the two.

This is made worse by three factors.

  1. A strong version of Kuhn’s “Spectacles” (Kuhn 1962) where the researcher goes beyond using the model as a way of thinking about the world to projecting their model onto the world, so they see the world only through that “lens”. This effect seems to be much stronger for simulation modelling due to the intimate interaction that occurs over a period of time between modellers and their model.
  2. It is a natural modelling heuristic to make the model more like what it models (Edmonds et al. 2019), introducing more elements of realism. This is especially strong in agent-based modelling, which lends itself to complication and descriptive realism.
  3. It is advantageous to stress the potential connections between a model (however abstract) and possible application areas. It is common to start an academic paper with a description of a real-world issue to motivate the work being reported on, then (even if the work is entirely abstract and unvalidated) to suggest real-world conclusions from what is observed. A lack of substantiated connections between the model and any empirical data can be covered up by slick passing from the world to the model and back again, and by a lack of clarity as to what the research achieves (Edmonds et al. 2019).

Whatever the reasons, the result is similar – the language used to describe entities, processes and outcomes in the model is the same as that used to describe what is intended to be modelled.

Such conflation is common in academic papers (albeit to different degrees). Expert modellers will not usually be confused by such language because they understand the modelling process and know what to look for in a paper. Thus one might ask: what is the harm of a little rhetoric and hype in the reporting of models? After all, we want modellers to be motivated and should thus be tolerant of their enthusiasm. To show the danger, I will look at an example that talks about modelling aspects of ethnocentrism.

In their paper, entitled “The Evolutionary Dominance of Ethnocentric Cooperation”, Hartshorn, Kaznatcheev & Shultz (2013) further analyse the model described in (Hammond & Axelrod 2006). The authors have reimplemented the original model and analysed it extensively, especially its temporal dynamics. The paper is solely about the original model and its properties; there is no pretence of any validation or calibration with respect to any data. The problem is in the language used, because the language could equally well refer to the model or to the real world.

Take the first sentence of its abstract: “Recent agent-based computer simulations suggest that ethnocentrism, often thought to rely on complex social cognition and learning, may have arisen through biological evolution“. This sounds like the simulation suggests something about the world we live in – that, as the title suggests, ethnocentric cooperation dominates other strategies (e.g. humanitarianism) and so is natural. The rest of the abstract goes on in the same sort of language, which could equally apply to the model or to the real world.

Expert modellers will understand that the authors were talking about the purely abstract properties of the model, but this will not be clear to other readers. And in this case there is evidence that it is a problem. The paper has, in recent years, shot to the top of page requests from the JASSS website (162,469 requests over a 7-day period, as of 22nd May 2020), but is nowhere in the top 50 articles in terms of JASSS-JASSS citations. Tracing where these requests come from reveals many alt-right and Russian web sites. It seems that many on the far right see this paper as confirmation of their Nationalist and Racist viewpoints. This is far more attention than a technical paper just about a model would get, so presumably they took it as support for real-world conclusions (or were using it to fool others about the scientific support for their viewpoints) – namely that Ethnocentrism does beat Humanitarianism and this is an evolutionary inevitability [note 1].

This is an extreme example of the confusion that occurs when non-expert modellers read many papers on modelling. Modellers too often imply a degree of real-world relevance when this is not justified by their research. They often imply real-world conclusions before any meaningful validation has been done. As agent-based simulation reaches a less specialised audience, this will become more important.

Some suggestions to avoid this kind of confusion:

  • After the motivation section, carefully outline what part this research will play in the broader programme – do not leave this implicit or imply a larger role than is justified
  • Add in the phrase “in the model” frequently in the text, even if this is a bit repetitive [note 2]
  • Keep discussions about the real world in different sections from those that discuss the model
  • Have an explicit statement of what the model can reliably say about the real world
  • Use different terms when referring to parts of the model and parts of the real world (e.g. “actors” for real-world individuals, “agents” in the model)
  • Be clear about the intended purpose of the model – what can be achieved as a result of this research (Edmonds et al. 2019) – for example, do not imply the model will be able to predict future real world properties until this has been demonstrated (de Matos Fernandes & Keijzer 2020)
  • Be very cautious in what you conclude from your model – make sure this is what has been already achieved rather than a reflection of your aspirations (in fact it might be better to not mention such hopes at all until they are realised)

Notes

  1. To see that this kind of conclusion is not necessary see (Hales & Edmonds 2019).
  2. This is similar to a campaign to add the words “in mice” in reports about medical “breakthroughs”, (https://www.statnews.com/2019/04/15/in-mice-twitter-account-hype-science-reporting)

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, grant number ES/S015159/1.

References

Edmonds, B., et al. (2019) Different Modelling Purposes, Journal of Artificial Societies and Social Simulation 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html>. doi:10.18564/jasss.3993

Hammond, R. A. and Axelrod, R. (2006). The Evolution of Ethnocentrism. Journal of Conflict Resolution, 50(6), 926–936. doi:10.1177/0022002706293470

Hartshorn, M., Kaznatcheev, A. and Shultz, T. (2013) The Evolutionary Dominance of Ethnocentric Cooperation, Journal of Artificial Societies and Social Simulation 16(3), 7. <http://jasss.soc.surrey.ac.uk/16/3/7.html>. doi:10.18564/jasss.2176

Hesse, M. (1964). Analogy and confirmation theory. Philosophy of Science, 31(4), 319-327.

Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Univ. of Chicago Press.

de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Wartofsky, M. (1966). The Model Muddle: Proposals for an Immodest Realism. Journal of Philosophy, 63(19), 589.


Edmonds, B. (2020) Basic Modelling Hygiene - keep descriptions about models and what they model clearly distinct. Review of Artificial Societies and Social Simulation, 22nd May 2020. https://rofasss.org/2020/05/22/modelling-hygiene/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

What more is needed for Democratically Accountable Modelling?

By Bruce Edmonds

(A contribution to the: JASSS-Covid19-Thread)

In the context of the Covid19 outbreak, Squazzoni et al. (2020) argued for the importance of making complex simulation models open (a call reiterated in Barton et al. 2020) and for relevant data to be made available to modellers. These are important steps but, I argue, more is needed.

The Central Dilemma

The crux of the dilemma is as follows. Complex and urgent situations (such as the Covid19 pandemic) are beyond the human mind to encompass – there are just too many possible interactions and complexities. For this reason one needs complex models, to leverage some understanding of the situation as a guide for what to do. We cannot directly understand the situation, but we can understand some of what a complex model tells us about it. The difficulty is that such models are, themselves, complex and difficult to understand, and it is easy to deceive oneself using one. Professional modellers only just manage to gain some understanding of such models (and then usually only with help and critique from many other modellers, and after having worked on them for some time: Edmonds 2020) – politicians and the public have no chance of doing so. This leaves decision-makers and policy actors in an invidious position: should they trust what the expert modellers say when it contradicts their own judgement? They will be criticised either way if, in hindsight, that decision appears to have been wrong; even if the advice supports their judgement, there is the danger of giving false confidence.

What options does such a policy maker have? In authoritarian or secretive states there is no problem (for the policy makers) – they can listen to whom they like (hiring and firing advisers until they get advice they are satisfied with), and then either claim credit if it turns out to be right or blame the advisers if it does not. However, such decisions are very often not value-free technocratic decisions, but ones that involve complex trade-offs affecting people’s lives. In these cases the democratic process is important for reaching good (or at least accountable) decisions. However, democratic debate and scientific rigour often do not mix well [note 1].

A Cautionary Tale

As discussed in (Aodha & Edmonds 2017), scientific modelling can make things worse, as in the case of the North Atlantic cod fisheries collapse. There, the modellers became enmeshed within the standards and wishes of those managing the situation and ended up confirming their wishful thinking. Technocratising the decision about how much it was safe to catch narrowed the debate down to particular measurement and modelling processes (which turned out to be gravely mistaken). In doing so the modellers contributed to the collapse of the industry, with severe social and ecological consequences.

What to do?

How best to interface between scientific and policy processes is not clear; however, some directions are becoming apparent.

  • That the process of developing and giving advice to policy actors should become more transparent, including who is giving advice and on what basis. In particular, any reservations or caveats that the experts add should be open to scrutiny so the line between advice (by the experts) and decision-making (by the politicians) is clearer.
  • That such experts are careful not to over-state or hype their own results – for example, by implying that their model can predict (or forecast) the future of complex situations and so anticipate the effects of policy before implementation (de Matos Fernandes and Keijzer 2020). Often a reliable assessment of results only emerges after a period of academic scrutiny and debate.
  • Policy actors need to learn a little bit about modelling, in particular when and how modelling can be reliably used. This is discussed in (Government Office for Science 2018, Calder et al. 2018) which also includes a very useful checklist for policy actors who deal with modellers.
  • That the public learn some maturity about the uncertainties in scientific debate and conclusions. Preliminary results and critiques tend to be seized on too early to support one side of a polarised debate, or models are rejected simply on the grounds that they are not 100% certain. We need to collectively develop ways of facing and living with uncertainty.
  • That the decision-making process is kept as open to input as possible. The modelling (and its limitations) should not be used as an excuse to limit the voices that are heard, or to narrow the debate to a purely technical one that excludes values (Aodha & Edmonds 2017).
  • That public funding bodies and journals should insist on researchers making their full code and documentation available to others for scrutiny, checking and further development (readers can help by signing the Open Modelling Foundation’s open letter and the campaign for Democratically Accountable Modelling’s manifesto).

Some Relevant Resources

  • CoMSeS.net — a collection of resources for computational model-based science, including a platform for publicly sharing simulation model code and documentation and forums for discussion of relevant issues (including one for covid19 models)
  • The Open Modelling Foundation — an international open science community that works to enable the next generation modelling of human and natural systems, including its standards and methodology.
  • The European Social Simulation Association — which is planning to launch some initiatives to encourage better modelling standards and facilitate access to data.
  • The Campaign for Democratic Modelling — which campaigns concerning the issues described in this article.

Notes

  1. As an example of this, see accounts of the relationship between the UK scientific advisory committees and the Government in the Financial Times and BuzzFeed.

References

Barton et al. (2020) Call for transparency of COVID-19 models. Science, Vol. 368(6490), 482-483. doi:10.1126/science.abb8637

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822. (see also http://cfpm.org/discussionpapers/236)

Calder, M., Craig, C., Culley, D., de Cani, R., Donnelly, C.A., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N., Hargrove, C., Hinds, D., Lane, D.C., Mitchell, D., Pavey, G., Robertson, D., Rosewell, B., Sherwin, S., Walport, M. & Wilson, A. (2018) Computational modelling for decision-making: where, why, what, who and how. Royal Society Open Science, 5, 172096. doi:10.1098/rsos.172096

Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/

de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Government Office for Science (2018) Computational Modelling: Technological Futures. https://www.gov.uk/government/publications/computational-modelling-blackett-review

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/



Good Modelling Takes a Lot of Time and Many Eyes

By Bruce Edmonds

(A contribution to the: JASSS-Covid19-Thread)

It is natural to want to help in a crisis (Squazzoni et al. 2020), but it is important to do something that is actually useful rather than just ‘adding to the noise’. Usefully modelling disease spread within complex societies is not easy to do – which essentially means there are two options:

  1. Model it in a fairly abstract manner to explore ideas and mechanisms, but without the empirical grounding and validation needed to reliably support policy making.
  2. Model it in an empirically testable manner with a view to answering some specific questions and possibly inform policy in a useful manner.

Which route one takes depends on the modelling purpose one has in mind (Edmonds et al. 2019). Both routes are legitimate as long as one is clear as to what each can and cannot do. The dangers come when there is confusion – taking the first route whilst giving policy actors the impression one is taking the second risks deceiving people and giving false confidence (Edmonds & Adoha 2019, Elsenbroich & Badham 2020). Here I am only discussing the second, empirically ambitious route.

Some of the questions that policy-makers might want to ask include: what might happen if we close the urban parks, allow children of a specific range of ages to go to school one day a week, cancel 75% of the intercity trains, allow people to go to beauty spots, allow visits to sick relatives in hospital, or test people as they recover and give them a certificate that allows them to go back to work?

To understand what might happen in these scenarios would require an agent-based model in which agents make the kind of mundane, everyday decisions of where to go and whom to meet, such that the patterns and outputs of the model are consistent with known data (possibly following the ‘Pattern-Oriented Modelling’ of Grimm & Railsback 2012). Such a model is currently lacking. Building one would require:

  1. A long-term, iterative development (Bithell 2018), with many cycles of model development followed by empirical comparison and data collection. This means that this kind of model might be more useful for the next epidemic than for the current one.
  2. A collective approach rather than one based on individual modellers. In any very complex model it is impossible to understand everything – there are bound to be small errors, and programmed mechanisms will subtly interact with one another. As Siebers & Venkatesan (2020) point out, this means collaborating with people from other disciplines (which always takes time to make work), but it also means an open approach in which lots of modellers routinely inspect, replicate, pull apart, critique and play with other modellers’ work – without anyone getting upset or feeling criticised. This involves an institutional and normative embedding of good modelling practice (as discussed in Squazzoni et al. 2020) but also requires a change in attitude – from individual to collective achievement.

Both are necessary if we are to build the modelling infrastructure that may allow us to model policy options for the next epidemic. We will need to start now if we are to be ready because it will not be easy.

References

Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb

Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidson, P. & Verhargen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H., & Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html> doi: 10.18564/jasss.3993

Elsenbroich, C. and Badham, J. (2020) Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/

Grimm, V., & Railsback, S. F. (2012). Pattern-oriented modelling: a ‘multi-scope’for predictive systems ecology. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1586), 298-310. doi:10.1098/rstb.2011.0180

Siebers, P-O. and Venkatesan, S. (2020) Get out of your silos and work together. Review of Artificial Societies and Social Simulation, 8th April 2020. https://rofasss.org/2020/0408/get-out-of-your-silos

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/

