Tag Archives: open_modelling_thread

Making Models FAIR: An educational initiative to build good ABM practices

By Marco A. Janssen1, Kelly Claborn1, Bruce Edmonds2, Mohsen Shahbaznezhadfard1 and Manuela Vanegas-Ferro1

  1. Arizona State University, USA
  2. Manchester Metropolitan University, UK

Imagine a world where models are available to build upon. You do not have to build from scratch or painstakingly try to figure out how published papers obtained their results. To achieve this utopian world, models have to be findable, accessible, interoperable, and reusable (FAIR). With the “Making Models FAIR” initiative, we seek to contribute to moving towards this world.

The initiative – Making Models FAIR – aims to provide capacity building opportunities to improve the skills, practices, and protocols to make computational models findable, accessible, interoperable and reusable (FAIR). You can find detailed information about the project on the website (tobefair.org), but here we will present the motivations behind the initiative and a brief outline of the activities.

There is increasing interest in making data and model code FAIR, and there is quite a lot of discussion of standards (https://www.openmodelingfoundation.org/). What is lacking are opportunities to gain the skills to do this in practice. We have selected a list of highly cited publications from different domains and developed a protocol for making those models FAIR. The protocol may be adapted over time as we learn what works well.

This list of model publications provides opportunities to learn the skills needed to make models FAIR. The current list is a starting point, and you can suggest alternative model publications as desired. The main goal is to provide the modeling community with a place to build capacity in making models FAIR. How do you use Github, code a model in a language or platform of your choice, and write good model documentation? These are necessary skills for collaboration and for developing FAIR models. A suggested way of participating is for an instructor to have student groups take part, selecting a model publication that is relevant to their research.

To make a model FAIR, we focus on five activities:

  1. If the code is not available with the publication, find out whether it can be obtained (contact the authors) or replicate the model from the model documentation. It might also happen that the code is available in programming language X, but you want it available in another language.
  2. If the code does not have a license, make sure an appropriate license is selected to make it available.
  3. Get a DOI, which is a permanent link to the model code and documentation. You could use comses.net or zenodo.org or similar services.
  4. Can you improve the model documentation? There is typically a form of documentation in a publication, in the article or an appendix, but is this detailed enough to understand how and why certain model choices have been made? Could you replicate the model from the information provided in the model documentation?
  5. What is the state of the model code? We know that most of us are not professional programmers and might be hesitant to share our code. Good practice is to comment on what the different procedures are doing, to define variables clearly, and not to leave abandoned ideas commented out in the code base (see the short sketch after this list).
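
As a purely illustrative sketch of the kind of code hygiene meant in point 5 (written in R; the function and variable names are hypothetical and not taken from any listed model):

    # Hypothetical illustration of point 5: descriptive names, a comment stating
    # what the procedure does and what its inputs mean, and no abandoned
    # experiments left commented out in the code base.

    # Returns the per-household harvest (tonnes) for one time step.
    # total_harvest: tonnes harvested this step; n_households: number of households (> 0).
    per_household_harvest <- function(total_harvest, n_households) {
      stopifnot(n_households > 0)
      total_harvest / n_households
    }

    per_household_harvest(120, 40)   # 3 tonnes per household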

Most of the models listed do not have code available with the publication, which will require participants to contact the original authors to obtain the code and/or to reproduce the code from the model documentation.

We are eager to learn what challenges people experience in making models FAIR. This could help us improve the protocols we provide. We also hope that those who make a model FAIR will publish a contribution in RofASSS or a relevant modeling journal. For journal contributions, it would be interesting to use a FAIR model to explore the robustness of the model results, especially for models that were published many years ago, when fewer computational resources were available.

The tobefair.org website contains a lot of detailed information and educational opportunities. A diagram on the site illustrates the road map for making models FAIR, so you can easily find the relevant information. Learn more by navigating to the About page and clicking through the diagram.

Making simulation models findable, accessible, interoperable and reusable is an important part of good scientific practice for simulation research. If important models fail to reach this standard, then this makes it hard for others to reproduce, check and extend them. If you want to be involved – to improve the listed models, or to learn the skills to make models FAIR – we hope you will participate in the project by going to tobefair.org and contributing.


Janssen, M.A., Claborn, K., Edmonds, B., Shahbaznezhadfard, M. and Vanegas-Ferro, M. (2023) Making Models FAIR: An educational initiative to build good ABM practices. Review of Artificial Societies and Social Simulation, 8 May 2023. https://rofasss.org/2023/05/11/fair/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Towards an Agent-based Platform for Crisis Management

By Christian Kammler1, Maarten Jensen1, Rajith Vidanaarachchi2 and Cezara Păstrăv1

  1. Department of Computer Science, Umeå University, Sweden
  2. Transport, Health, and Urban Design (THUD) Research Lab, The University of Melbourne, Australia

“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” — John Woods

1 Introduction

Agent-based modelling can be a valuable tool for gaining insight into crises [3], both during a crisis and beforehand, to increase resilience. However, in the current state of the art, models have to be built from scratch, which is not well suited to a crisis situation as it hinders quick responses. Consequently, the models do not play the central supportive role that they could. Not only is it hard to compare existing models (given the absence of existing standards) and assess their quality, but the most widespread toolkits, such as Netlogo [6], MESA (Python) [4], Repast (Java) [1,5], or Agents.jl (Julia) [2], are specific to the modelling field and lack the platform support necessary to empower policy makers to use the models (see Figure 1).

Fig. 1. Platform in the middle as a connector between the code and the model and interaction point for the user. It must not require any expert knowledge.

While some of these issues are systemic within the field of ABM (Agent-Based Modelling) itself, we aim to alleviate some of them in this particular context by using a platform purpose-built for developing and using ABM in a crisis. To do so, we view the problem through a multi-dimensional space consisting of the dimensions A-F:

  • A: Back-end to front-end interactivity
  • B: User and stakeholder levels
    – Social simulators to domain experts to policymakers
    – Skills and expertise in coding, modelling and manipulating a model
  • C: Crisis levels (Risk, Crisis, Resilience; also identified as Pre-Crisis, In-Crisis, Post-Crisis)
  • D: Language-specific to language-independent
  • E: Domain-specific to domain-independent (e.g. flooding, pandemic, climate change)
  • F: Required iteration level (instant, rapid, slow)

A platform can now be viewed as a vector within this space. While all of these axes require in-depth research (for example in terms of correlation or where existing platforms fit), we chose to focus on the functionalities we believe would be the most relevant in ABM for crises.

2 Rapid Development

During a crisis, time is compressed (mainly axes C and F): instant and rapid iterations are necessary, while slow iterations are not suitable. As the crisis develops, the model may need to be adjusted to quickly absorb new data, actors, events, and response strategies, leading to new scenarios that need to be modelled and simulated. In this environment, models need to be built with reusability and rapid versioning in mind from the beginning, otherwise every new change makes the model more unstable and less trustworthy.

While a suite of best practices exists in general software development, they are not widely used in the agent-based modelling community. The platform needs a coding environment that favours modular, reusable code and easy storage and sharing of such modules in well-organised libraries, and that makes it easy to integrate existing modules with new code.

Having this modularity not only helps with the right side of Figure 1; we can also use it to help with the left side of the figure at the same time: the conceptual model can be part of the respective module, making it quick to determine whether a module is relevant and to understand what the module is doing. Furthermore, it can be used to create a top-level, drag-and-drop model-building environment that allows rapid changes without having to write code (given that we take care of the interface properly).

Having the code and the conceptual model together would also lower the effort required to review these modules. The platform can further help with this task by keeping track of which modules have been reviewed, and with versioning of the modules, as they can be annotated accordingly. It has to be noted, however, that such a system does not guarantee a trustworthy model, even though it might be up to date in terms of versioning.

3 Model transparency

Another key factor we want to focus on is the stakeholder dimension (axis B). These stakeholders, mainly on the left side of Figure 1, are not experts in terms of models and thus need extensive support to be empowered to use the simulation in a way that is meaningful for them. While for the visualization side (the how?) we can use insights from Data Visualization, the why side is not that easy.

In a crisis, it is crucial to quickly determine why the model behaves in a certain way in order to interpret the results. Here, the platform can help by offering tools to build model narratives (at agent, group, or whole population level), to detect events and trends, and to compare model behavior between runs. We can take inspiration from the larger software development field for a few useful ideas on how to visually track model elements, log the behavior of model elements, or raise flags when certain conditions or events are detected. However, we also have to be careful here, as we easily move towards the technical solution side and away from the stakeholder and policy maker. Therefore, more research has to be done on what support policy makers actually need. An avenue here can be techniques from data story-telling.

4 The way forward

What this platform will look like depends on the approaches we take going forward. We think that the following two questions are central (also to prompt further research):

  1. What are relevant roles that can be identified for a platform?
  2. Given a role for the platform, where should it exist within the space described, and what attributes/characteristics should it have?

While these questions are key to identifying whether existing platforms can be extended and shaped in the way we need them, or whether we need to build a sandbox from scratch, we strongly advocate for an open-source approach. An open-source approach can not only help to draw on the range of expertise spread across the field, but also alleviate some of the trust challenges. One of the main challenges is that a trustworthy, well-curated model base with different modules does not yet exist. As such, the platform should aim first to aid in building this shared resource and add more related functionality as it becomes relevant. As for model-tracking tools, we should aim for simple tools first and build more complex functionality on top of them later.

A starting point can be to build modules for existing crises, such as earthquakes or floods, where it is possible to pre-identify most of the modelling needs, the level of stakeholder engagement, the level of policymaker engagement, etc.

With this we can establish the process of open-source modelling and learn how to integrate new knowledge quickly, and be potentially better prepared for unknown crises in the future.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise.

References

  1. Collier, N., North, M.: Parallel agent-based simulation with Repast for high performance computing. SIMULATION 89(10), 1215–1235 (2013), https://doi.org/10.1177/0037549712462620
  2. Datseris, G., Vahdati, A.R., DuBois, T.C.: Agents.jl: a performant and feature-full agent-based modeling software of minimal code complexity. SIMULATION 0(0), 003754972110688 (2022), https://doi.org/10.1177/00375497211068820
  3. Dignum, F. (ed.): Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer International Publishing, Cham (2021)
  4. Kazil, J., Masad, D., Crooks, A.: Utilizing python for agent-based modeling: The mesa framework. In: Thomson, R., Bisgin, H., Dancy, C., Hyder, A., Hussain, M. (eds.) Social, Cultural, and Behavioral Modeling. pp. 308–317. Springer International Publishing, Cham (2020)
  5. North, M.J., Collier, N.T., Ozik, J., Tatara, E.R., Macal, C.M., Bragen, M., Sydelko, P.: Complex adaptive systems modeling with Repast Simphony. Complex Adaptive Systems Modeling 1(1), 3 (March 2013), https://doi.org/10.1186/2194-3206-1-3
  6. Wilensky, U.: NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1999), http://ccl.northwestern.edu/netlogo/

Kammler, C., Jensen, M., Vidanaarachchi, R. and Păstrăv, C. (2023) Towards an Agent-based Platform for Crisis Management. Review of Artificial Societies and Social Simulation, 10 May 2023. https://rofasss.org/2023/05/10/abm4cm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

RofASSS to encourage reproduction reports and reviews of old papers & books

Reproducing simulation models is essential for verifying them and critiquing them. This involves a lot more work than one would think (Axtell & al. 1996) and can reveal surprising flaws, even in the simplest of models (e.g. Edmonds & Hales 2003). Such reproduction is especially vital if the model outcomes are likely to affect people’s lives (Chattoe-Brown & al. 2021).

Whilst substantial pieces of work – where there is extensive analysis or extension – can be submitted to JASSS/CMOT, some such reports might be much simpler and not justify a full journal paper. Thus RofASSS has decided to encourage researchers to submit reports of reproductions here – however simple or complicated.

Similarly, JASSS, CMOT etc. do publish book reviews, but these tend to be of recent books. Although new books are of obvious interest to those at the cutting edge of research, it often happens that important papers & books are forgotten or overlooked. At RofASSS we would like to encourage reviews of any relevant book or paper, however old.

References

Axtell, R., Axelrod, R., Epstein, J. M., & Cohen, M. D. (1996). Aligning simulation models: A case study and results. Computational & Mathematical Organization Theory, 1, 123-141. DOI: 10.1007/BF01299065

Edmonds, B., & Hales, D. (2003). Replication, replication and replication: Some hard lessons from model alignment. Journal of Artificial Societies and Social Simulation, 6(4), 11. https://jasss.soc.surrey.ac.uk/6/4/11.html

Chattoe-Brown, E. Gilbert, N., Robertson, D. A. & Watts, C. (2021) Reproduction as a Means of Evaluating Policy Models: A Case Study of a COVID-19 Simulation. medRxiv 2021.01.29.21250743; DOI: 10.1101/2021.01.29.21250743

Benefits of Open Research in Social Simulation: An Early-Career Researcher’s Perspective

By Hyesop Shin

Research Associate at the School of Geographical and Earth Sciences, University of Glasgow, UK

In March 2017, in the first year of my PhD, I attended a talk at the Microsoft Research Lab in Cambridge, UK, about the importance of reproducibility and replicability in science. Inspired by the talk, I redesigned my research to move beyond my word processor and hard disk to open repositories and social media. In my experience, there have been some challenges in learning from other people’s work and replicating it in my own project, but I have found it more beneficial to share my problems and solutions for other people who may encounter the same problems.

Having spoken to many early career researchers (ECRs) about the need for open science, specifically whether sharing code is essential, I found the consensus to be that it was not an essential component of their degree. A few answered that they were too embarrassed to share their code online because it was not written well enough. I somewhat empathised with their opinions but would, at the same time, insist that open research brings more benefit than shame.

I wrote this short piece to openly discuss the benefits of conducting open research and to suggest some points that ECRs should keep in mind. Throughout, I include some screenshots taken from my PhD work (Shin, 2021). I conclude by inviting personal experiences or other thoughts that might give more insight to the audience.

Benefits of Aiming for an Open Project

I argue here that being transparent and honest about your model development strengthens the credibility of the research. To this end, my thesis shared the original data, the scripts with annotations that are downloadable and executable, and wiki pages summarising the outcomes and interpretations (see Figure 1 for examples). This enables scholars and technicians to visit the repository if they are interested in the source code or outcomes. People can also comment if any errors or bugs are identified, or if the model does not execute on their machine, or they may suggest alternative ways to tackle the same problem. Even during development, many developers share their work via online repositories (e.g. Github, Gitlab) and social media to ask for advice. Agent-based models are mostly uploaded to CoMSeS.net (previously named OpenABM). All of this can improve the quality of research.


Figure 1 A screenshot of a Github page showing how open platforms can help other people to understand the outcomes step by step

More practically, one can learn new ideas by helping each other. If there is a technical issue that cannot be solved, the problem should not be kept hidden, but rather opened up and solved together with experts online and offline. Figure 2 is a pragmatic example of posing a question to a wide range of developers on Stackoverflow – an online community where programmers share and build code. Providing my NetLogo code, I asked how to send a group of agents from one location to another. An anonymous user with the ID JenB kindly responded with a new set of code, which helped me structure my code more effectively.


Figure 2 Raising a question about sending agents from one location to another in NetLogo

Another example concerns the errors I encountered whilst running NetLogo with the R package “nlrx” (Salecker et al., 2019). Here, R was used to drive iterative NetLogo jobs on an HPC (High Performance Computing) cluster to improve execution speed. However, much to my surprise, I received error messages due to early terminations of failed HPC jobs. Not knowing what to do, I posed a question to the developer of the package (see Figure 3) and luckily got a response explaining that the R ecosystem stores all assigned objects in RAM, but even with gigabytes of RAM, it struggles to write data for 96,822 patches over 8,764 ticks to a spreadsheet.

Stackoverflow also kindly informed me that NetLogo has a default memory ceiling of 1GB[i] and keeps each run in memory before it shuts down. Thus, if the model is huge and requires several iterations, the execution speed is likely to decrease after a few iterations. Before I learned this, I could not understand why the model took 1 hour 20 minutes to finish the first run but struggled to maintain that speed by the twentieth run. Hence, sharing the technical obstacles that occur in the middle of research can save a lot of time for those who are contemplating similar research.


Figure 3 Comments posted on an online repository regarding the memory issue that NetLogo and R encountered
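
For readers who want to try this route, the following is a minimal sketch of how such an nlrx run can be set up from R; the NetLogo path, the model file, the “population” constant and the reported metric are placeholders for whatever the actual model uses, and jvmmem raises the Java heap available to NetLogo:

    # Minimal sketch of driving a NetLogo model from R with nlrx (Salecker et al., 2019).
    # Paths, the model file, the "population" constant and the metric are placeholders.
    library(nlrx)

    nl <- nl(nlversion = "6.2.0",
             nlpath = "/opt/netlogo",               # NetLogo installation directory
             modelpath = "models/my_model.nlogo",   # hypothetical model file
             jvmmem = 4096)                         # Java heap in MB (raised above the default)

    nl@experiment <- experiment(expname = "baseline",
                                outpath = "results/",
                                repetition = 1,
                                tickmetrics = "true",
                                idsetup = "setup",   # name of the setup procedure
                                idgo = "go",         # name of the go procedure
                                runtime = 100,
                                metrics = c("count turtles"),
                                constants = list("population" = 1000))

    nl@simdesign <- simdesign_simple(nl = nl, nseeds = 3)   # three random seeds

    results <- run_nl_all(nl)   # on an HPC cluster, runs can instead be split across jobs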

The Future for Open Research

For future quantitative studies in social simulation, this paper suggests that students and researchers early in their careers should acclimatise themselves to using open-source platforms to conduct sustainable research. Just as clarity, conciseness, and coherence are regarded as the important C’s of good writing, good programming should take the following points into consideration.

First is clarity and conciseness (C&C). Here, clarity means that the scripts should be neatly documented. The computer does not know whether the code is dirty or neat; it only cares whether it is syntactically correct. But neatness matters when other people attempt to understand the task. If the outcome produces the same results, it is always better to write clearer and simpler code for other people and for future upgrades. Thus, researchers should refer to other people’s work and learn how to code effectively. Another way to maintain clarity in coding is to use descriptive and distinctive names for new variables. This might seem to contradict the aim of conciseness, but it is important because one of the common mistakes users make is to assign variables abstract names such as LP1, LP2…LP10, which seem clear and concise to the model builder but are much harder for others reviewing the code. The famous quote attributed to Einstein, “Everything should be made as simple as possible, but not simpler”, is an appropriate phrase for model builders to keep in mind. Hence, instead of coding LP9, names such as LandPriceIncreaseRate2009 (camel case) or landprice_incrate_2009 (snake case) make it easier for reviewers to understand the model.

Second is reproducibility and replicability (R&R). To be reproducible and replicable, no errors should occur when others execute the script, and any remaining errors or bugs should be reported. It is also useful to document the libraries and dependencies required. This is quite important, as different OSs (operating systems) behave differently when installing packages. For instance, the sf package in R installs slightly differently across OSs: on Windows and MacOSX it can be installed from a binary package, while on Linux GDAL (to read and write vector and raster data), Proj (which deals with projections), and GEOS (which provides geospatial functions) need to be installed separately before the package installation. Finally, it would be very helpful if unit tests were included with the model. While R and Python provide splendid examples in their vignettes, NetLogo offers its models library but goes no further than that. Offering unit testing examples can give a better understanding when the whole model is too complicated for others to comprehend. It can also give the impression that the modeller has full control of the model, because without unit tests the verification process becomes error-prone. The good news is that NetLogo has recently released the Beginner’s Interactive Dictionary, with friendly explanations, videos, and code examples[ii].
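
As a minimal sketch of what such a unit test could look like in R with the testthat package (the helper function below is hypothetical, standing in for any small, self-contained model procedure):

    # Hypothetical helper standing in for a small model procedure:
    # a simple dose proxy, concentration multiplied by time outdoors.
    exposure_index <- function(pm10, hours_outdoors) {
      stopifnot(pm10 >= 0, hours_outdoors >= 0)
      pm10 * hours_outdoors
    }

    library(testthat)

    test_that("exposure_index scales linearly with time outdoors", {
      expect_equal(exposure_index(50, 2), 100)
      expect_equal(exposure_index(50, 0), 0)
    })

    test_that("exposure_index rejects negative inputs", {
      expect_error(exposure_index(-1, 2))
    })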

Third is to maintain version control. In terms of sustainability, researchers should be aware of software maintenance. Much programming software relies on libraries and packages that are built against a particular version. If the software is upgraded and no longer accepts previous versions, then the package developers need to keep updating their packages to run on the new version. For example, NetLogo 6.0 introduced a significant change compared to versions 5.X. The biggest change was the replacement of tasks[iii] by anonymous procedures (Wilensky, 1999). This means that tasks are no longer primitives but are expressed with the arrow syntax. For example, to show each element of a list with one added, the old task syntax was foreach [1 2 3] [ show ? + 1 ], while the new version does the same job as foreach [1 2 3] [ x -> show x + 1 ]. If a model has not been converted to the new version, it can be viewed as a read-only model but cannot be executed. Other geospatial packages in R, such as rgdal and sf, have also struggled whenever a major update was made to the packages themselves or to R itself, because of their many dependencies. Even ArcGIS, a UI (User Interface) software package, had issues when it was upgraded from version 9.3 to 10: projects that were coded in VBA script under 9.3 broke because the scripts were not recognised in the new version, which is based on Python. This is another example of why backward compatibility and deprecation mechanisms are important.
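
One way to guard against this kind of breakage, sketched below on the assumption that the remotes and renv packages are acceptable to the project, is to record the package versions a model was built against and pin them explicitly:

    # Sketch of recording and pinning package versions in R (remotes and renv are
    # one option among several; Docker images or conda environments would also work).
    sessionInfo()                                        # record the R and package versions used

    install.packages("remotes")
    remotes::install_version("sf", version = "1.0-9")    # pin a specific release (example version)

    install.packages("renv")
    renv::init()         # create a project-local package library
    renv::snapshot()     # write the exact versions to renv.lock
    # collaborators later run renv::restore() to recreate the same library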

Lastly, for more advanced users, it is also recommended to use a collaborative platform that executes every result from the code with the exact versions used. One such platform is Codeocean, which the Nature research team has recently chosen for peer-reviewing code (Perkel, 2019). The Nature editors and peer reviewers strongly believe that coding has become the norm across many disciplines, and have therefore asserted that the modelling process, including the quality of the data, conciseness, reproducibility, and documentation of the model, should be treated as a requirement. Although the training can be difficult at first, it will lead researchers to conduct their work with more responsibility.

Looking for Opinions

With the advent of the era of big data and data science, where people collaborate online and the ‘sharing is caring’ atmosphere has become a norm (Arribas-Bel et al., 2021; Lovelace, 2021), I insist that open research should no longer be optional. However, one may argue that although open research is by far an excellent model that can benefit many of today’s projects, there are certain risks that might concern ECRs, such as intellectual property issues, code quality, and technical security. Thus, if you have different opinions on this issue, or simply wish to add your experiences from your PhD in social simulation, please add your thoughts via a thread.

Notes

[i] http://ccl.northwestern.edu/netlogo/docs/faq.html#how-big-can-my-model-be-how-many-turtles-patches-procedures-buttons-and-so-on-can-my-model-contain

[ii] https://ccl.northwestern.edu/netlogo/bind/

[iii] Tasks can be expressions, such as x + y, or lists such as [1 2 3 4 5]

References

Arribas-Bel, D., Alvanides, S., Batty, M., Crooks, A., See, L., & Wolf, L. (2021). Urban data/code: A new EP-B section. Environment and Planning B: Urban Analytics and City Science, 23998083211059670. https://doi.org/10.1177/23998083211059670

Lovelace, R. (2021). Open source tools for geographic analysis in transport planning. Journal of Geographical Systems, 23(4), 547–578. https://doi.org/10.1007/s10109-020-00342-2

Perkel, J. M. (2019). Make code accessible with these cloud services. Nature, 575(7781), 247. https://doi.org/10.1038/d41586-019-03366-x

Salecker, J., Sciaini, M., Meyer, K. M., & Wiegand, K. (2019). The nlrx r package: A next-generation framework for reproducible NetLogo model analyses. Methods in Ecology and Evolution, 10(11), 1854–1863. https://doi.org/10.1111/2041-210X.13286

Shin, H. (2021). Assessing Health Vulnerability to Air Pollution in Seoul Using an Agent-Based Simulation. University of Cambridge. https://doi.org/10.17863/CAM.65615

Wilensky, U. (1999). Netlogo. Northwestern University: Evanston, IL, USA. https://ccl.northwestern.edu/netlogo/


Shin, H. (2021) Benefits of Open Research in Social Simulation: An Early-Career Researcher’s Perspective. Review of Artificial Societies and Social Simulation, 24th Nov 2021. https://rofasss.org/2021/11/23/benefits-open-research/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

The Systematic Comparison of Agent-Based Policy Models – It’s time we got our act together!

By Mike Bithell and Bruce Edmonds

Model Intercomparison

The recent Covid crisis has led to a surge of new model development and a renewed interest in the use of models as policy tools. While this is in some senses welcome, the sudden appearance of many new models presents a problem in terms of their assessment, the appropriateness of their application and reconciling any differences in outcome. Even if they appear similar, their underlying assumptions may differ, their initial data might not be the same, policy options may be applied in different ways, stochastic effects explored to a varying extent, and model outputs presented in any number of different forms. As a result, it can be unclear what aspects of variations in output between models are results of mechanistic, parameter or data differences. Any comparison between models is made tricky by differences in experimental design and selection of output measures.

If we wish to do better, we suggest that a more formal approach to making comparisons between models would be helpful. However, it appears that this is not commonly undertaken in most fields in a systematic and persistent way, except for the field of climate change and closely related fields such as pollution transport or economic impact modelling (although efforts are underway to extend such systematic comparison to ecosystem models – Wei et al., 2014; Tittensor et al., 2018). Examining the way in which this is done for climate models may therefore prove instructive.

Model Intercomparison Projects (MIP) in the Climate Community

Formal intercomparison of atmospheric models goes back at least to 1989 (Gates et al., 1999) with the first Atmospheric Model Intercomparison Project (AMIP), initiated by the World Climate Research Programme. By 1999 this had contributions from all significant atmospheric modelling groups, providing standardised time-series of over 30 model variables for one particular historical decade of simulation, with a standard experimental setup. Comparisons of model mean values with available data helped to reveal overall model strengths and weaknesses: no single model was best at simulating all aspects of the atmosphere, with accuracy varying greatly between simulations. The model outputs also formed a reference base for further intercomparison experiments, including targets for model improvement and reduction of systematic errors, as well as a starting point for improved experimental design, software and data management standards, and protocols for communication and model intercomparison. This led to AMIP II and, subsequently, to a series of Coupled Model Intercomparison Projects (CMIP) beginning with CMIP I in 1996. The latest iteration (CMIP6) is a collection of 23 separate model intercomparison experiments covering atmosphere, ocean, land surface, geo-engineering, and the paleoclimate. This collection is aimed at the upcoming 2021 IPCC process (AR6). Participating projects go through an endorsement process for inclusion (a process agreed with modelling groups), based on 10 criteria designed to ensure some degree of coherence between the various models; a further 18 MIPs are also listed as currently active (https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip6). Groups contribute to a central set of common experiments covering the period 1850 to the near-present. An overview of the whole process can be found in Eyring et al. (2016).

The current structure includes a set of three overarching questions covering the dynamics of the earth system, systematic model biases, and understanding possible future change under uncertainty. Individual MIPs may build on this to address one or more of a set of 7 “grand science challenges” associated with the climate. Modelling groups agree to provide outputs in a standard form, obtained from a specified set of experiments under the same design, and to provide standardised documentation to go with their models. Originally (up to CMIP5), outputs were added to a central public repository for further analysis; however, the output grew so large under CMIP6 that the data is now held dispersed over repositories maintained by separate groups.

Other Examples

Two further, more recent examples of collective model development may also be helpful to consider.

Firstly, an informal network collating models across more than 50 research groups has already been generated as a result of the COVID crisis: the Covid Forecast Hub (https://covid19forecasthub.org). This is run by a small number of research groups collaborating with the US Centers for Disease Control and is strongly focussed on the epidemiology. Participants are encouraged to submit weekly forecasts, and these are integrated into a data repository and can be visualized on the website – viewers can look at forward projections, along with associated confidence intervals and model evaluation scores, including those for an ensemble of all models. The focus on forecasts in this case arises out of the strong policy drivers for the current crisis, but the main point is that it is possible to immediately view measures of model performance and to compare the different model types: one clear message that rapidly becomes apparent is that many of the forward projections have 95% (and at times, even 50%) confidence intervals for incident deaths that more than span the full range of the past historic data. The benefit of comparing many different models in this case is apparent, as many of the historic single-model projections diverge strongly from the data (and the models most in error are not consistently the same ones over time), although the ensemble mean tends to be better.

As a second example, one could consider the Psychological Science Accelerator (PSA: Moshontz et al 2018, https://psysciacc.org/). This is a collaborative network set up with the aim of addressing the “replication crisis” in psychology: many previously published results in psychology have proved problematic to replicate as a result of small or non-representative sampling or use of experimental designs that do not generalize well or have not been used consistently either within or across studies. The PSA seeks to ensure accumulation of reliable and generalizable evidence in psychological science, based on principles of inclusion, decentralization, openness, transparency and rigour. The existence of this network has, for example, enabled the reinvestigation of previous  experiments but with much larger and less nationally biased samples (e.g. Jones et al 2021).

The Benefits of the Intercomparison Exercises and Collaborative Model Building

More specifically, long-term intercomparison projects help to do the following.

  • Build on past effort. Rather than modellers re-inventing the wheel (or building a new framework) with each new model project, libraries of well-tested and documented models, with data archives, including code and experimental design, would allow researchers to more efficiently work on new problems, building on previous coding effort
  • Aid replication. Focussed long term intercomparison projects centred on model results with consistent standardised data formats would allow new versions of code to be quickly tested against historical archives to check whether expected results could be recovered and where differences might arise, particularly if different modelling languages were being used
  • Help to formalize. While informal code archives can help to illustrate the methods or theoretical foundations of a model, intercomparison projects help to understand which kinds of formal model might be good for particular applications, and which can be expected to produce helpful results for given desired output measures
  • Build credibility. A continuously updated set of model implementations and assessment of their areas of competence and lack thereof (as compared with available datasets) would help to demonstrate the usefulness (or otherwise) of ABM as a way to represent social systems
  • Influence Policy (where appropriate). Formal international policy organisations such as the IPCC or the more recently formed IPBES are effective partly through an underpinning of well tested and consistently updated models. As yet it is difficult to see whether such a body would be appropriate or effective for social systems, as we lack the background of demonstrable accumulated and well tested model results.

Lessons for ABM?

What might we be able to learn from the above, if we attempted to use a similar process to compare ABM policy models?

In the first place, the projects started small and grew over time: it would not be necessary, for example, to cover all possible ABM applications at the outset. On the other hand, the latest CMIP iterations include a wide range of different types of model covering many different aspects of the earth system, so that the breadth of possible model types need not be seen as a barrier.

Secondly, the climate inter-comparison project has been persistent for some 30 years – over this time many models have come and gone, but the history of inter-comparisons allows for an overview of how well these models have performed over time – data from the original AMIP I models is still available on request, supporting assessments concerning  long-term model improvement.

Thirdly, although climate models are complex – implementing a variety of different mechanisms in different ways – they can still be compared by use of standardised outputs, and at least some (although not necessarily all) have been capable of direct comparison with empirical data.

Finally, an agreed experimental design and public archive for documentation and output that is stable over time is needed; this needs to be done via a collective agreement among the modelling groups involved so as to ensure a long-term buy-in from the community as a whole, so that there is a consistent basis for long-term model development, building on past experience.

The need for aligning or reproducing ABMs has long been recognised within the community (Axtell et al. 1996; Edmonds & Hales 2003), but on a one-to-one basis for verifying the specification of models against their implementation, although Hales et al. (2003) discuss a range of possibilities. However, this is far from a situation where many different models of basically the same phenomena are systematically compared – that would be a larger-scale collaboration lasting over a longer time span.

The community has already established a standardised form of documentation in the ODD protocol. Sharing of model code is also becoming routine, and can be easily achieved through COMSES, Github or similar. The sharing of data in a long-term archive may require more investigation. As a starting project COVID-19 provides an ideal opportunity for setting up such a model inter-comparison project – multiple groups already have running examples, and a shared set of outputs and experiments should be straightforward to agree on. This would potentially form a basis for forward looking experiments designed to assist with possible future pandemic problems, and a basis on which to build further features into the existing disease-focussed modelling, such as the effects of economic, social and psychological issues.

Additional Challenges for ABMs of Social Phenomena

Nobody supposes that modelling social phenomena is going to have the same set of challenges that climate change models face. Some of the differences include:

  • The availability of good data. Social science is bedevilled by a paucity of the right kind of data. Although an increasing amount of relevant data is being produced, there are commercial, ethical and data protection barriers to accessing it and the data rarely concerns the same set of actors or events.
  • The understanding of micro-level behaviour. Whilst the micro-level understanding of our atmosphere is very well established, that of the behaviour of the most important actors (humans) is not. However, it may be that better data might partially substitute for a generic behavioural model of decision-making.
  • Agreement upon the goals of modelling. Although there will always be considerable variation in terms of what is wanted from a model of any particular social phenomena, a common core of agreed objectives will help focus any comparison and give confidence via ensembles of projections. Although the MIPs and Covid Forecast Hub are focussed on prediction, it may be that empirical explanation may be more important in other areas.
  • The available resources. ABM projects tend to be add-ons to larger endeavours and based around short-term grant funding. The funding for big ABM projects is yet to be established, not having the equivalent of weather forecasting to piggy-back on.
  • Persistence of modelling teams/projects. ABM tends to be quite short-term with each project developing a new model for a new project. This has made it hard to keep good modelling teams together.
  • Deep uncertainty. Whilst the set of possible factors and processes involved in a climate change model are well established, which social mechanisms need to be involved in any model of any particular social phenomena is unknown. For this reason, there is deep disagreement about the assumptions to be made in such models, as well as sharp divergence in outcome due to changes brought about by a particular mechanism but not included in a model. Whilst uncertainty in known mechanisms can be quantified, assessing the impact of those due to such deep uncertainty is much harder.
  • The sensitivity of the political context. Even in the case of Climate Change, where the assumptions made are relatively well understood and based on objective grounds, the modelling exercise and its outcomes can be politically contested. In other areas, where the representation of people’s behaviour might be key to model outcomes, this will need even more care (Aodha & Edmonds 2017).

However, some of these problems were solved in the case of Climate Change as a result of the CMIP exercises and the reports they ultimately resulted in. Over time the development of the models also allowed for a broadening and updating of modelling goals, starting from a relatively narrow initial set of experiments. Ensuring the persistence of individual modelling teams is easier in the context of an internationally recognised comparison project, because resources may be easier to obtain and there is a consistent central focus. The modelling projects became longer-term as individual researchers could establish a career doing just climate change modelling and the importance of the work was increasingly recognised. An ABM comparison project might help solve some of these problems as the importance of its work becomes established.

Towards an Initial Proposal

The topic chosen for this project should be something where (a) there is enough public interest to justify the effort, and (b) a number of models with a similar purpose in mind are being developed. At the current stage, this suggests dynamic models of COVID spread, but there are other possibilities, including transport models (where people go and who they meet) or criminological models (where and when crimes happen).

Whichever ensemble of models is focussed upon, these models should be compared against a common standard, with the same (a sketch of one possible shared output format follows the list):

  • Start and end dates (but not necessarily the same temporal granularity)
  • Covering the same set of regions or cases
  • Using the same population data (though possibly enhanced with extra data and maybe scaled population sizes)
  • With the same initial conditions in terms of the population
  • Outputting a core of agreed measures (but maybe others as well)
  • Checked for their agreement on a core set of cases, with agreed data sets
  • Reported on in a standard format (though with a discussion section for further/other observations)
  • Well documented and with code that is open access
  • Run a minimum number of times with different random seeds
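
Purely as a hypothetical illustration (none of the column names below is an agreed standard), the agreed core of outputs could be as simple as one table row per model, region, date and random seed, expressed here as a small R data frame:

    # Hypothetical illustration of a shared output format for an intercomparison:
    # one row per model, region, date and random seed, with a small core of agreed measures.
    standard_output <- data.frame(
      model           = c("modelA", "modelA", "modelB"),
      region          = c("R1", "R1", "R1"),
      date            = as.Date(c("2020-03-01", "2020-03-08", "2020-03-01")),
      seed            = c(1L, 1L, 1L),
      incident_cases  = c(120, 340, 150),    # illustrative values only
      incident_deaths = c(2, 9, 3)
    )
    write.csv(standard_output, "standard_output.csv", row.names = FALSE)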

Any modeller/team that had a suitable model and was willing to adhere to the rules would be welcome to participate (commercial, government or academic), and these teams would collectively decide the rules, guide development, and write any reports on the comparisons. Other interested stakeholder groups could be involved, including professional/academic associations, NGOs and government departments, but in a consultative role providing wider critique – it is important that the terms and reports from the exercise be independent of any particular interest or authority.

Conclusion

We call upon those who think ABMs have the potential to usefully inform policy decisions to work together, in order that the transparency and rigour of our modelling matches our ambition. Whilst model comparison exercises of the kind described are important for any simulation work, particular care needs to be taken when the outcomes can affect people’s lives.

References

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822. (A version is at http://cfpm.org/discussionpapers/236)

Axtell, R., Axelrod, R., Epstein, J. M., & Cohen, M. D. (1996). Aligning simulation models: A case study and results. Computational & Mathematical Organization Theory, 1(2), 123-141. https://link.springer.com/article/10.1007%2FBF01299065

Edmonds, B., & Hales, D. (2003). Replication, replication and replication: Some hard lessons from model alignment. Journal of Artificial Societies and Social Simulation, 6(4), 11. http://jasss.soc.surrey.ac.uk/6/4/11.html

Eyring, V., Bony, S., Meehl, G. A., Senior, C. A., Stevens, B., Stouffer, R. J., & Taylor, K. E. (2016). Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geoscientific Model Development, 9(5), 1937–1958. https://doi.org/10.5194/gmd-9-1937-2016

Gates, W. L., Boyle, J. S., Covey, C., Dease, C. G., Doutriaux, C. M., Drach, R. S., Fiorino, M., Gleckler, P. J., Hnilo, J. J., Marlais, S. M., Phillips, T. J., Potter, G. L., Santer, B. D., Sperber, K. R., Taylor, K. E., & Williams, D. N. (1999). An Overview of the Results of the Atmospheric Model Intercomparison Project (AMIP I). In Bulletin of the American Meteorological Society (Vol. 80, Issue 1, pp. 29–55). American Meteorological Society. https://doi.org/10.1175/1520-0477(1999)080<0029:AOOTRO>2.0.CO;2

Hales, D., Rouchier, J., & Edmonds, B. (2003). Model-to-model analysis. Journal of Artificial Societies and Social Simulation, 6(4), 5. http://jasss.soc.surrey.ac.uk/6/4/5.html

Jones, B.C., DeBruine, L.M., Flake, J.K. et al. To which world regions does the valence–dominance model of social perception apply?. Nat Hum Behav 5, 159–169 (2021). https://doi.org/10.1038/s41562-020-01007-2

Moshontz, H. + 85 others (2018) The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network. Advances in Methods and Practices in Psychological Science, 1(4), 501-515. https://doi.org/10.1177/2515245918797607

Tittensor, D. P., Eddy, T. D., Lotze, H. K., Galbraith, E. D., Cheung, W., Barange, M., Blanchard, J. L., Bopp, L., Bryndum-Buchholz, A., Büchner, M., Bulman, C., Carozza, D. A., Christensen, V., Coll, M., Dunne, J. P., Fernandes, J. A., Fulton, E. A., Hobday, A. J., Huber, V., … Walker, N. D. (2018). A protocol for the intercomparison of marine fishery and ecosystem models: Fish-MIP v1.0. Geoscientific Model Development, 11(4), 1421–1442. https://doi.org/10.5194/gmd-11-1421-2018

Wei, Y., Liu, S., Huntzinger, D. N., Michalak, A. M., Viovy, N., Post, W. M., Schwalm, C. R., Schaefer, K., Jacobson, A. R., Lu, C., Tian, H., Ricciuto, D. M., Cook, R. B., Mao, J., & Shi, X. (2014). The north american carbon program multi-scale synthesis and terrestrial model intercomparison project – Part 2: Environmental driver data. Geoscientific Model Development, 7(6), 2875–2893. https://doi.org/10.5194/gmd-7-2875-2014


Bithell, M. and Edmonds, B. (2021) The Systematic Comparison of Agent-Based Policy Models - It’s time we got our act together!. Review of Artificial Societies and Social Simulation, 11th May 2021. https://rofasss.org/2021/05/11/SystComp/


 

What more is needed for Democratically Accountable Modelling?

By Bruce Edmonds

(A contribution to the: JASSS-Covid19-Thread)

In the context of the Covid19 outbreak, the (Squazzoni et al 2020) paper argued for the importance of making complex simulation models open (a call reiterated in Barton et al 2020) and that relevant data needs to be made available to modellers. These are important steps but, I argue, more is needed.

The Central Dilemma

The crux of the dilemma is as follows. Complex and urgent situations (such as the Covid19 pandemic) are beyond the human mind to encompass – there are just too many possible interactions and complexities. For this reason one needs complex models, to leverage some understanding of the situation as a guide for what to do. We can not directly understand the situation, but we can understand some of what a complex model tells us about the situation. The difficulty is that such models are, themselves, complex and difficult to understand. It is easy to deceive oneself using such a model. Professional modellers only just manage to get some understanding of such models (and then, usually, only with help and critique from many other modellers and having worked on it for some time: Edmonds 2020) – politicians and the public have no chance of doing so. Given this situation, any decision-makers or policy actors are in an invidious position – whether to trust what the expert modellers say if it contradicts their own judgement. They will be criticised either way if, in hindsight, that decision appears to have been wrong. Even if the advice supports their judgement there is the danger of giving false confidence.

What options does such a policy maker have? In authoritarian or secretive states there is no problem (for the policy makers) – they can listen to who they like (hiring or firing advisers until they get advice they are satisfied with), and then either claim credit if it turned out to be right or blame the advisers if it was not. However, such decisions are very often not value-free technocratic decisions, but ones that involve complex trade-offs that affect people’s lives. In these cases the democratic process is important for getting good (or at least accountable) decisions. However, democratic debate and scientific rigour often do not mix well [note 1].

A Cautionary Tale

As discussed in Aodha & Edmonds (2019), scientific modelling can make things worse, as in the case of the North Atlantic Cod Fisheries Collapse. In this case, the modellers became enmeshed within the standards and wishes of those managing the situation and ended up confirming their wishful thinking. Technocratising the decision-making about how much it was safe to catch had the effect of narrowing down the debate to particular measurement and modelling processes (which turned out to be gravely mistaken). In doing so the modellers contributed to the collapse of the industry, with severe social and ecological consequences.

What to do?

How best to interface between scientific and policy processes is not clear; however, some directions are becoming apparent.

  • That the process of developing and giving advice to policy actors should become more transparent, including who is giving advice and on what basis. In particular, any reservations or caveats that the experts add should be open to scrutiny so the line between advice (by the experts) and decision-making (by the politicians) is clearer.
  • That such experts are careful not to over-state or hype their own results, for example by implying that their model can predict (or forecast) the future of complex situations and so anticipate the effects of policy before implementation (de Matos Fernandes and Keijzer 2020). Often a reliable assessment of results only occurs after a period of academic scrutiny and debate.
  • Policy actors need to learn a little bit about modelling, in particular when and how modelling can be reliably used. This is discussed in (Government Office for Science 2018, Calder et al. 2018) which also includes a very useful checklist for policy actors who deal with modellers.
  • That the public learn some maturity about the uncertainties in scientific debate and conclusions. Preliminary results and critiques tend to be jumped on too early to support one side within a polarised debate, or models are rejected simply on the grounds that they are not 100% certain. We need to collectively develop ways of facing and living with uncertainty.
  • That the decision-making process is kept as open to input as possible. The modelling (and its limitations) should not be used as an excuse to limit the voices that are heard, or to narrow the debate to a purely technical one that excludes values (Aodha & Edmonds 2017).
  • That public funding bodies and journals should insist on researchers making their full code and documentation available to others for scrutiny, checking and further development (readers can help by signing the Open Modelling Foundation’s open letter and the campaign for Democratically Accountable Modelling’s manifesto).

Some Relevant Resources

  • CoMSeS.net — a collection of resources for computational model-based science, including a platform for publicly sharing simulation model code and documentation and forums for discussion of relevant issues (including one for covid19 models)
  • The Open Modelling Foundation — an international open science community that works to enable the next generation modelling of human and natural systems, including its standards and methodology.
  • The European Social Simulation Association — which is planning to launch some initiatives to encourage better modelling standards and facilitate access to data.
  • The Campaign for Democratic Modelling — which campaigns concerning the issues described in this article.

Notes

note1: As an example of this see accounts of the relationship between the UK scientific advisory committees and the Government in the Financial Times and BuzzFeed.

References

Barton et al. (2020) Call for transparency of COVID-19 models. Science, Vol. 368(6490), 482-483. doi:10.1126/science.abb8637

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822. (see also http://cfpm.org/discussionpapers/236)

Calder, M., Craig, C., Culley, D., de Cani, R., Donnelly, C.A., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N., Hargrove, C., Hinds, D., Lane, D.C., Mitchell, D., Pavey, G., Robertson, D., Rosewell, B., Sherwin, S., Walport, M. & Wilson, A. (2018) Computational modelling for decision-making: where, why, what, who and how. Royal Society Open Science, 5(6), 172096.

Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/

de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Government Office for Science (2018) Computational Modelling: Technological Futures. https://www.gov.uk/government/publications/computational-modelling-blackett-review

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Good Modelling Takes a Lot of Time and Many Eyes

By Bruce Edmonds

(A contribution to the: JASSS-Covid19-Thread)

It is natural to want to help in a crisis (Squazzoni et al. 2020), but it is important to do something that is actually useful rather than just ‘adding to the noise’. Usefully modelling disease spread within complex societies is not easy to do – which essentially means there are two options:

  1. Model it in a fairly abstract manner to explore ideas and mechanisms, but without the empirical grounding and validation needed to reliably support policy making.
  2. Model it in an empirically testable manner with a view to answering some specific questions and possibly informing policy in a useful way.

Which route one takes depends on the modelling purpose one has in mind (Edmonds et al. 2019). Both routes are legitimate as long as one is clear as to what each can and cannot do. The dangers come when there is confusion – taking the first route whilst giving policy actors the impression one is doing the second risks deceiving people and giving false confidence (Edmonds & Aodha 2019, Elsenbroich & Badham 2020). Here I am only discussing the second, empirically ambitious route.

Some of the questions that policy-makers might want to ask include: what might happen if we close the urban parks, allow children within a specific age range to go to school one day a week, cancel 75% of the intercity trains, allow people to go to beauty spots, allow people to visit sick relatives in hospital, or test people as they recover and give them a certificate allowing them to go back to work?

To understand what might happen in these scenarios would require an agent-based model in which agents make the kind of mundane, everyday decisions about where to go and whom to meet, such that the patterns and outputs of the model are consistent with known data (possibly following the ‘Pattern-Oriented Modelling’ of Grimm & Railsback 2012); a toy sketch of this kind of structure is given at the end of this section. Such a model is currently lacking. However, building it would require:

  1. A long-term, iterative development (Bithell 2018), with many cycles of model development followed by empirical comparison and data collection. This means that this kind of model might be more useful for the next epidemic than for the current one.
  2. A collective approach rather than one based on individual modellers. In any very complex model it is impossible for one person to understand it all – there are bound to be small errors, and programmed mechanisms will subtly interact with others. As Siebers & Venkatesan (2020) pointed out, this means collaborating with people from other disciplines (which always takes time to make work), but it also means an open approach in which lots of modellers routinely inspect, replicate, pull apart, critique and play with other modellers’ work – without anyone getting upset or feeling criticised. This does involve an institutional and normative embedding of good modelling practice (as discussed in Squazzoni et al. 2020), but it also requires a change in attitude – from individual to collective achievement.

Both are necessary if we are to build the modelling infrastructure that may allow us to model policy options for the next epidemic. We will need to start now if we are to be ready because it will not be easy.
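
To make this a little more concrete, here is a deliberately minimal toy sketch in Python – purely illustrative, not the kind of validated model argued for above and not anyone’s published model. Agents make a mundane daily choice between home, a workplace and a park; infection can pass between agents sharing a location; a single crude policy lever (whether the park is open) can be toggled; and the daily counts of susceptible, infected and recovered agents form the sort of aggregate pattern that pattern-oriented modelling would compare against data. All names and parameter values (P_TRANSMIT, RECOVERY_DAYS, PARKS_OPEN, and so on) are placeholders chosen for illustration.

# A toy illustration only (not anyone's published model): agents choose a
# location each day, infection can pass between agents sharing a location,
# and the daily state counts form a crude aggregate pattern that could, in a
# real model, be compared against empirical data. All parameters are placeholders.
import random
from collections import Counter

random.seed(1)

N_AGENTS = 500
P_TRANSMIT = 0.03      # per-contact, per-day transmission probability (placeholder)
RECOVERY_DAYS = 10     # days until an infected agent recovers (placeholder)
PARKS_OPEN = True      # a crude policy lever of the kind discussed above

class Agent:
    def __init__(self, idx):
        self.idx = idx
        self.home = idx // 4            # roughly 4 agents per household
        self.state = "S"                # S (susceptible), I (infected) or R (recovered)
        self.days_infected = 0

    def choose_location(self):
        # Mundane daily decision: sometimes the park (if open), usually work, else home.
        if PARKS_OPEN and random.random() < 0.2:
            return ("park", 0)                  # a single shared park
        if random.random() < 0.7:
            return ("work", self.idx % 25)      # roughly 25 workplaces
        return ("home", self.home)

agents = [Agent(i) for i in range(N_AGENTS)]
for a in random.sample(agents, 5):
    a.state = "I"                               # seed a few initial infections

for day in range(120):
    # 1. Everyone picks a location for the day.
    occupancy = {}
    for a in agents:
        occupancy.setdefault(a.choose_location(), []).append(a)
    # 2. Transmission among agents who share a location.
    for group in occupancy.values():
        infectious = [a for a in group if a.state == "I"]
        for a in group:
            if a.state == "S" and any(random.random() < P_TRANSMIT for _ in infectious):
                a.state = "E"                   # exposed today; infectious from tomorrow
    # 3. Disease progression and recovery.
    for a in agents:
        if a.state == "E":
            a.state = "I"
        elif a.state == "I":
            a.days_infected += 1
            if a.days_infected >= RECOVERY_DAYS:
                a.state = "R"
    counts = Counter(a.state for a in agents)
    print(day, counts["S"], counts["I"], counts["R"])

Even this toy makes the scale of the real task clear: empirically grounding the behavioural rules, the contact structures and the disease parameters – and validating the resulting patterns against data – is exactly the long-term, collective effort described above.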

References

Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb

Edmonds, B. & Aodha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidsson, P. & Verhagen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H., & Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3), 6. <http://jasss.soc.surrey.ac.uk/22/3/6.html> doi: 10.18564/jasss.3993

Elsenbroich, C. and Badham, J. (2020) Focussing on our Strengths. Review of Artificial Societies and Social Simulation, 12th April 2020. https://rofasss.org/2020/04/12/focussing-on-our-strengths/

Grimm, V., & Railsback, S. F. (2012). Pattern-oriented modelling: a ‘multi-scope’ for predictive systems ecology. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1586), 298-310. doi:10.1098/rstb.2011.0180

Siebers, P-O. and Venkatesan, S. (2020) Get out of your silos and work together. Review of Artificial Societies and Social Simulation, 8th April 2020. https://rofasss.org/2020/04/08/get-out-of-your-silos

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM

By Sebastian Achter, Melania Borit, Edmund Chattoe-Brown, Christiane Palaretti and Peer-Olaf Siebers

The initiative presented below arose from a Lorentz Center workshop on Integrating Qualitative and Quantitative Evidence using Social Simulation (8-12 April 2019, Leiden, the Netherlands). At the beginning of this workshop, the attendees divided themselves into teams aiming to work on specific challenges within the broad domain of the workshop topic. Our team took up the challenge of looking at “Rigour, Transparency, and Reuse”. The aim that emerged from our initial discussions was to create a framework for augmenting rigour and transparency (RAT) of data use in ABM when designing, analysing, and publishing such models.

One element of the framework that the group worked on was a roadmap of the modelling process in ABM, with particular reference to the use of different kinds of data. This roadmap was used to generate the second element of the framework: a protocol consisting of a set of questions which, if answered by the modeller, would ensure that the published model was as rigorous and transparent in terms of data use as it needs to be for the reader to understand and reproduce it.

The group (whose members used diverse modelling approaches and spanned a number of disciplines) recognised the challenges of this approach, and much of the week was spent examining cases and defining terms so that the approach did not assume one particular kind of theory, one particular aim of modelling, and so on. To this end, we intend that the framework should be thoroughly tested against real research to ensure its general applicability and ease of use.

The team was also very keen not to “reinvent the wheel”, but to try to develop the RAT approach (in connection with data use) to augment and “join up” existing protocols or documentation standards for specific parts of the modelling process. For example, the ODD protocol (Grimm et al. 2010) and its variants are generally accepted as the established way of documenting ABM, but they do not request rigorous documentation or justification of the data used in the modelling process.
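
As a purely illustrative sketch – emphatically not the RAT protocol itself, which is still being developed – the kind of data-use record the team has in mind might look something like the following in machine-readable form, with one entry per dataset recording where it came from, how it was transformed and which part of the modelling process it informs. All field names and the example entry are hypothetical.

# Hypothetical and illustrative only: a minimal machine-readable record of data
# use in an ABM, of the general kind a RAT-style framework might standardise.
# None of these field names come from the team's actual protocol.
from dataclasses import dataclass, asdict
import json

@dataclass
class DataUseRecord:
    dataset: str          # name or citation of the data source
    provenance: str       # where and how the data were obtained
    transformations: str  # cleaning, aggregation or coding applied before use
    model_role: str       # e.g. initialisation, calibration or validation
    justification: str    # why this dataset is appropriate for that role

records = [
    DataUseRecord(
        dataset="National time-use survey (hypothetical example)",
        provenance="Downloaded from the national statistics office portal",
        transformations="Aggregated to hourly activity shares by age band",
        model_role="Initialisation of agents' daily schedules",
        justification="The only nationally representative source of daily activities",
    ),
]

# Such records could be published alongside a model's ODD documentation.
print(json.dumps([asdict(r) for r in records], indent=2))

Publishing records of this general kind alongside a model’s ODD documentation would be one way of “joining up” data-use reporting with the documentation standards that already exist.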

The plan to move forward with the development of the framework is organised around three journal articles and associated dissemination activities:

  • A literature review of best (data use) documentation and practice in other disciplines and research methods (e.g. PRISMA – Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
  • A literature review of available documentation tools in ABM (e.g. ODD and its variants, DOE, the “Info” pane of NetLogo, EABSS)
  • An initial statement of the goals of RAT, the roadmap, the protocol and the process of testing these resources for usability and effectiveness
  • A presentation, poster, and round table at SSC 2019 (Mainz)

We would appreciate: suggestions for items that should be included in the literature reviews; “beta testers” and critical readers for the roadmap and protocol (from as many disciplines and modelling approaches as possible); reactions (whether positive or negative) to the initiative itself (including joining it!); and participation in the various activities we plan at Mainz. If you are interested in any of these roles, please email Melania Borit (melania.borit@uit.no).

Acknowledgements

Chattoe-Brown’s contribution to this research is funded by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1) funded by ESRC via ORA Round 5 (PI: Professor Bruce Edmonds, Centre for Policy Modelling, Manchester Metropolitan University: https://gtr.ukri.org/projects?ref=ES%2FS015159%2F1).

References

Grimm, V., Berger, U., DeAngelis, D. L., Polhill, J. G., Giske, J. and Railsback, S. F. (2010) ‘The ODD Protocol: A Review and First Update’, Ecological Modelling, 221(23):2760–2768. doi:10.1016/j.ecolmodel.2010.08.019


Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. and Siebers, P.-O. (2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)