
Make some noise! Why agent-based modelers should embrace the power of randomness

By Peter Steiglechner1, Marijn Keijzer2

1 Complexity Science Hub, Austria; steiglechner@csh.ac.at
2 Institute for Advanced Study in Toulouse, France

Abstract

‘Noisy’ behavior, belief updating, or decision-making is universally observed, yet it is typically treated superficially, or not accounted for at all, by social simulation modelers. Here, we show how noise can affect model dynamics and outcomes, argue why injecting noise should become a central part of model analysis, and show how it can improve our understanding of (our mathematical models for) social behavior. We formulate some general lessons from the literature on noise and illustrate how considering a more complete taxonomy of different types of noise may lead to novel insights.

‘Flooding the zone’ with noise

In his inaugural address in January 2025, US president Trump announced that he would “tariff and tax foreign countries to enrich [US] citizens”. Since then, Trump has flooded the world news with a back-and-forth of threatening, announcing, and introducing tariffs, only to pause, halt, or even revoke them within a matter of days. Trump’s statements on tariffs are just one (albeit rather extreme) example of how noisy and ambiguous political signaling can be. Ambiguity in politics can be strategic (Page, 1976), but it can also simply result from a failure to accurately describe one’s position. Most of us are probably familiar with examples of noise in our own personal lives as well—we may wholeheartedly support one thing, and take a skeptical stance in the next discussion. People have always been inherently noisy (Vul & Pashler, 2008; Kahneman, Sibony, & Sunstein, 2021). But the pervasiveness of noise has become particularly evident in recent years, as social media have made it easier to frequently signal our political opinions (e.g. through like-buttons) and to track the noisy or inconsistent behaviors of others.

Noise can flip model dynamics

As quantitative scientists, many of us are unaware of how important noise can be. Conventional statistical models used in the social sciences typically assume noise away: unexplained variance in simple regression models, provided it is not too abnormally distributed, should not affect the validity of the results; so why should we care? Social simulation models play by different rules. With strict behavioral rules operating at the micro level and strong interdependence between agents, noise at the individual level plays a pivotal role in shaping collective outcomes. The importance of noise contrasts with the fact that many models still assume that individual-level properties and actions are fully deterministic, consistent, accurate, and certain.

For example, opinion dynamics models like the Bounded Confidence model (Deffuant et al., 2000; Hegselmann & Krause, 2002) and the Dissemination of Culture model (Axelrod, 1997) both illustrate how global diversity (or ‘polarization’) emerges from local homogeneity (or ‘consensus’). But this core principle is highly dependent on the absence of noise! The persistence of different cultural areas completely collapses under even the smallest probability of differentiation (Klemm et al., 2003; Flache & Macy, 2011), and fragmentation and polarization become unlikely when agents sometimes independently change their opinions (Pineda, Toral, & Hernández-García, 2011). Similarly, adding noise to the topology of connections can drastically change the dynamics of diffusion and contagion (Centola & Macy, 2007). In computational, agent-based models of social systems, noise does not necessarily cancel out. Many social processes are complex, path-dependent, computationally irreducible, and highly non-linear. As such, noise can trigger cascades of ‘errors’ that lead to statistically and qualitatively different behaviors (Macy & Tsvetkova, 2015).
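The sensitivity described above is easy to see in a toy implementation. The following Python sketch is a minimal Deffuant-style bounded confidence model with an optional ‘free will’ noise term (an agent occasionally resetting its opinion at random). It is an illustration only, not the calibrated model from any of the cited papers, and the parameter names and the cluster-counting helper are our own.

```python
import random

def bounded_confidence(n=100, eps=0.2, mu=0.5, noise=0.0, steps=20000, seed=1):
    """Minimal Deffuant-style bounded confidence model (illustrative sketch).

    n: number of agents; eps: confidence bound; mu: convergence rate;
    noise: per-step probability that one random agent independently
    resets its opinion to a uniform random value.
    """
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]          # opinions in [0, 1]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)            # random dyadic encounter
        if abs(x[i] - x[j]) < eps:                # influence only when close
            d = x[j] - x[i]
            x[i] += mu * d
            x[j] -= mu * d
        if noise > 0 and rng.random() < noise:    # exogenous 'free will' noise
            x[rng.randrange(n)] = rng.random()
    return x

def n_clusters(x, tol=0.05):
    """Count opinion clusters as gaps larger than tol in the sorted opinions."""
    xs = sorted(x)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > tol)
```

With `noise=0` and a small `eps` the population typically freezes into several internally coherent clusters; with `noise > 0` those clusters keep being perturbed, which is exactly the regime where the noise-induced transitions discussed above can occur.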

What is noise? What is it not?

There is no single way to introduce noise, or to identify and define a source of noise. Noise comes in different shapes and forms. When introducing noise into a model of social phenomena, there are some important lessons to consider:

  1. Noise is not just unpredictable randomness. Instead, noise often represents uncertainty (Macy & Tsvetkova, 2015), which can mean a lack of precision in measurements, ambiguity, or inconsistency. Heteroskedasticity—or the fact that the variance of noise depends on the context—is more than a nuisance in statistical estimation. In ABM research in particular, the variance of uncertainty can be a source of nonlinearity. As such, introducing noise into a model should not be equated merely with ‘running the simulations with different random seeds’ or ‘drawing agent attributes from a normal distribution’.
  2. Noise enters social systems in many ways and in every aspect of the system. This includes noisy observations of others or the environment (Gigerenzer & Brighton, 2009), noisy transmission of signals (McMahan & Evans, 2018), noisy application of heuristics (Mäs & Nax, 2016), noisy interaction patterns (Lloyd-Smith et al., 2005), heterogeneity across societies and across individuals (Kahneman, Sibony, & Sunstein, 2021), and inconsistencies over time (Vul & Pashler, 2008). This is crucial because noise representing different forms and different sources of uncertainty or randomness can affect social phenomena such as social influence, consensus, and polarization in quite distinct ways (as we will outline in the next section).
  3. Noise can be adaptive and heterogeneous across individuals. Noise is not a passive property of a system, but can be a context-dependent, dynamically adapted strategy (Frankenhuis, Panchanathan, & Smaldino, 2023). For example, some individuals tend to be more covert and less precise than others, for instance when they perceive themselves to be in a minority (Smaldino & Turner, 2022). Some situations require individuals to be more noisy or unpredictable, like taking a penalty in soccer, while others require less, such as writing an online dating ad. People need to decide on and adapt the degree of noise in their social signals. Smaldino et al. (2023) highlighted that all strategies that lead collectives to perform well in solving complex tasks depend in some way on maintaining (but also adapting to) a certain level of transient noise.
  4. Noise is itself a signal. There are famous examples of institutions or individuals using noise signaling to spread doubt and uncertainty in debates about climate change or the health effects of smoking (see ‘Merchants of Doubt’ by Oreskes & Conway, 2010). Such actors signal noise to diffuse and discredit valuable information. One could certainly argue that Trump’s noisy stance on tariffs also falls into this category. 

In short, noise represents meaningful, multi-faceted, adaptive, and strategic aspects of a system. In social systems—which are, by definition, systems of interdependence—noise is essential to understanding that system. As Macy & Tsvetkova put it: ‘strip away the noise and you may strip away the explanation’ (2015).

A taxonomy of noise in opinion dynamics

In our paper ‘Noise and opinion dynamics’, published last year in Royal Society Open Science, we examined whether and how different sources of noise affect the results of a model of opinion dynamics (Steiglechner et al., 2024). The model builds on the bounded confidence model by Deffuant et al. (2000), calibrated on a survey measuring environmental attitudes. We identified at least four types of noise in a system of opinion formation through dyadic social influence: exogenous noise, selectivity noise, adaptation noise, and ambiguity noise.

Figure 1. Sources of noise in social influence models for opinion dynamics (adapted from Steiglechner et al., 2024)

Each type of noise in our taxonomy enters at a different stage of the interaction process (as shown in Figure 1). Ambiguity and adaptation noise depend on the current attitudes of the sender and the receiver, respectively, whereas selectivity noise acts on the connections between individuals. Exogenous noise is a ‘catch-all’ category of noise added to the agents’ attributes regardless of the (success of the) interaction. Some of these types of noise may have similar effects on population-level opinion dynamics in the thermodynamic limit (Nugent, Gomes, & Wolfram, 2024). But they can lead to quite different trajectories, and to different conclusions about the effect of noise, in realistic finite-size, finite-time simulations.
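To make the taxonomy concrete, the following Python sketch shows one plausible place where each of the four noise types could enter a single dyadic bounded-confidence update. This is a schematic illustration of the taxonomy, not the actual model of Steiglechner et al. (2024); the parameterization (Gaussian perturbations, probabilistic partner swap) is our own simplifying assumption.

```python
import random

rng = random.Random(0)  # module-level RNG for reproducibility

def noisy_update(x, i, j, eps=0.2, mu=0.5, sel=0.0, amb=0.0, adapt=0.0, exo=0.0):
    """Schematic dyadic opinion update showing where each noise type in the
    taxonomy could enter. x: list of opinions in [0, 1]; agent i receives
    influence from sender j. sel/amb/adapt/exo are noise magnitudes
    (a hypothetical parameterization, not the published model).
    """
    # Selectivity noise: with probability sel, the intended interaction
    # partner is replaced by a random one (noise on the connections).
    if rng.random() < sel:
        j = rng.randrange(len(x))
    # Ambiguity noise: the sender expresses a noisy version of their opinion.
    expressed = x[j] + rng.gauss(0.0, amb)
    # Bounded-confidence social influence.
    if abs(expressed - x[i]) < eps:
        # Adaptation noise: the receiver updates imprecisely.
        x[i] += mu * (expressed - x[i]) + rng.gauss(0.0, adapt)
    # Exogenous noise: applied regardless of interaction success.
    x[i] += rng.gauss(0.0, exo)
    x[i] = min(1.0, max(0.0, x[i]))  # keep opinions in [0, 1]
    return x
```

Setting any one of `sel`, `amb`, `adapt`, or `exo` to a nonzero value while keeping the others at zero isolates that source of noise, which is the kind of controlled comparison the taxonomy is meant to support.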

Previous work had established that even small amounts of noise can affect the tendency of the bounded confidence model to produce complete consensus or multiple fragmented, internally coherent groups, but our results highlight that different types of noise have quite distinct signatures. For example, while selectivity noise always increases the coherence of groups, only intermediate levels of exogenous noise unite individuals. Moreover, exogenous noise leads to convergence because it destroys the internal coherence within the fragmented groups, whereas selectivity noise leads to convergence because it connects polarized individuals across these groups. Ambiguity noise has yet another signature. For example, while low levels of ambiguity have no effect on fragmentation (similar to exogenous and adaptation noise), intermediate and even high levels of ambiguity can produce a somewhat coherent majority group (similar to selectivity noise). More importantly, ambiguity noise also produces drift: a gradual shift in the average opinion toward a more extreme position (Steiglechner et al., 2024). This is a remarkable result, because not only does ambiguous messaging alter the robustness of the clean, noiseless model, it actually produces a novel type of extremization using only positive influence!

Make some noise!

The above taxonomy is, of course, only a starting point for further discussion: it is not comprehensive and does not take adaptiveness or strategy into account. However, this variety of effects of the different types of noise on consensus, polarization, and social influence should already make us more aware of noise in general—not just as an ‘afterthought’ or a robustness check, but as a modeling choice that represents a critical component of the model. Many modeling studies do consider how noise can affect the model outputs, but it matters—a lot—where and how they introduce noise (see also De Sanctis & Galla, 2009).

Noise is an essential aspect of human behavior, social systems, and politics, as Trump’s back-and-forth on tariffs illustrates quite effectively these days. When studying social phenomena such as opinion formation and polarization, we should take the effects of noise as seriously as the effects of behavioral biases or heuristics (Kahneman, Sibony, & Sunstein, 2021). That is, while we social systems modelers tend to spend a lot of time formulating, justifying, and analyzing the behavioral rules of individuals (generally considered the core of the model), we should devote more time to formalizing what kind of noise enters the modeled system, where, and how, and to analyzing how this affects the dynamics (as also argued in the exchange of letters between Kahneman et al. and Krakauer & Wolpert, 2022). Noise is a meaningful, multi-faceted, adaptive, and strategic component of social systems. Rather than ‘just a robustness check’, it is a fundamental ingredient of the modeled system—a type of behavior in itself—and, thus, an object of study in its own right. This is a call to all modelers (in the house) to make some noise!

Acknowledgments

We thank Victor Møller Poulsen and Paul E. Smaldino for their feedback.

References

Axelrod, R. (1997). The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution, 41(2), 203–226. https://doi.org/10.1177/0022002797041002001

Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. American Journal of Sociology, 113(3), 702–734. https://doi.org/10.1086/521848

Deffuant, G., Neau, D., Amblard, F., & Weisbuch, G. (2000). Mixing beliefs among interacting agents. Advances in Complex Systems, 3(1–4), 87–98. https://doi.org/10.1142/S0219525900000078

De Sanctis, L., & Galla, T. (2009). Effects of noise and confidence thresholds in nominal and metric Axelrod dynamics of social influence. Physical Review E, 79(4), 046108. https://doi.org/10.1103/PhysRevE.79.046108

Flache, A., & Macy, M. W. (2011). Local convergence and global diversity: From interpersonal to social influence. Journal of Conflict Resolution, 55(6), 970–995. https://doi.org/10.1177/0022002711414371

Frankenhuis, W. E., Panchanathan, K., & Smaldino, P. E. (2023). Strategic ambiguity in the social sciences. Social Psychological Bulletin, 18, e9923. https://doi.org/10.32872/spb.9923

Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1), 107–143. https://doi.org/10.1111/j.1756-8765.2008.01006.x

Hegselmann, R., & Krause, U. (2002). Opinion dynamics and bounded confidence models, analysis and simulation. Journal of Artificial Societies and Social Simulation, 5(3). http://jasss.soc.surrey.ac.uk/5/3/2.html

Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A flaw in human judgment. New York: Little, Brown Spark.

Kahneman, D., Krakauer, D. C., Sibony, O., Sunstein, C. R., & Wolpert, D. (2022). An exchange of letters on the role of noise in collective intelligence. Collective Intelligence, 1(1). https://doi.org/10.1177/26339137221078593

Klemm, K., Eguíluz, V. M., Toral, R., & Miguel, M. S. (2003). Global culture: A noise-induced transition in finite systems. Physical Review E, 67(4), 045101. https://doi.org/10.1103/PhysRevE.67.045101

Lloyd-Smith, J. O., Schreiber, S. J., Kopp, P. E., & Getz, W. M. (2005). Superspreading and the effect of individual variation on disease emergence. Nature, 438(7066), 355–359. https://doi.org/10.1038/nature04153

Macy, M., & Tsvetkova, M. (2015). The signal importance of noise. Sociological Methods & Research, 44(2), 306–328. https://doi.org/10.1177/0049124113508093

Mäs, M., & Nax, H. H. (2016). A behavioral study of “noise” in coordination games. Journal of Economic Theory, 162, 195–208. https://doi.org/10.1016/j.jet.2015.12.010

McMahan, P., & Evans, J. (2018). Ambiguity and engagement. American Journal of Sociology, 124(3), 860–912. https://doi.org/10.1086/701298

Nugent, A., Gomes, S. N., & Wolfram, M.-T. (2024). Bridging the gap between agent based models and continuous opinion dynamics. Physica A: Statistical Mechanics and its Applications, 651, 129886. https://doi.org/10.1016/j.physa.2024.129886

Oreskes, N., & Conway, E. M. (2010). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. New York: Bloomsbury Press.

Page, B. I. (1976). The theory of political ambiguity. American Political Science Review, 70(3), 742–752. https://doi.org/10.2307/1959865

Pineda, M., Toral, R., & Hernández-García, E. (2011). Diffusing opinions in bounded confidence processes. The European Physical Journal D, 62(1), 109–117. https://doi.org/10.1140/epjd/e2010-00227-0

Smaldino, P. E., & Turner, M. A. (2022). Covert signaling is an adaptive communication strategy in diverse populations. Psychological Review, 129(4), 812–829. https://doi.org/10.1037/rev0000344

Smaldino, P. E., Moser, C., Pérez Velilla, A., & Werling, M. (2023). Maintaining transient diversity is a general principle for improving collective problem solving. Perspectives on Psychological Science, Advance online publication. https://doi.org/10.1177/17456916231180100

Steiglechner, P., Keijzer, M. A., Smaldino, P. E., Moser, D., & Merico, A. (2024). Noise and opinion dynamics: How ambiguity promotes pro-majority consensus in the presence of confirmation bias. Royal Society Open Science, 11(4), 231071. https://doi.org/10.1098/rsos.231071

Vul, E., & Pashler, H. (2008). Measuring the crowd within: Probabilistic representations within individuals. Psychological Science, 19(7), 645–647. https://doi.org/10.1111/j.1467-9280.2008.02136.x


Steiglechner, P. & Keijzer, M.(2025) Make some noise! Why agent-based modelers should embrace the power of randomness. Review of Artificial Societies and Social Simulation, 30 May 2025. https://rofasss.org/2025/05/31/noise


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Towards an Agent-based Platform for Crisis Management

By Christian Kammler1, Maarten Jensen1, Rajith Vidanaarachchi2, Cezara Păstrăv1

  1. Department of Computer Science, Umeå University, Sweden
  2. Transport, Health, and Urban Design (THUD) Research Lab, The University of Melbourne, Australia

“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” — John Woods

1       Introduction

Agent-based modelling can be a valuable tool for gaining insight into crises [3], both during a crisis and beforehand, to increase resilience. However, in the current state of the art, models have to be built from scratch, which is ill-suited to a crisis situation as it hinders quick responses. Consequently, the models do not play the central supportive role that they could. Not only is it hard to compare existing models (given the absence of standards) and assess their quality, but the most widespread toolkits, such as NetLogo [6], MESA (Python) [4], Repast (Java) [1,5], or Agents.jl (Julia) [2], are specific to the modelling field and lack the platform support necessary to empower policy makers to use the model (see Figure 1).

Fig. 1. Platform in the middle as a connector between the code and the model and interaction point for the user. It must not require any expert knowledge.

While some of these issues are systemic within the field of ABM (Agent-Based Modelling) itself, we aim to alleviate some of them in this particular context with a platform purpose-built for developing and using ABMs in a crisis. To do so, we view the problem through a multi-dimensional space consisting of the following dimensions (A–F):

  • A: Back-end to front-end interactivity
  • B: User and stakeholder levels
    – Social simulators to domain experts to policymakers
    – Skills and expertise in coding, modelling and manipulating a model
  • C: Crisis levels (Risk, Crisis, Resilience – also identified as – Pre Crisis, In Crisis, Post Crisis)
  • D: Language specific to language independent
  • E: Domain-specific to domain-independent (e.g. flooding, pandemics, climate change)
  • F: Required iteration level (Instant, rapid, slow)

A platform can now be viewed as a vector within this space. While all of these axes require in-depth research (for example in terms of correlation or where existing platforms fit), we chose to focus on the functionalities we believe would be the most relevant in ABM for crises.

2       Rapid Development

During a crisis, time is compressed (mainly axes C and F): instant and rapid iterations are necessary, while slow iterations are not suitable. As the crisis develops, the model may need to be adjusted quickly to absorb new data, actors, events, and response strategies, leading to new scenarios that need to be modelled and simulated. In this environment, models need to be built with reusability and rapid versioning in mind from the beginning; otherwise every new change makes the model more unstable and less trustworthy.

While a suite of best practices exists in general software development, they are not widely used in the agent-based modelling community. The platform needs a coding environment that favors modular, reusable code, allows easy storage and sharing of such modules in well-organized libraries, and makes it easy to integrate existing modules with new code.

This modularity not only helps with the right side of Figure 1; we can also use it to help with the left side at the same time. The conceptual model can be part of the respective module, allowing users to quickly determine whether a module is relevant and to understand what it does. Furthermore, modularity can be used to create a top-level, drag-and-drop model-building environment that allows rapid changes without having to write code (given that we take care of the interface properly).

Having the code and the conceptual model together would also lower the effort required to review these modules. The platform can further help with this task by keeping track of which modules have been reviewed and by versioning the modules, as they can be annotated accordingly. It has to be noted, however, that such a system does not guarantee a trustworthy model, even though it might be up to date in terms of versioning.

3       Model transparency

Another key factor we want to focus on is the stakeholder dimension (axis B). These people are not experts in terms of models (mainly the left side of Figure 1) and thus need extensive support to be empowered to use the simulation in a way that is meaningful for them. While for the visualization side (the how?) we can use insights from Data Visualization, the why side is not that easy.

In a crisis, it is crucial to quickly determine why the model behaves in a certain way in order to interpret the results. Here, the platform can help by offering tools to build model narratives (at agent, group, or whole population level), to detect events and trends, and to compare model behavior between runs. We can take inspiration from the larger software development field for a few useful ideas on how to visually track model elements, log the behavior of model elements, or raise flags when certain conditions or events are detected. However, we also have to be careful here, as we easily move towards the technical solution side and away from the stakeholder and policy maker. Therefore, more research has to be done on what support policy makers actually need. An avenue here can be techniques from data story-telling.

4       The way forward

What this platform will look like depends on the approaches we take going forward. We think that the following two questions are central (also to prompt further research):

  1. What are relevant roles that can be identified for a platform?
  2. Given a role for the platform, where should it exist within the space described, and what attributes/characteristics should it have?

While these questions are key to identifying whether existing platforms can be extended and shaped in the way we need, or whether we need to build a sandbox from scratch, we strongly advocate for an open-source approach. An open-source approach can not only draw on the range of expertise spread across the field, but also alleviate some of the trust challenges. One of the main challenges is that a trustworthy, well-curated model base with different modules does not yet exist. As such, the platform should first aim to aid in building this shared resource and add more related functionality as it becomes relevant. As for model-tracking tools, we should aim for simple tools first and build more complex functionality on top of them later.

A starting point can be to build modules for existing crises, such as earthquakes or floods, where it is possible to pre-identify most of the modelling needs, the level of stakeholder engagement, the level of policymaker engagement, etc.

With this we can establish the process of open-source modelling and learn how to integrate new knowledge quickly, and be potentially better prepared for unknown crises in the future.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise.

References

  1. Collier, N., North, M.: Parallel agent-based simulation with Repast for high performance computing. SIMULATION 89(10), 1215–1235 (2013), https://doi.org/10.1177/0037549712462620
  2. Datseris, G., Vahdati, A.R., DuBois, T.C.: Agents.jl: a performant and feature-full agent-based modeling software of minimal code complexity. SIMULATION 0(0), 003754972110688 (2022), https://doi.org/10.1177/00375497211068820
  3. Dignum, F. (ed.): Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer International Publishing, Cham (2021)
  4. Kazil, J., Masad, D., Crooks, A.: Utilizing Python for agent-based modeling: The Mesa framework. In: Thomson, R., Bisgin, H., Dancy, C., Hyder, A., Hussain, M. (eds.) Social, Cultural, and Behavioral Modeling. pp. 308–317. Springer International Publishing, Cham (2020)
  5. North, M.J., Collier, N.T., Ozik, J., Tatara, E.R., Macal, C.M., Bragen, M., Sydelko, P.: Complex adaptive systems modeling with Repast Simphony. Complex Adaptive Systems Modeling 1(1), 3 (March 2013), https://doi.org/10.1186/2194-3206-1-3
  6. Wilensky, U.: NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1999), http://ccl.northwestern.edu/netlogo/

Kammler, C., Jensen, M., Vidanaarachchi, R. and Păstrăv, C. (2023) Towards an Agent-based Platform for Crisis Management. Review of Artificial Societies and Social Simulation, 10 May 2023. https://rofasss.org/2023/05/10/abm4cm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Benefits of Open Research in Social Simulation: An Early-Career Researcher’s Perspective

By Hyesop Shin

Research Associate at the School of Geographical and Earth Sciences, University of Glasgow, UK

In March 2017, in the first year of my PhD, I attended a talk at the Microsoft Research Lab in Cambridge, UK, about the importance of reproducibility and replicability in science. Inspired by the talk, I moved my research beyond my word processor and hard disk to open repositories and social media. There have been some challenges in learning from other people's work and replicating it in my own project, but I found it more beneficial to share my problems and solutions for other people who may have encountered the same issues.

Having spoken to many early-career researchers (ECRs) about the need for open science, specifically whether sharing code is essential, the consensus was that it was not an essential component of their degree. A few answered that they were too embarrassed to share their code online because it was not polished enough. I somewhat empathised with their opinions but, at the same time, would insist that open research brings more benefit than shame.

I wrote this short piece to openly discuss the benefits of conducting open research and to suggest some points that ECRs should keep in mind. Throughout, I include some screenshots taken from my PhD work (Shin, 2021). I conclude by inviting personal experiences or other thoughts that might give more insight to the audience.

Benefits of Aiming for an Open Project

I argue here that being transparent and honest about your model development strengthens the credibility of the research. To that end, my thesis shared the original data, the annotated scripts, which are downloadable and executable, and wiki pages summarising the outcomes and interpretations (see Figure 1 for examples). This enables scholars and technicians to visit the repository if they are interested in the source code or outcomes. People can also comment if any errors or bugs are identified, or if the model does not execute on their machine, or suggest alternative ways to tackle the same problem. Even during development, many developers share their work via online repositories (e.g. GitHub, GitLab) and social media to ask for advice. Agent-based models are mostly uploaded to CoMSeS.net (previously named OpenABM). All of this can improve the quality of research.


Figure 1. A screenshot of a GitHub page showing how open platforms can help other people to understand the outcomes step by step

More practically, one can learn new ideas by helping each other. If there is a technical issue that can't be solved, the problem should not be kept hidden but rather opened up and solved together with experts online and offline. Figure 2 is a pragmatic example of posing a question to a wide range of developers on Stack Overflow, an online community where programmers share and build code. Providing my NetLogo code, I asked how to send an agent group from one location to another. An anonymous user with the ID JenB kindly responded with a new set of code, which helped me structure my code more effectively.


Figure 2. Raising a question about sending agents from one location to another in NetLogo

Another example concerns errors I encountered while running NetLogo with the R package “nlrx” (Salecker et al., 2019). Here, R was used as an interface to submit iterative NetLogo jobs to an HPC (High Performance Computing) cluster to improve execution speed. Much to my surprise, I received error messages due to early terminations of failed HPC jobs. Not knowing what to do, I posed a question to the developer of the package (see Figure 3) and luckily got a response: the R ecosystem stores all assigned objects in RAM, and even with gigabytes of RAM it struggles to write 96,822 patches over 8,764 ticks to a spreadsheet.

A Stack Overflow answer also informed me that NetLogo has a memory ceiling of 1GB[i] and keeps each run in memory before it shuts down. Thus, if the model is huge and requires several iterations, the execution speed will likely decrease after a few iterations. Before seeing this information, I could not understand why the model took 1 hour 20 minutes to finish the first run but struggled to maintain that speed by the twentieth run. Hence, sharing the technical obstacles that occur in the middle of research can save a lot of time for those contemplating similar research.


Figure 3. Comments posted on an online repository regarding the memory issue that NetLogo and R encountered
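One general mitigation for this kind of memory bottleneck is to stream each record to disk as it is produced, rather than accumulating whole runs in RAM. The Python sketch below illustrates the pattern with a hypothetical stand-in for a simulation run (my original setup used R and nlrx; this is only the idea, not that code).

```python
import csv

def run_model(run_id, ticks=5, patches=3):
    """Hypothetical stand-in for one simulation run: yields one record
    per (tick, patch) instead of building one giant in-memory table."""
    for t in range(ticks):
        for p in range(patches):
            yield {"run": run_id, "tick": t, "patch": p, "value": t * p}

def stream_runs_to_csv(path, n_runs=2):
    """Append each record to disk as it is produced, so memory use stays
    flat no matter how many runs, ticks, or patches there are."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["run", "tick", "patch", "value"])
        writer.writeheader()
        for r in range(n_runs):
            for record in run_model(r):
                writer.writerow(record)
```

Because the generator never materialises the full table, memory use depends only on the size of a single record, not on the number of ticks or patches.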

The Future for Open Research

For future quantitative studies in social simulation, this paper suggests that students and early-career researchers should acclimatise themselves to open-source platforms in order to conduct sustainable research. Just as clarity, conciseness, and coherence are the important C's of writing, good programming should take the following points into consideration.

First is clarity and conciseness (C&C). Here, clarity means that scripts should be neatly documented. The computer does not care whether code is dirty or neat; it only cares whether it is syntactically correct. But it matters when other people attempt to understand the task. If the outcome is the same, it is always better to write clearer and simpler code, both for other people and for future upgrades. Researchers should therefore refer to other people's work and learn how to code effectively. Another way to maintain clarity is to use descriptive and distinctive names for new variables. This might seem to contradict conciseness, but it is important: a common mistake is to assign variables abstract names such as LP1, LP2, …, LP10, which seem clear and concise to the model builder but are much harder for others reviewing the code. Einstein's famous quote, "Everything should be made as simple as possible, but not simpler", is an appropriate phrase for model builders to keep in mind. Hence, instead of LP9, names such as LandPriceIncreaseRate2009 (camel case) or landprice_incrate_2009 (snake case) are more effective for reviewers trying to understand the model.

Second is reproducibility and replicability (R&R). To be reproducible and replicable, the script should first of all run without errors when others execute it, and any known errors or bugs should be reported. It is also useful to document the required libraries and dependencies. This matters because different operating systems (OSs) install packages differently. For instance, the sf package in R installs from a binary package on Windows and macOS, while on Linux GDAL (to read and write vector and raster data), PROJ (which deals with projections), and GEOS (which provides geospatial functions) must be installed separately before the package itself. Finally, it is very helpful to include unit testing in the model. While R and Python provide splendid examples in their vignettes, NetLogo offers its Models Library but goes no further than that. Offering unit-testing examples can aid understanding when the whole model is too complicated for others to comprehend. It also signals that the modeller has full control of the model, because without unit tests the verification process becomes error-prone. The good news is that NetLogo has recently released the Beginner's Interactive Dictionary, with friendly explanations, videos, and code examples[ii].
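NetLogo does not ship a unit-testing framework, but the idea is easy to sketch in Python with a hypothetical model component: test the small building blocks against cases with known answers before embedding them in the full, harder-to-inspect model.

```python
def gini(incomes):
    """Gini coefficient of a list of incomes (a common ABM summary statistic)."""
    sorted_incomes = sorted(incomes)
    n = len(sorted_incomes)
    total = sum(sorted_incomes)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on the rank-weighted ordered values.
    weighted_sum = sum((i + 1) * x for i, x in enumerate(sorted_incomes))
    return (2 * weighted_sum) / (n * total) - (n + 1) / n

# Unit tests: tiny cases with answers we can work out by hand.
assert gini([1, 1, 1, 1]) == 0.0                  # perfect equality
assert abs(gini([0, 0, 0, 1]) - 0.75) < 1e-9      # maximal inequality for n = 4
```

If such checks pass, a reviewer who cannot absorb the whole simulation can still trust its components; if the model later produces strange output, the tests localise the problem.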

Third is to maintain version control. In terms of sustainability, researchers should be aware of software maintenance. Much scientific software relies on libraries and packages that are built against a particular version. If the underlying software is upgraded and no longer accepts previous versions, package developers must keep updating their code to run on the new version. For example, NetLogo 6.0 changed significantly compared to versions 5.X. The biggest change was the replacement of tasks[iii] by anonymous procedures (Wilensky, 1999): tasks are no longer primitives and must be converted to the arrow syntax. For example, where NetLogo 5 wrote foreach [1 2 3] [ show ? ] using the ? placeholder, NetLogo 6 expresses the same loop as foreach [1 2 3] [ x -> show x ]. Models that have not been converted to the new version can be opened as read-only but cannot be executed. Geospatial packages in R such as rgdal and sf have also struggled whenever a major update was made to the packages themselves or to the R version, because of their many dependencies. Even ArcGIS, a UI (User Interface) software, had issues when it was upgraded from version 9.3 to 10: projects scripted in VBA under 9.3 broke because VBA was not recognised in the new, Python-based version. This is another example of why backward compatibility and deprecation mechanisms are important.
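A lightweight defence against such breakage, sketched here in Python (the package names are only examples), is to record the interpreter, OS, and exact package versions alongside every set of results, so that a run can later be reproduced in the environment it was built for:

```python
import sys
import platform
from importlib import metadata

def environment_report(packages):
    """Return a dict recording the interpreter, OS, and versions of key packages."""
    report = {
        "python": sys.version.split()[0],
        "os": platform.system(),
    }
    for name in packages:
        try:
            report[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            report[name] = "not installed"
    return report

# Store this next to the model outputs (e.g. as JSON) when a run starts.
print(environment_report(["numpy", "pandas"]))
```

R users get the same information from sessionInfo(); the principle, not the language, is what matters: a result without its environment is only half a result.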

Lastly, for more advanced users, it is also recommended to use a collaborative platform that executes every result from the code with the exact versions used. One such platform is Code Ocean, which the Nature research team recently chose for peer-reviewing code (Perkel, 2019). The Nature editors and peer reviewers argue that coding has become the norm across many disciplines, and have therefore asserted that the modelling process, including data quality, conciseness, reproducibility, and documentation, should be a requirement. Although the training procedure can be difficult at first, it will lead researchers to conduct their work more responsibly.

Looking for Opinions

With the advent of the era of big data and data science, in which people collaborate online and a 'sharing is caring' atmosphere has become the norm (Arribas-Bel et al., 2021; Lovelace, 2021), I insist that open research should no longer be optional. However, one may argue that although open research is by far an excellent model that can benefit many of today's projects, certain risks might concern ECRs, such as intellectual property issues, code quality, and technical security. Thus, if you have different opinions on this issue, or simply wish to add your own experiences from a PhD in social simulation, please add your thoughts in a thread.

Notes

[i] http://ccl.northwestern.edu/netlogo/docs/faq.html#how-big-can-my-model-be-how-many-turtles-patches-procedures-buttons-and-so-on-can-my-model-contain

[ii] https://ccl.northwestern.edu/netlogo/bind/

[iii] Tasks can be equations, x + y, or a set of lists [1 2 3 4 5]

References

Arribas-Bel, D., Alvanides, S., Batty, M., Crooks, A., See, L., & Wolf, L. (2021). Urban data/code: A new EP-B section. Environment and Planning B: Urban Analytics and City Science, 23998083211059670. https://doi.org/10.1177/23998083211059670

Lovelace, R. (2021). Open source tools for geographic analysis in transport planning. Journal of Geographical Systems, 23(4), 547–578. https://doi.org/10.1007/s10109-020-00342-2

Perkel, J. M. (2019). Make code accessible with these cloud services. Nature, 575(7781), 247. https://doi.org/10.1038/d41586-019-03366-x

Salecker, J., Sciaini, M., Meyer, K. M., & Wiegand, K. (2019). The nlrx r package: A next-generation framework for reproducible NetLogo model analyses. Methods in Ecology and Evolution, 10(11), 1854–1863. https://doi.org/10.1111/2041-210X.13286

Shin, H. (2021). Assessing Health Vulnerability to Air Pollution in Seoul Using an Agent-Based Simulation. University of Cambridge. https://doi.org/10.17863/CAM.65615

Wilensky, U. (1999). Netlogo. Northwestern University: Evanston, IL, USA. https://ccl.northwestern.edu/netlogo/


Shin, H. (2021) Benefits of Open Research in Social Simulation: An Early-Career Researcher’s Perspective. Review of Artificial Societies and Social Simulation, 24th Nov 2021. https://rofasss.org/2021/11/23/benefits-open-research/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

What more is needed for Democratically Accountable Modelling?

By Bruce Edmonds

(A contribution to the: JASSS-Covid19-Thread)

In the context of the Covid19 outbreak, the (Squazzoni et al 2020) paper argued for the importance of making complex simulation models open (a call reiterated in Barton et al 2020) and that relevant data needs to be made available to modellers. These are important steps but, I argue, more is needed.

The Central Dilemma

The crux of the dilemma is as follows. Complex and urgent situations (such as the Covid19 pandemic) are beyond the human mind to encompass – there are just too many possible interactions and complexities. For this reason one needs complex models, to leverage some understanding of the situation as a guide for what to do. We cannot directly understand the situation, but we can understand some of what a complex model tells us about the situation. The difficulty is that such models are, themselves, complex and difficult to understand. It is easy to deceive oneself using such a model. Professional modellers only just manage to get some understanding of such models (and then, usually, only with help and critique from many other modellers and having worked on it for some time: Edmonds 2020) – politicians and the public have no chance of doing so. Given this situation, any decision-makers or policy actors are in an invidious position – whether to trust what the expert modellers say if it contradicts their own judgement. They will be criticised either way if, in hindsight, that decision appears to have been wrong. Even if the advice supports their judgement there is the danger of giving false confidence.

What options does such a policy maker have? In authoritarian or secretive states there is no problem (for the policy makers) – they can listen to whomever they like (hiring or firing advisers until they get advice they are satisfied with), and then either claim credit if it turned out to be right or blame the advisers if it was not. However, such decisions are very often not value-free technocratic decisions, but ones that involve complex trade-offs that affect people's lives. In these cases the democratic process is important for getting good (or at least accountable) decisions. However, democratic debate and scientific rigour often do not mix well [note 1].

A Cautionary Tale

As discussed in (Aodha & Edmonds 2017), scientific modelling can make things worse, as in the case of the North Atlantic Cod Fisheries Collapse. In this case, the modellers became enmeshed within the standards and wishes of those managing the situation and ended up confirming their wishful thinking. Technocratising the decision about how much it was safe to catch had the effect of narrowing the debate down to particular measurement and modelling processes (which turned out to be gravely mistaken). In doing so the modellers contributed to the collapse of the industry, with severe social and ecological consequences.

What to do?

How to best interface between scientific and policy processes is not clear, however some directions are becoming apparent.

  • That the process of developing and giving advice to policy actors should become more transparent, including who is giving advice and on what basis. In particular, any reservations or caveats that the experts add should be open to scrutiny so the line between advice (by the experts) and decision-making (by the politicians) is clearer.
  • That such experts are careful not to over-state or hype their own results, for example by implying that their model can predict (or forecast) the future of complex situations and so anticipate the effects of policy before implementation (de Matos Fernandes and Keijzer 2020). Often a reliable assessment of results only emerges after a period of academic scrutiny and debate.
  • Policy actors need to learn a little bit about modelling, in particular when and how modelling can be reliably used. This is discussed in (Government Office for Science 2018, Calder et al. 2018) which also includes a very useful checklist for policy actors who deal with modellers.
  • That the public learn some maturity about the uncertainties in scientific debate and conclusions. Preliminary results and critiques tend to be jumped on too early to support one side of a polarised debate, or models are rejected simply on the grounds that they are not 100% certain. We need to collectively develop ways of facing and living with uncertainty.
  • That the decision-making process is kept as open to input as possible. The modelling (and its limitations) should not be used as an excuse to limit the voices that are heard, or to narrow the debate to a purely technical one that excludes values (Aodha & Edmonds 2017).
  • That public funding bodies and journals should insist on researchers making their full code and documentation available to others for scrutiny, checking and further development (readers can help by signing the Open Modelling Foundation’s open letter and the campaign for Democratically Accountable Modelling’s manifesto).

Some Relevant Resources

  • CoMSeS.net — a collection of resources for computational model-based science, including a platform for publicly sharing simulation model code and documentation and forums for discussion of relevant issues (including one for covid19 models)
  • The Open Modelling Foundation — an international open science community that works to enable the next generation modelling of human and natural systems, including its standards and methodology.
  • The European Social Simulation Association — which is planning to launch some initiatives to encourage better modelling standards and facilitate access to data.
  • The Campaign for Democratic Modelling — which campaigns concerning the issues described in this article.

Notes

Note 1: As an example of this see accounts of the relationship between the UK scientific advisory committees and the Government in the Financial Times and BuzzFeed.

References

Barton et al. (2020) Call for transparency of COVID-19 models. Science, Vol. 368(6490), 482-483. doi:10.1126/science.abb8637

Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822. (see also http://cfpm.org/discussionpapers/236)

Calder, M., Craig, C., Culley, D., de Cani, R., Donnelly, C.A., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N., Hargrove, C., Hinds, D., Lane, D.C., Mitchell, D., Pavey, G., Robertson, D., Rosewell, B., Sherwin, S., Walport, M. & Wilson, A. (2018) Computational modelling for decision-making: where, why, what, who and how. Royal Society Open Science, 5(6), 172096. doi:10.1098/rsos.172096

Edmonds, B. (2020) Good Modelling Takes a Lot of Time and Many Eyes. Review of Artificial Societies and Social Simulation, 13th April 2020. https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/

de Matos Fernandes, C. A. and Keijzer, M. A. (2020) No one can predict the future: More than a semantic dispute. Review of Artificial Societies and Social Simulation, 15th April 2020. https://rofasss.org/2020/04/15/no-one-can-predict-the-future/

Government Office for Science (2018) Computational Modelling: Technological Futures. https://www.gov.uk/government/publications/computational-modelling-blackett-review

Squazzoni, F., Polhill, J. G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F. and Gilbert, N. (2020) Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action. Journal of Artificial Societies and Social Simulation, 23(2):10. <http://jasss.soc.surrey.ac.uk/23/2/10.html>. doi: 10.18564/jasss.4298


Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM

By Sebastian Achter, Melania Borit, Edmund Chattoe-Brown, Christiane Palaretti and Peer-Olaf Siebers

The initiative presented below arose from a Lorentz Center workshop on Integrating Qualitative and Quantitative Evidence using Social Simulation (8-12 April 2019, Leiden, the Netherlands). At the beginning of this workshop, the attendees divided themselves into teams aiming to work on specific challenges within the broad domain of the workshop topic. Our team took up the challenge of looking at "Rigour, Transparency, and Reuse". The aim that emerged from our initial discussions was to create a framework for augmenting rigour and transparency (RAT) of data use in ABM when designing, analysing, and publishing such models.

One element of the framework that the group worked on was a roadmap of the modelling process in ABM, with particular reference to the use of different kinds of data. This roadmap was used to generate the second element of the framework: A protocol consisting of a set of questions, which, if answered by the modeller, would ensure that the published model was as rigorous and transparent in terms of data use, as it needs to be in order for the reader to understand and reproduce it.

The group (which had diverse modelling approaches and spanned a number of disciplines) recognised the challenges of this approach and much of the week was spent examining cases and defining terms so that the approach did not assume one particular kind of theory, one particular aim of modelling, and so on. To this end, we intend that the framework should be thoroughly tested against real research to ensure its general applicability and ease of use.

The team was also very keen not to "reinvent the wheel", but to try to develop the RAT approach (in connection with data use) to augment and "join up" existing protocols or documentation standards for specific parts of the modelling process. For example, the ODD protocol (Grimm et al. 2010) and its variants are generally accepted as the established way of documenting ABM, but they do not require rigorous documentation or justification of the data used in the modelling process.

The plan to move forward with the development of the framework is organised around three journal articles and associated dissemination activities:

  • A literature review of best (data use) documentation and practice in other disciplines and research methods (e.g. PRISMA – Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
  • A literature review of available documentation tools in ABM (e.g. ODD and its variants, DOE, the “Info” pane of NetLogo, EABSS)
  • An initial statement of the goals of RAT, the roadmap, the protocol and the process of testing these resources for usability and effectiveness
  • A presentation, poster, and round table at SSC 2019 (Mainz)

We would appreciate suggestions for items that should be included in the literature reviews, “beta testers” and critical readers for the roadmap and protocol (from as many disciplines and modelling approaches as possible), reactions (whether positive or negative) to the initiative itself (including joining it!) and participation in the various activities we plan at Mainz. If you are interested in any of these roles, please email Melania Borit (melania.borit@uit.no).

Acknowledgements

Chattoe-Brown’s contribution to this research is funded by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1) funded by ESRC via ORA Round 5 (PI: Professor Bruce Edmonds, Centre for Policy Modelling, Manchester Metropolitan University: https://gtr.ukri.org/projects?ref=ES%2FS015159%2F1).

References

Grimm, V., Berger, U., DeAngelis, D. L., Polhill, J. G., Giske, J. and Railsback, S. F. (2010) ‘The ODD Protocol: A Review and First Update’, Ecological Modelling, 221(23):2760–2768. doi:10.1016/j.ecolmodel.2010.08.019


Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. and Siebers, P.-O.(2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)