
Modelling Deep Structural Change in Agent-Based Social Simulation

By Thorid Wagenblast1, Nicholas Roxburgh2 and Alessandro Taberna3

1 Delft University of Technology, ORCID: 0009-0003-5324-3778
2 The James Hutton Institute, ORCID: 0000-0002-7821-1831
3 CMCC Foundation – Euro-Mediterranean Center on Climate Change, RFF-CMCC European Institute on Economics and the Environment, ORCID: 0000-0002-0207-4148

Introduction

Most agent-based models (ABMs) are designed around the assumption of a broadly stable system architecture. Whether exploring emergent dynamics or testing the effects of external interventions or stressors, such models typically operate with a fixed ontology – predefined agent types, attribute classes, behavioural repertoires, processes, and social and institutional structures. While this can allow rich exploration of dynamics within the given configuration, it limits the model’s possibility space by excluding forms of change that would require the structure itself to evolve.

Some of the most consequential forms of real-world change involve shifts in the system architecture itself. These forms of change – what we refer to here as deep structural change – reconfigure the underlying logic and potentialities of the system. This may involve, for example, dramatic shifts in the environment in which agents operate, the introduction of novel technologies, or reshaping of the roles and categories through which agents understand and act in the world. Such transformations pose a fundamentally different challenge from those typically addressed in most agent-based modelling studies to date – one that pushes beyond parameter tuning or rule adjustment, and calls for new approaches to ontology design, model construction, and the conceptualisation of structural transformation and uncertainty in simulation.

Various theoretical lenses can be applied to this topic; the concepts of transformation and regime shift seem particularly pertinent. Transformations, in contrast to incremental or minor changes, are large-scale and significant, but beyond that are not characterised by any specific features (Feola, 2015). The changes we explore here are more closely linked to regime shifts, which are likewise characterised by structural change but carry a notion of abruptness. Methods to detect and understand regime shifts and structural change in relation to social simulation have been discussed for some time (Filatova, Polhill & van Ewijk, 2016). Nonetheless, there is still a lack of understanding of what this structural change entails and how it applies in social simulation, particularly in ABMs.

To explore these issues, the European Social Simulation Association (ESSA) Special Interest Group on Modelling Transformative Change (SIG-MTC) organised a dedicated session at the Social Simulation Fest 2025. The session aimed to elicit experiences, ideas, and emerging practices from the modelling community around how deep structural change is understood and approached in agent-based simulation. Participants brought perspectives from a wide range of modelling contexts – including opinion dynamics, energy systems, climate adaptation, food systems, and pandemic response – with a shared interest in representing deep structural change. A majority of participants (~65%) reported that they were already actively working on, or thinking about, aspects of deep structural change in their modelling practice.

The session was framed as an opportunity to move beyond static ontologies and explore how models might incorporate adaptive structures or generative mechanisms capable of capturing deep structural shifts. As described in the session abstract:

We will discuss what concepts related to deep structural change we observe and how models can incorporate adaptive ontologies or generative mechanisms to capture deep structural shifts. Furthermore, we want to facilitate discussion on the challenges we face when trying to model these deep changes and what practices are currently used to overcome these.

This article reflects on key insights from that session, offering a synthesis of participant definitions, identified challenges, and promising directions for advancing the modelling of deep structural change in agent-based social simulation.

Defining deep structural change

Participant perspectives


To explore how participants understood deep structural change and its characteristics, we used both a pre-workshop survey (N=20) and live group discussion activities (N ≈ 20, divided into six discussion groups). The survey asked participants to define “deep structural change” in the context of social systems or simulations, and to explain how it differs from incremental change. During the workshop, groups expanded on these ideas using a collaborative Miro board, responding to two prompts – “What is deep structural change?” and “How does it differ from incremental change?” – before formulating a shared “Group definition”. The exercises benefited from the conceptual and disciplinary diversity of participants. Individuals approached the prompts from different angles – shaped by their academic backgrounds and modelling traditions – resulting in a rich and multifaceted view of what deep structural change can entail.

Across the different exercises, a number of common themes emerged. One of the most consistent themes was the idea that deep structural change involves a reconfiguration of the system’s architecture – a shift in its underlying mechanisms, causal relationships, feedback loops, or rules of operation. This perspective goes beyond adjusting parameters; it points to transformations in what the system is, echoing the emphasis in our introductory framing on changes to the system’s underlying logic and potentialities. Participants described this in terms such as “change in causal graph”, “drastic shift in mechanisms and rules”, and “altering the whole architecture”. Some also emphasised the outcomes of such reconfigurations – the emergence of a new order, new dominant feedbacks, or a different equilibrium. As one participant put it, deep structural change is “something that brings out new structure”; others described “profound, systemic shifts that radically reshape underlying structures, processes and relationships”.

Another frequently discussed theme was the role of social and behavioural change in structural transformation – particularly shifts in values, norms, and decision-making. Several groups suggested that changes in attitudes, awareness, or shared meanings could contribute to or signal deeper structural shifts. In some cases, these were framed as indicators of transformation; in others, as contributing factors or intended outcomes of deliberate change efforts. Examples included evolving diets, institutional reform, and shifts in collective priorities. Participants referred to “behavioural change coming from a change in values and/or norms” and “a fundamental shift in values and priorities”.

Furthermore, participants discussed how deep structural change differs from incremental change. They described deep structural change as difficult to reverse and as marked by discontinuities or thresholds that shift the system into a new configuration, in contrast to slow, gradual incremental change. While some noted that incremental changes might accumulate and contribute to structural transformation, deep structural change was more commonly seen as involving a qualitative break from previous patterns. Several responses highlighted periods of instability or disruption as part of this process, in which the system may reorder around new structures or priorities.

Other topics emerging in passing included the distinction between scale and depth, the role of intentionality, and the extent to which a change must be profound or radical to qualify as deeply structural. This diversity of thought reflects both the complexity of deep structural change as a phenomenon and the range of domains in which it is seen as relevant. Rather than producing a single definition, the session surfaced multiple ways in which change can be considered structural, opening up productive space for further conceptual and methodological exploration.

A distilled definition

Drawing on both existing literature and the range of perspectives shared by participants, we propose the following working definition. It aims to clarify what is meant by deep structural change from the standpoint of agent-based modelling, while acknowledging its place within broader discussions of transformative change.

Deep structural change is a type of transformative change. From an agent-based modelling perspective, it entails an ontological reconfiguration – the emergence, disappearance, or transformation of entities, relationships, structures, and contextual features. While transformative change can occur within a fixed model ontology, deep structural change entails a revision of the ontology itself.

Challenges in modelling deep structural change

To understand the challenges modellers face when trying to incorporate deep structural change in ABMs, or social simulations in general, we again drew on the pre-workshop survey and had participants brainstorm on a Miro board. We asked about the “challenges [they] have encountered in this process” and “how [they] would overcome these challenges”. The points raised can be roughly grouped into three areas: theory and data; model complexity; and definition and detection.

The first challenge relates to the availability of data on deep structural change and the formalisation of related theory. Social simulations increasingly draw on empirical data in order to model real-world phenomena more realistically. However, such data is often poor at capturing structural system change, reflecting the status quo rather than the potential for transformation. And while there are theories describing change, formalising these qualitative processes brings its own challenges, forcing modellers to hypothesise mechanisms and accept large uncertainties about model accuracy.

Second, a balance has to be struck between keeping the model simple and understandable, and making it complex enough to allow ontologies to shift and deep structural change to emerge. Participants highlighted the need for flexibility in model structures, so that new structures can develop. On the other hand, there is a risk of imposing transformation paths – effectively “telling” the model how to transform. In other words, it is often unclear how to ensure that the necessary conditions for modelling deep structural change are present without imposing the pathway of change.
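One pragmatic way to keep an ontology revisable is to represent agent roles and behavioural repertoires as runtime data rather than hard-coded classes. The sketch below is purely illustrative (the roles, names, and timing are our invention, not a practice reported by participants, and the trigger at step 10 is scripted for brevity where in a real model it might be endogenous); it shows a new role and behaviour entering the model mid-run, revising the ontology itself:

```python
# Illustrative sketch: agent roles and behaviours as runtime data, so the
# model's ontology can be revised while the simulation is running.
import random

class Agent:
    def __init__(self, role, behaviours):
        self.role = role
        self.behaviours = dict(behaviours)   # behaviour name -> callable

    def act(self):
        for rule in self.behaviours.values():
            rule(self)

def farm(agent):
    agent.output = getattr(agent, "output", 0) + 1

def trade(agent):
    agent.wealth = getattr(agent, "wealth", 0) + 1

# Initial ontology: a single role with a single behaviour.
population = [Agent("farmer", {"farm": farm}) for _ in range(10)]

for t in range(20):
    if t == 10:
        # Deep structural change: a new role and behaviour appear, and
        # some agents are reclassified -- the ontology itself is revised.
        for agent in random.sample(population, 3):
            agent.role = "trader"
            agent.behaviours = {"trade": trade}
    for agent in population:
        agent.act()

roles = {a.role for a in population}
print(sorted(roles))  # → ['farmer', 'trader']
```

The design choice here is that nothing about the set of roles is fixed at compile time, which leaves room for transformation without prescribing a full pathway – though, as the paragraph above notes, deciding *when* and *how* such revisions occur without "telling" the model how to transform remains the hard part.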

The final challenge concerns the definition and detection of deep structural change. This article begins to address the question of definition, but detection remains difficult — even with greater conceptual clarity. How can one be confident that an observed change is genuinely deep and structural, and that the system has entered a new regime? This question touches on our ability to characterise system states, dominant feedbacks, necessary preconditions, and the timescales over which change occurs.

Closing remarks

Understanding transformative change is gaining attention as a route to insight into complex issues, increasingly through the use of social simulation. For social simulation modellers, it is therefore important to be able to model deep structural change. This workshop serves as a starting point for what we hope will be a wider discussion within the ESSA community on how to model transformative change. Bringing together social simulation researchers showed us that the problem is being tackled from different angles. The definition provided above is a first attempt to combine these views, but key challenges remain. Thus far, modellers have approached the problem case by case; a set of more systematic approaches would be valuable.

The SIG-MTC will continue to examine questions around how we might effectively model deep structural change over the coming months and years, working with the ABM community to identify fruitful routes forward. We invite readers to comment below on any further approaches to modelling deep structural change that they view as promising and to provide their own reflections on the topics discussed above. If you are interested in this topic and would like to engage further, please check out our ESSA Special Interest Group on Modelling Transformative Change or reach out to any one of us.

Acknowledgements

We would like to thank the participants of the SimSocFest 2025 Workshop on Modelling Deep Structural Change for their engagement in the workshop and their willingness to think along with us.

References

Feola, G. (2015). Societal transformation in response to global environmental change: A review of emerging concepts. Ambio, 44(5), 376–390. https://doi.org/10.1007/s13280-014-0582-z

Filatova, T., Polhill, J. G., & van Ewijk, S. (2016). Regime shifts in coupled socio-environmental systems: Review of modelling challenges and approaches. Environmental Modelling & Software, 75, 333–347. https://doi.org/10.1016/j.envsoft.2015.04.003


Wagenblast, T., Roxburgh, N. and Taberna, A. (2025) Modelling Deep Structural Change in Agent-Based Social Simulation. Review of Artificial Societies and Social Simulation, 8 Aug 2025. https://rofasss.org/2025/08/08/structch


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Make some noise! Why agent-based modelers should embrace the power of randomness

By Peter Steiglechner1, Marijn Keijzer2

1 Complexity Science Hub, Austria; steiglechner@csh.ac.at
2 Institute for Advanced Study in Toulouse, France

Abstract

‘Noisy’ behavior, belief updating, and decision-making are universally observed, yet typically treated superficially, or not accounted for at all, by social simulation modelers. Here, we show how noise can affect model dynamics and outcomes, argue why injecting noise should become a central part of model analyses, and illustrate how it can deepen our understanding of (our mathematical models for) social behavior. We formulate some general lessons from the literature on noise, and illustrate how considering a more complete taxonomy of types of noise may lead to novel insights.

‘Flooding the zone’ with noise

In his inaugural address in January 2025, US president Trump announced that he would “tariff and tax foreign countries to enrich [US] citizens”. Since then, Trump has flooded the world news with a back-and-forth of threatening, announcing, and introducing tariffs, only to pause, halt, or even revoke them within a matter of days. Trump’s statements on tariffs are just one (albeit rather extreme) example of how noisy and ambiguous political signaling can be. Ambiguity in politics can be strategic (Page, 1976), but it can also simply result from a failure to accurately describe one’s position. Most of us are probably familiar with examples of noise in our own personal lives as well—we may wholeheartedly support one thing, and take a skeptical stance in the next discussion. People have always been inherently noisy (Vul & Pashler, 2008; Kahneman, Sibony, & Sunstein, 2021). But the pervasiveness of noise has become particularly evident in recent years, as social media have made it easier to frequently signal our political opinions (e.g. through like-buttons) and to track the noisy or inconsistent behaviors of others.

Noise can flip model dynamics

As quantitative scientists, many of us underestimate how important noise can be. Conventional statistical models used in the social sciences typically assume noise away. This is because unexplained variance in simple regression models—if not too abnormally distributed—should not affect the validity of the results; so why should we care? Social simulation models play by different rules. With strict behavioral rules at the micro-level and strong interdependence, noise at the individual level plays a pivotal role in shaping collective outcomes. This importance contrasts with the fact that many models still assume that individual-level properties and actions are fully deterministic, consistent, accurate, and certain.

For example, opinion dynamics models like the Bounded Confidence model (Deffuant et al., 2000; Hegselmann & Krause, 2002) and the Dissemination of Culture model (Axelrod, 1997) both illustrate how global diversity (or ‘polarization’) emerges from local homogeneity (or ‘consensus’). But this core principle is highly dependent on the absence of noise! The persistence of different cultural areas completely collapses under even the smallest probability of differentiation (Klemm et al., 2003; Flache & Macy, 2011), and fragmentation and polarization become unlikely when agents sometimes independently change their opinions (Pineda, Toral, & Hernández-García, 2011). Similarly, adding noise to the topology of connections can drastically change the dynamics of diffusion and contagion (Centola & Macy, 2007). In computational, agent-based models of social systems, noise does not necessarily cancel out. Many social processes are complex, path-dependent, computationally irreducible and highly non-linear. As such, noise can trigger cascades of ‘errors’ that lead to statistically and qualitatively different behaviors (Macy & Tsvetkova, 2015).
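To make this concrete, here is a minimal, self-contained Deffuant-style bounded confidence sketch with an optional exogenous-noise term. It is our own toy version with invented parameter values, not the model of any of the cited studies, and the simple gap-based cluster count is ours as well:

```python
# Minimal Deffuant-style bounded confidence sketch (illustrative only).
# Two random agents interact each step and move toward each other if
# their opinions differ by less than epsilon; optional exogenous noise
# perturbs every opinion every step.
import random

def simulate(n=100, steps=20000, epsilon=0.2, mu=0.5, noise=0.0, seed=1):
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]          # opinions on [0, 1]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(x[i] - x[j]) < epsilon:
            shift = mu * (x[j] - x[i])
            x[i] += shift                          # both agents converge
            x[j] -= shift                          # toward the midpoint
        if noise > 0:
            x = [min(1.0, max(0.0, v + rng.gauss(0, noise))) for v in x]
    return x

def n_clusters(x, gap=0.1):
    # Crude cluster count: sorted opinions separated by more than `gap`.
    xs = sorted(x)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > gap)

print(n_clusters(simulate(noise=0.0)))    # noiseless: opinions freeze into clusters
print(n_clusters(simulate(noise=0.02)))   # same model with a little exogenous noise
```

Comparing the two runs (and sweeping `noise` and `seed`) is exactly the kind of analysis the cited results call for: whether the noisy run fragments, converges, or wanders depends sensitively on where and how strongly the noise enters.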

What is noise? What is it not?

There is no single way to introduce noise, or to identify and define a source of noise. Noise comes in different shapes and forms. When introducing noise into a model of social phenomena, there are some important lessons to consider:

  1. Noise is not just unpredictable randomness. Instead, noise often represents uncertainty (Macy & Tsvetkova, 2015), which can mean a lack of precision in measurements, ambiguity, or inconsistency. Heteroskedasticity—or the fact that the variance of noise depends on the context—is more than a nuisance in statistical estimation. In ABM research in particular, the variance of uncertainty can be a source of nonlinearity. As such, introducing noise into a model should not be equated merely with ‘running the simulations with different random seeds’ or ‘drawing agent attributes from a normal distribution’.
  2. Noise enters social systems in many ways and in every aspect of the system. This includes noisy observations of others or the environment (Gigerenzer & Brighton, 2009), noisy transmission of signals (McMahan & Evans, 2018), noisy application of heuristics (Mäs & Nax, 2016), noisy interaction patterns (Lloyd-Smith et al., 2005), heterogeneity across societies and across individuals (Kahneman, Sibony, & Sunstein, 2021), and inconsistencies over time (Vul & Pashler, 2008). This is crucial because noise representing different forms and sources of uncertainty or randomness can affect social phenomena such as social influence, consensus, and polarization in quite distinct ways (as we outline in the next section).
  3. Noise can be adaptive and heterogeneous across individuals. Noise is not a passive property of a system, but can be a context-dependent, dynamically adapted strategy (Frankenhuis, Panchanathan, & Smaldino, 2023). For example, some individuals tend to be more covert and less precise than others, for instance when they perceive themselves to be in a minority (Smaldino & Turner, 2022). Some situations require individuals to be noisy or unpredictable, like taking a penalty in soccer; others less so, like writing an online dating ad. People need to decide on, and adapt, the degree of noise in their social signals. Smaldino et al. (2023) highlighted that all strategies that lead collectives to perform well in solving complex tasks depend in some way on maintaining (but also adapting to) a certain level of transient noise.
  4. Noise is itself a signal. There are famous examples of institutions or individuals using noise signaling to spread doubt and uncertainty in debates about climate change or the health effects of smoking (see ‘Merchants of Doubt’ by Oreskes & Conway, 2010). Such actors signal noise to diffuse and discredit valuable information. One could certainly argue that Trump’s noisy stance on tariffs also falls into this category. 

In short, noise represents meaningful, multi-faceted, adaptive, and strategic aspects of a system. In social systems—which are, by definition, systems of interdependence—noise is essential to understanding the system. As Macy and Tsvetkova (2015) put it: ‘strip away the noise and you may strip away the explanation’.

A taxonomy of noise in opinion dynamics

In our paper ‘Noise and opinion dynamics’, published last year in Royal Society Open Science, we examined whether and how different sources of noise affect the results of a model of opinion dynamics (Steiglechner et al., 2024). The model builds on the bounded confidence model by Deffuant et al. (2000), calibrated on a survey measuring environmental attitudes. We identified at least four types of noise in a system of opinion formation through dyadic social influence: exogenous noise, selectivity noise, adaptation noise, and ambiguity noise.

Figure 1. Sources of noise in social influence models for opinion dynamics (adapted from Steiglechner et al., 2024)

Each type of noise in our taxonomy enters at a different stage of the interaction process (as shown in Figure 1). Ambiguity and adaptation noise depend on the current attitudes of the sender and the receiver, respectively, whereas selectivity noise acts on the connections between individuals. Exogenous noise is a ‘catch-all’ category of noise added to the agent’s attributes regardless of the (success of the) interaction. Some of these types of noise may have similar effects on population-level opinion dynamics in the thermodynamic limit (Nugent, Gomes, & Wolfram, 2024). But they can lead to quite different trajectories, and different conclusions about the effect of noise, in finite-size, finite-time simulations.
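As an illustration of the taxonomy only, the following sketch marks where each of the four noise types could enter a single dyadic influence step. It is our paraphrase of Figure 1, not the calibrated model of Steiglechner et al. (2024); the parameter names, the homophilous baseline partner choice, and all magnitudes are our own simplifications:

```python
# Where the four noise types could enter one dyadic influence step
# (illustrative paraphrase of the taxonomy; not the published model).
import random

rng = random.Random(0)

def influence_step(x, epsilon=0.2, mu=0.5,
                   s_select=0.0, s_ambig=0.0, s_adapt=0.0, s_exo=0.0):
    n = len(x)
    i = rng.randrange(n)
    # Selectivity noise: randomness in who interacts with whom.
    # Baseline here is homophilous (nearest opinion); with probability
    # s_select the receiver meets a random partner instead.
    if rng.random() < s_select:
        j = rng.randrange(n)
    else:
        j = min((k for k in range(n) if k != i), key=lambda k: abs(x[k] - x[i]))
    # Ambiguity noise: the sender's expressed opinion is a noisy signal.
    signal = x[j] + rng.gauss(0, s_ambig)
    # Bounded confidence: influence only within the threshold.
    if abs(signal - x[i]) < epsilon:
        # Adaptation noise: the receiver updates imprecisely.
        x[i] += mu * (signal - x[i]) + rng.gauss(0, s_adapt)
    # Exogenous noise: drift regardless of the interaction.
    x = [v + rng.gauss(0, s_exo) for v in x]
    return [min(1.0, max(0.0, v)) for v in x]

opinions = [rng.random() for _ in range(50)]
for _ in range(5000):
    opinions = influence_step(opinions, s_ambig=0.05)
```

Switching on one `s_*` parameter at a time is one way to probe the distinct signatures described below: the same nominal amount of noise lands at a different point in the interaction and can therefore drive quite different population-level dynamics.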

Previous work had established that even small amounts of noise can affect the tendency of the bounded confidence model to produce complete consensus or multiple fragmented, internally coherent groups, but our results highlight that different types of noise can have quite distinct signatures. For example, while selectivity noise always increases the coherence of groups, only intermediate levels of exogenous noise unite individuals. Moreover, exogenous noise leads to convergence because it destroys the internal coherence within the fragmented groups, whereas selectivity noise leads to convergence because it connects polarized individuals across these groups. Ambiguity noise has yet another signature. For example, while low levels of ambiguity have no effect on fragmentation (similar to exogenous and adaptation noise), intermediate and even high levels of ambiguity can produce a somewhat coherent majority-group (similar to selectivity noise). More importantly, ambiguity noise also produces drift: a gradual shift in the average opinion toward a more extreme position (Steiglechner et al., 2024). This is a remarkable result, because not only does ambiguous messaging alter the robustness of the clean, noiseless model, it actually produces a novel type of extremization using only positive influence!

Make some noise!

The above taxonomy is, of course, only a starting point for further discussion: it is not comprehensive and does not take adaptiveness or strategy into account. But this variety of effects of the different types of noise on consensus, polarization, and social influence should already make us more aware of noise in general—not just as an ‘afterthought’ or a robustness check, but as a modeling choice that represents a critical component of the model. Many modeling studies do consider how noise can affect model outputs, but it matters—a lot—where and how the noise is introduced (see also De Sanctis & Galla, 2009).

Noise is an essential aspect of human behavior, social systems, and politics, as Trump’s back-and-forth on tariffs illustrates quite effectively these days. When studying social phenomena such as opinion formation and polarization, we should take the effects of noise as seriously as the effects of behavioral biases or heuristics (Kahneman, Sibony, & Sunstein, 2021). That is, while we social systems modelers tend to spend a lot of time formulating, justifying, and analyzing the behavioral rules of individuals—generally considered the core of the model—we should devote more time to formalizing what kind of noise enters the modeled system, where, and how, and to analyzing how this affects the dynamics (as also argued in the exchange of letters between Kahneman et al. and Krakauer & Wolpert, 2022). Noise is a meaningful, multi-faceted, adaptive, and strategic component of social systems. Rather than ‘just a robustness check’, it is a fundamental ingredient of the modeled system—a type of behavior in itself—and, thus, an object of study in its own right. This is a call to all modelers (in the house) to make some noise!

Acknowledgments

We thank Victor Møller Poulsen and Paul E. Smaldino for their feedback.

References

Axelrod, R. (1997). The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution, 41(2), 203–226. https://doi.org/10.1177/0022002797041002001

Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. American Journal of Sociology, 113(3), 702–734. https://doi.org/10.1086/521848

Deffuant, G., Neau, D., Amblard, F., & Weisbuch, G. (2000). Mixing beliefs among interacting agents. Advances in Complex Systems, 3(1–4), 87–98. https://doi.org/10.1142/S0219525900000078

De Sanctis, L., & Galla, T. (2009). Effects of noise and confidence thresholds in nominal and metric Axelrod dynamics of social influence. Physical Review E, 79(4), 046108. https://doi.org/10.1103/PhysRevE.79.046108

Flache, A., & Macy, M. W. (2011). Local convergence and global diversity: From interpersonal to social influence. Journal of Conflict Resolution, 55(6), 970–995. https://doi.org/10.1177/0022002711414371

Frankenhuis, W. E., Panchanathan, K., & Smaldino, P. E. (2023). Strategic ambiguity in the social sciences. Social Psychological Bulletin, 18, e9923. https://doi.org/10.32872/spb.9923

Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1), 107–143. https://doi.org/10.1111/j.1756-8765.2008.01006.x

Hegselmann, R., & Krause, U. (2002). Opinion dynamics and bounded confidence models, analysis and simulation. Journal of Artificial Societies and Social Simulation, 5(3). http://jasss.soc.surrey.ac.uk/5/3/2.html

Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A flaw in human judgment. New York: Little, Brown Spark.

Kahneman, D., Krakauer, D. C., Sibony, O., Sunstein, C., & Wolpert, D. (2022). An exchange of letters on the role of noise in collective intelligence. Collective Intelligence, 1(1). https://doi.org/10.1177/26339137221078593

Klemm, K., Eguíluz, V. M., Toral, R., & Miguel, M. S. (2003). Global culture: A noise-induced transition in finite systems. Physical Review E, 67(4), 045101. https://doi.org/10.1103/PhysRevE.67.045101

Lloyd-Smith, J. O., Schreiber, S. J., Kopp, P. E., & Getz, W. M. (2005). Superspreading and the effect of individual variation on disease emergence. Nature, 438(7066), 355–359. https://doi.org/10.1038/nature04153

Macy, M., & Tsvetkova, M. (2015). The signal importance of noise. Sociological Methods & Research, 44(2), 306–328. https://doi.org/10.1177/0049124113508093

Mäs, M., & Nax, H. H. (2016). A behavioral study of “noise” in coordination games. Journal of Economic Theory, 162, 195–208. https://doi.org/10.1016/j.jet.2015.12.010

McMahan, P., & Evans, J. (2018). Ambiguity and engagement. American Journal of Sociology, 124(3), 860–912. https://doi.org/10.1086/701298

Nugent, A., Gomes, S. N., & Wolfram, M.-T. (2024). Bridging the gap between agent based models and continuous opinion dynamics. Physica A: Statistical Mechanics and its Applications, 651, 129886. https://doi.org/10.1016/j.physa.2024.129886

Oreskes, N., & Conway, E. M. (2010). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. New York: Bloomsbury Press.

Page, B. I. (1976). The theory of political ambiguity. American Political Science Review, 70(3), 742–752. https://doi.org/10.2307/1959865

Pineda, M., Toral, R., & Hernández-García, E. (2011). Diffusing opinions in bounded confidence processes. The European Physical Journal D, 62(1), 109–117. https://doi.org/10.1140/epjd/e2010-00227-0

Smaldino, P. E., & Turner, M. A. (2022). Covert signaling is an adaptive communication strategy in diverse populations. Psychological Review, 129(4), 812–829. https://doi.org/10.1037/rev0000344

Smaldino, P. E., Moser, C., Pérez Velilla, A., & Werling, M. (2023). Maintaining transient diversity is a general principle for improving collective problem solving. Perspectives on Psychological Science, Advance online publication. https://doi.org/10.1177/17456916231180100

Steiglechner, P., Keijzer, M. A., Smaldino, P. E., Moser, D., & Merico, A. (2024). Noise and opinion dynamics: How ambiguity promotes pro-majority consensus in the presence of confirmation bias. Royal Society Open Science, 11(4), 231071. https://doi.org/10.1098/rsos.231071

Vul, E., & Pashler, H. (2008). Measuring the crowd within: Probabilistic representations within individuals. Psychological Science, 19(7), 645–647. https://doi.org/10.1111/j.1467-9280.2008.02136.x


Steiglechner, P. & Keijzer, M. (2025) Make some noise! Why agent-based modelers should embrace the power of randomness. Review of Artificial Societies and Social Simulation, 30 May 2025. https://rofasss.org/2025/05/31/noise


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Quantum computing in the social sciences

By Emile Chappin and Gary Polhill

The dream

What could quantum computing mean for the computational social sciences? Although quantum computing is at an early stage, this is the right time to dream about precisely that question for two reasons. First, we need to keep the computational social sciences ‘in the conversation’ about use cases for quantum computing to ensure our potential needs are discussed. Second, thinking about how quantum computing could affect the way we work in the computational social sciences could lead to interesting research questions, new insights into social systems and their uncertainties, and form the basis of advances in our area of work.

At first glance, quantum computing and the computational social sciences seem unrelated. Computational social science uses computer programs written in high-level languages to explore the consequences of assumptions, deriving macro-level system patterns from coded rules for micro-level behaviour (e.g., Gilbert, 2007). Quantum computing is in an early phase, with the state of the art on the order of hundreds of qubits [1],[2], and a wide range of applications envisioned (Hassija, 2020), e.g., in physics (Di Meglio et al., 2024) and drug discovery (Blunt et al., 2022). The programming of quantum computers is therefore also in an early phase. Major companies (e.g., IBM, Microsoft, Alphabet, Intel, Rigetti Computing) are investing heavily and have set high expectations – though how much of this is hyperbole to attract investors and how much is backed up by substance remains to be seen. This makes it hard to comprehend what opportunities may come from scaling up.

Our dream is that quantum computing enables us to represent human decision-making on a much larger scale, do more justice to how decisions come about, and embrace the influences people have on each other. It would respect that people’s actual choices are undetermined until they have to act. On a philosophical level, these features are consistent with how quantum computation operates. Applying quantum computing to decision-making with interactions may help us inform or discover behavioural theory and contribute to complex systems science.

The mysticism around quantum computing

There is mysticism around what qubits are. To start thinking about how quantum computing could be relevant for computational social science, there is no direct need to understand the physics of how qubits are physically realised. However, it is necessary to understand the logic by which quantum computers operate. At the logical level, there are similarities between quantum and traditional computers.

The main similarity is that the building blocks are bits that, when measured, are either 0 or 1. A second similarity is that quantum computers work with 'instructions': quantum 'processors' alter the state of the bits in a 'memory' using programs that comprise sequences of 'instructions' (e.g., Sutor, 2019).

There are also differences: 1) qubits are programmed to have probabilities of being a zero or a one, 2) qubits have no determined value until they are measured, and 3) multiple qubits can be entangled, meaning that their values (when measured) depend on each other.
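The first two differences can be made concrete with a deliberately classical sketch of ours (not real quantum hardware): a qubit is simulated as a pair of amplitudes whose squared magnitudes give the probabilities of reading 0 or 1.

```python
import math
import random

# Classical sketch of a single qubit: two amplitudes (a, b) with
# |a|^2 + |b|^2 = 1. Reading the qubit yields 0 with probability |a|^2
# and 1 with probability |b|^2 -- before the read, its value is undetermined.

plus = (1 / math.sqrt(2), 1 / math.sqrt(2))  # equal superposition of 0 and 1

rng = random.Random(42)
counts = [0, 0]
for _ in range(10_000):
    a, b = plus
    outcome = 0 if rng.random() < abs(a) ** 2 else 1
    counts[outcome] += 1

print(counts)  # roughly [5000, 5000]
```

Repeated reads of identically prepared qubits recover the programmed probabilities, which is all a quantum program ultimately exposes.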

Operationally speaking, quantum computers are expected to augment conventional computers in a ‘hybrid’ computing environment. This means we can expect to use traditional computer programs to do everything around a quantum program, not least to set up and analyse the outcomes.

Programming quantum computers

Until now, programming languages for quantum computing have been low-level, like assembly languages for conventional machines. Quantum programs are therefore written very close to 'the hardware'. Similarly, in the early days of electronic computers, instructions for processors were programmed directly: punched cards contained machine-language instructions. Over time, computers got bigger, more was asked of them, and their use became more widespread and embedded in everyday life. At a practical level, different processors with different instruction sets, and ever-larger programs, became more and more unwieldy to write in machine language. Higher-level languages were developed, and reached a point where modellers could use them to describe and simulate dynamic systems. Our code is still ultimately translated into these lower-level instructions when we compile software, or it is interpreted at run-time. The instructions now developed for quantum computing are akin to those of the early days of conventional computing, but the development of higher-level programming languages for quantum computers may happen quickly.

At the start, qubits are put in entangled states (e.g., Sutor, 2019); the number of qubits at your disposal makes up the memory. A quantum computer program is a sequence of instructions that is followed. Each instruction alters the memory, but only by changing the probabilities of qubits being 0 or 1 and their entanglement. Instruction sets are packaged into so-called quantum circuits. The instructions operate on all qubits at the same time (you can think of this in terms of all probabilities needing to add up to 100%). This means the speed of a quantum program does not depend on the scale of the computation in number of qubits, but only on the number of instructions executed. Since qubits can be entangled, quantum computing can do calculations that would take too long to run on a normal computer.

Quantum instructions are typically their own inverse: if you execute an instruction twice, you are back at the state before the first operation. This means you can reverse a quantum program simply by executing its instructions again in reverse order. The only exception is the so-called 'read' instruction, by which each qubit's value is determined to be either 0 or 1. This is the natural end of a quantum program.
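Entanglement and reversibility can both be illustrated with a tiny classical simulation of a two-qubit register (our sketch; real quantum hardware would not expose the amplitudes like this). Two self-inverse instructions, a Hadamard and a CNOT, create an entangled state; running them again in reverse order restores the initial memory.

```python
import math

# Classical simulation of a 2-qubit register (a state vector of 4 amplitudes),
# illustrating that self-inverse instructions make quantum programs reversible.

H = 1 / math.sqrt(2)

def hadamard_q0(s):
    # Hadamard on qubit 0 (the least-significant bit of the index).
    return [H * (s[0] + s[1]), H * (s[0] - s[1]),
            H * (s[2] + s[3]), H * (s[2] - s[3])]

def cnot_q0_q1(s):
    # CNOT: flip qubit 1 when qubit 0 is 1 (swaps amplitudes of |01> and |11>).
    return [s[0], s[3], s[2], s[1]]

state = [1.0, 0.0, 0.0, 0.0]            # |00>
state = cnot_q0_q1(hadamard_q0(state))  # Bell state: (|00> + |11>) / sqrt(2)
print([round(x, 3) for x in state])     # [0.707, 0.0, 0.0, 0.707]

# Executing the same instructions in reverse order undoes the program:
state = hadamard_q0(cnot_q0_q1(state))
print([round(x, 3) for x in state])     # back to [1.0, 0.0, 0.0, 0.0]
```

Reading the Bell state would yield 00 or 11 with equal probability, never 01 or 10 – the entangled qubits' values depend on each other.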

Recent developments in quantum computing and their roadmaps

Several large companies, such as Microsoft, IBM and Alphabet, are investing heavily in developing quantum computing. The current route is to scale these computers up with respect to the number of qubits they have and the number of gates (instructions) that can be run. IBM's roadmap suggests growing to 7,500 instructions as soon as 2025[3]. At the same time, programming languages for quantum computing are being developed on the basis of the types of instructions above. Researchers can already gain access to actual quantum computers (or run quantum programs on simulated quantum hardware). For example, IBM's Qiskit[4] is one of the first open-source software development kits for quantum computing.

A quantum computer doing agent-based modelling

The exponential growth in quantum computing capacity (Coccia et al., 2024) warrants considering how it may be used in the computational social sciences. Here is a first sketch. What if there is a behavioural theory that says something about 'how' different people decide, in a specific context, on a specific behavioural action? Can we translate observed behaviour into the properties of a quantum program and explore the consequences of what we can observe? Or, in contrast, can we unravel the assumptions underneath our observations? Could we look at alternative outcomes that could also have been possible in the same system, under the same conceptualization? Given what we observe, what other system developments could have emerged that are also possible (and not highly unlikely)? Can we unfold possible pathways without brute-forcing a large experiment? These questions are, we believe, different when approached from a perspective of quantum computing. For one, the reversibility of quantum programs (until measuring) may provide unique opportunities. Doing such analyses may also inspire new kinds of social theory, or prompt reflection on the use of existing theory.

One of the early questions is how we may use qubits to represent modelled elements in social simulations. Here we sketch basic alternative routes. For each strand we include a very rudimentary application to both Schelling's model of segregation and the Traffic Basic model, both present in the NetLogo models library.

Qubits as agents

A basic option could be to represent an agent by a qubit. Thinking of one type of stylized behaviour – an action that can be taken – a qubit could represent whether that action is taken or not. Instructions in the quantum program would capture the relations between actions that can be taken by the different agents, and interventions that may affect specific agents. For Schelling's model, the measured qubits would show whether segregation takes place or not. For Traffic Basic, they would show the probability of traffic jams occurring. Scaling up would mean we could represent many interacting agents without the simulation slowing down. This is, by design, abstract and stylized, but it may help to answer whether a dynamic simulation on a quantum computer can be obtained and visualized.
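A toy version of this idea can be simulated classically (the rotation rule and all names here are our assumptions, and the agents' qubits form a product state with no entanglement yet): each agent on a ring gets a qubit whose probability of 'move' grows as its neighbourhood becomes less similar, and a single joint 'read' collapses every qubit at once.

```python
import random

# Hypothetical 'qubit per agent' sketch for a one-dimensional, ring-shaped
# Schelling-style model, simulated classically. Each agent's qubit encodes
# the probability of taking the action 'move'; reading all qubits at once
# yields one joint outcome of who moves.

agents = [0, 0, 1, 1, 0, 1, 0, 1]   # two groups of agents on a ring
THRESHOLD = 0.5

def move_probability(i):
    # The fewer similar neighbours, the further the qubit is rotated
    # towards 'move'.
    left, right = agents[i - 1], agents[(i + 1) % len(agents)]
    similar = (left == agents[i]) + (right == agents[i])
    return THRESHOLD * (2 - similar)    # 0.0, 0.5 or 1.0

rng = random.Random(1)
# 'Read': every agent's qubit collapses to move (1) or stay (0).
moves = [1 if rng.random() < move_probability(i) else 0
         for i in range(len(agents))]
print(moves)
```

On real hardware the rotations would be quantum instructions and only the final read would be observable; here the amplitudes are simply probabilities.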

Decision rules coded in a quantum computer

A second option is for an agent to perform a quantum program as part of its decision rules. The decision-making structure should then match the logic of a quantum computer. This may be a relevant ontological reference to how brains work and to some of the theory that exists on cognition and behaviour. Consider a NetLogo model with agents that have a variety of properties that get translated to a quantum program. A key function would be that the agent performs a quantum calculation on the basis of a set of inputs. The program would then capture how different factors interact and whether the agent performs specific actions, i.e., shows particular behaviour. For Schelling's segregation model, it would be the decision either to move (and in what direction) or not. For Traffic Basic it would lead to a unique conceptualization of heterogeneous agents. For such simple models, however, it would not necessarily benefit from the scale advantage that quantum computers have, because most of the computation occurs on traditional computers and the decision logic of these models is limited in scope. Rather, it invites the development of much richer and very different representations of how decisions are made by humans. Different brain functions may all be captured: memory, awareness, attitudes, considerations, etc. If one agent's decision-making structure fits in a quantum computer, experiments can already be set up, running one agent after the other (just as happens on traditional computers). And if a small, reasonable number of agents fits, one could imagine group-level developments. If not humans, this could represent companies that function together, either in a value chain or as competitors in a market. Because of this, it may be revolutionary: let's consider this as quantum agent-based modelling.
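One reason a quantum decision rule is not just a probabilistic rule is interference: in a quantum calculation, the 'reasons' for an action combine as complex amplitudes, not as probabilities. A minimal sketch of ours (the weights are arbitrary and would need normalising within a full quantum state):

```python
import cmath
import math

# Two 'reasons' for taking an action, each contributing amplitude w
# (probability 0.25 on its own). Classically, probabilities add; in a
# quantum decision rule, amplitudes add and can interfere.

w = math.sqrt(0.25)

classical = 0.25 + 0.25                          # probabilities add: 0.5

aligned = abs(w + w) ** 2                        # constructive interference
opposed = abs(w + cmath.rect(w, math.pi)) ** 2   # destructive interference

print(classical, aligned, round(opposed, 10))    # 0.5 1.0 0.0
```

Two reasons that each make the action somewhat likely can, depending on their relative phase, make it certain or rule it out entirely – a qualitatively different combination rule from anything a classical probabilistic agent can express.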

Using entanglement

Intuitively, one could consider the entanglement of qubits to represent the connections between different functions in decision-making, the dependencies between agents that would typically interact, or the effects of policy interventions on agent decisions. Entanglement of qubits could also represent the interaction of time steps, capturing path dependencies of choices and limiting or determining future options. The reverse of memory is also conceivable: what if the simulation captures some form of anticipation by entangling future options in current choices? Simulations of decisions may then be limited, myopic in their ability to forecast. Thinking through such experiments and doing the work may inspire new heuristics that represent the bounded rationality of human decision-making. For Schelling's model, local entanglement could restrict movement, or movement could be restricted because of anticipated future events, which contributes to keeping the status quo. For Traffic Basic, one could forecast traffic jams and discover heuristics to avoid them, which in turn may inspire policy interventions.

Quantum programs representing system-level phenomena

The other end of the spectrum can also be conceived. As well as observing other agents, agents could interact with a system in order to make their observations and decisions, where the system with which they interact is itself a quantum program. The system could be an environmental or physical system, for example. It would be able to have the stochastic, complex nature that real-world systems show. For some systems, problems could be represented in an innovative way. For Schelling's model, it could be a natural system with resources that agents benefit from if they are in the surroundings, the resources having their own dynamics depending on usage. For Traffic Basic, it may represent complexities in the road system that agents can account for while adjusting their speed.

Towards a roadmap for quantum computing in the social sciences

What would be needed to use quantum computation in the social sciences? What can we achieve by combining the power of high-performance computing with quantum computers when the latter scale up? Would it be possible to reinvent how we try to predict the behaviour of humans by embracing the domain of uncertainty that is also essential to how we may conceptualise cognition and decision-making? Will quantum agent-based modelling at some point be feasible? And how do the potential advantages compare to bringing quantum computing into other methods in the social sciences (e.g., choice models)?

A roadmap would include the following activities:

  • Conceptualise human decision-making and interactions in terms of quantum computing. What are promising avenues of the ideas presented here and possibly others?
  • Develop instruction sets/logical building blocks that are ontologically linked to decision-making in the social sciences. Connect to developments for higher-level programming languages for quantum computing.
  • Develop a first example. One could think of reproducing one of the traditional models: either an agent-based model, such as Schelling's model of segregation or Traffic Basic, or a cellular automaton model, such as the Game of Life. The latter may be conceptualized with a relatively small number of cells and could be a valuable demonstration of the possibilities.
  • Develop quantum computing software for agent-based modelling, e.g., as a quantum extension for NetLogo, MESA, or for other agent-based modelling packages.
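As a classical point of reference for the third activity, the Game of Life can be written in a few lines; a first quantum demonstration might aim to reproduce dynamics of this kind on a small grid (a sketch of ours):

```python
# Conway's Game of Life on a small toroidal grid: the classical baseline a
# first quantum demonstration might try to reproduce with one qubit per cell.

def step(grid):
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            live = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            # A cell is alive next step if it has exactly 3 live neighbours,
            # or is already alive with exactly 2.
            new[i][j] = 1 if live == 3 or (grid[i][j] and live == 2) else 0
    return new

# A 'blinker' oscillates with period 2.
g = [[0, 0, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0]]
print(step(step(g)) == g)  # True
```

Five-by-five grids like this need only 25 (qu)bits, which is why the Game of Life is a plausible near-term target.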

Let us become inspired to develop a more detailed roadmap for quantum computing for the social sciences. Who wants to join in making this dream a reality?

Notes

[1] https://newsroom.ibm.com/2022-11-09-IBM-Unveils-400-Qubit-Plus-Quantum-Processor-and-Next-Generation-IBM-Quantum-System-Two

[2] https://www.fastcompany.com/90992708/ibm-quantum-system-two

[3] https://www.ibm.com/roadmaps/quantum/

[4] https://github.com/Qiskit/qiskit-ibm-runtime

References

Blunt, Nick S., Joan Camps, Ophelia Crawford, Róbert Izsák, Sebastian Leontica, Arjun Mirani, Alexandra E. Moylett, et al. “Perspective on the Current State-of-the-Art of Quantum Computing for Drug Discovery Applications.” Journal of Chemical Theory and Computation 18, no. 12 (December 13, 2022): 7001–23. https://doi.org/10.1021/acs.jctc.2c00574.

Coccia, M., S. Roshani and M. Mosleh, “Evolution of Quantum Computing: Theoretical and Innovation Management Implications for Emerging Quantum Industry,” in IEEE Transactions on Engineering Management, vol. 71, pp. 2270-2280, 2024, https://doi.org/10.1109/TEM.2022.3175633.

Di Meglio, Alberto, Karl Jansen, Ivano Tavernelli, Constantia Alexandrou, Srinivasan Arunachalam, Christian W. Bauer, Kerstin Borras, et al. “Quantum Computing for High-Energy Physics: State of the Art and Challenges.” PRX Quantum 5, no. 3 (August 5, 2024): 037001. https://doi.org/10.1103/PRXQuantum.5.037001.

Gilbert, N., Agent-based models. SAGE Publications Ltd, 2007. ISBN 978-1-4129-4964-4

Hassija, V., Chamola, V., Saxena, V., Chanana, V., Parashari, P., Mumtaz, S. and Guizani, M. (2020), Present landscape of quantum computing. IET Quantum Commun., 1: 42-48. https://doi.org/10.1049/iet-qtc.2020.0027

Sutor, R. S. (2019). Dancing with Qubits: How quantum computing works and how it can change the world. Packt Publishing Ltd.


Chappin, E. & Polhill, G (2024) Quantum computing in the social sciences. Review of Artificial Societies and Social Simulation, 25 Sep 2024. https://rofasss.org/2024/09/24/quant


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Exascale computing and ‘next generation’ agent-based modelling

By Gary Polhill, Alison Heppenstall, Michael Batty, Doug Salt, Ricardo Colasanti, Richard Milton and Matt Hare

Introduction

In the past decade we have seen considerable gains in the amount of data and computational power that are available to us as scientific researchers. Whilst the proliferation of new forms of data can present as many challenges as opportunities (linking data sets, checking veracity etc.), we can now begin to construct models that are capable of answering ever more complex and interrelated questions. For example, what happens to individual health and the local economy if we pedestrianize a city centre? What is the impact of increasing travel costs on the price of housing? How can we divert economic investment from prosperous cities and regions to places in economic decline? These advances are slowly positioning agent-based modelling to support decision-makers in making informed, evidence-based decisions. However, there is still a lack of ABMs being used outside of academia, and policy makers find it difficult to mobilise and apply such tools to inform real-world problems: here we explore the background in computing that helps address the question of why such models are so underutilised in practice.

Whilst it has reached a level of maturity (defined as being an accepted tool) within the social sciences, agent-based modelling still has several methodological barriers to cross. These were first highlighted by Crooks et al. (2008) and revisited by Heppenstall et al. (2020), and include robust validation, elicitation of behaviour from data, and scaling up. Whilst other disciplines, such as meteorology, are able to conduct large numbers of simulations (ensemble modelling) using high-performance computing, there is a relative absence of this capability within agent-based modelling. Moreover, many different kinds of agent-based models are being devised; key issues concern the number and type of agents, and these are reflected in the whole computational context in which such models are developed. Clearly there is potential for agent-based modelling to establish itself as a robust policy tool, but this requires access to large-scale computing.

Exascale high-performance computing is defined with respect to speed of calculation: of the order of 10^18 (a billion billion) floating-point operations per second (flops). That is fast enough to calculate the ratios of the ages of every possible pair of people in China in roughly a second. By comparison, modern-day personal computers run at around 10^9 flops (gigascale) – a billion times slower. The same rather pointless calculation of age ratios would take just over thirty years on a standard laptop at the time of writing (2023). Though agent-based modellers are more interested in the number of agent-rule instructions executed per second than in floating-point operations, the speeds of the two are approximately the same.
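The arithmetic behind this comparison can be checked directly (a rough sketch; the population figure of 1.4 billion and the one-flop-per-ratio assumption are ours):

```python
# Back-of-the-envelope check of the age-ratio example.

population = 1.4e9                              # ~China's population
pairs = population * (population - 1) / 2       # unordered pairs: ~9.8e17

exaflops, gigaflops = 1e18, 1e9
seconds_per_year = 3600 * 24 * 365

print(pairs / exaflops)                         # ~0.98 s at exascale
print(pairs / gigaflops / seconds_per_year)     # ~31 years at gigascale
```

The nine-orders-of-magnitude gap is what turns a second of exascale work into three decades on a laptop.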

Anecdotally, the majority of simulations of agent-based models are on personal computers operating on the desktop. However, there are examples of the use of high-performance computing environments such as computing clusters (terascale) and cloud services such as Microsoft’s Azure, Amazon’s AWS or Google Cloud (tera- to peta-scale). High-performance computing provides the capacity to do more of what we already do (more runs for calibration, validation and sensitivity analysis) and/or at a larger scale (regional or sub-national scale rather than local scale) with the number of agents scaled accordingly. As a rough guide, however, since terascale computing is a million times slower than exascale computing, an experiment that currently takes a few days or weeks in a high-performance computing environment could be completed in a fraction of a second at exascale.
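A quick sanity check of this speed-up, taking an illustrative three-day terascale experiment (the duration is our assumption):

```python
# Terascale (~1e12 flops) is a million times slower than exascale (1e18 flops).

speedup = 1e18 / 1e12            # 1,000,000x
run_seconds = 3 * 24 * 3600      # an illustrative three-day experiment

print(run_seconds / speedup)     # ~0.26 s at exascale
```

A days-long calibration or sensitivity-analysis run thus compresses to well under a second.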

We are all familiar with poor user interface design in everyday computing, and in particular the frustration of waiting for hourglasses, spinning wheels and progress bars to finish so that we can get on with our work. In fact, the ‘Doherty Threshold’ (Yablonski 2020) stipulates a response time of under 400 ms between human action and computer response for best productivity. If going from 10^9 to 10^18 flops is simply a case of multiplying the speed of computation by a billion, the Doherty threshold is potentially feasible with exascale computing when applied to simulation experiments that now require very long wait times for completion.

The scale of performance of exascale computers means that there is scope to go beyond doing-more-of-what-we-already-do to thinking more deeply about what we could achieve with agent-based modelling. Could we move past some of the methodological barriers that are characteristic of agent-based modelling? What could we achieve if we had appropriate software support, and how would this affect the processes and practices by which agent-based models are built? Could we move agent-based models to having the same level of ‘robustness’ as climate models, for example? We can conceive of a productivity loop in which an empirical agent-based model is used for sequential experimentation with continual adaptation and change, with perhaps a new model emerging from these workflows to explore tangential issues. But first we need tools that help us build empirical agent-based models much more rapidly and, critically, that help us find, access and preprocess the empirical data a model will use for initialisation, and then find and affirm parameter values.

The ExAMPLER project

The ExAMPLER (Exascale Agent-based Modelling for PoLicy Evaluation in Real-time) project is an eighteen-month project funded by the Engineering and Physical Sciences Research Council to explore the software, data and institutional requirements to support agent-based modelling at exascale.

With high-performance computing use not being commonplace in the agent-based modelling community, we are interested in finding out what the state of the art is in high-performance computing use by agent-based modellers, undertaking a systematic literature review to assess the community’s ‘exascale-readiness’. This is not just a question of whether the community has the necessary technical skills to use the equipment. It is also a matter of whether the hardware is appropriate to the computational demands that agent-based modellers have, whether the software in which agent-based models are built can take advantage of the hardware, and whether the institutional processes by which agent-based modellers access high-performance computing – especially with respect to the information requested of applicants – are aware of their needs.

We will then benchmark the state of the art against high-performance computing use in other domains of research: ecology and microsimulation, which are comparable to agent-based social simulation (ABSS); and fields such as transportation, land use and urban econometric modelling that are not directly comparable to ABSS, but have similar computational challenges (e.g. having to simulate many interactions, needing to explore a vast uncharted parameter space, containing multiple qualitatively different outcomes from the same initial conditions, and so on). Ecology might not simulate agents with decision-making algorithms as computationally demanding as some of those used by agent-based modellers of social systems, while a crude characterisation of microsimulation work is that it does not simulate interactions among heterogeneous agents, which affects the parallelisation of simulating them. Land use and transport models usually rely on aggregates of agents, but increasingly these are being disaggregated to finer and finer spatial units, with the units themselves being treated more like agents. The ‘discipline-to-be-decided’ might have a community with generally higher technical computing skills than would be expected among social scientists. Benchmarking would allow us to gain better insights into the specific barriers faced by social scientists in accessing high-performance computing.

Two other strands of work in ExAMPLER feature significant engagement with the agent-based modelling community. The project’s imaginary starting point is a computer powerful enough to experiment with an agent-based model that runs in fractions of a second. With a pre-existing agent-based model, we could use such a computer in a one-day workshop to enable a creative discussion with decision-makers about how to handle problems and policies associated with an emerging crisis. But what if we had the tools at our disposal to gather and preprocess data and build models such that these activities could also be achievable in the same day? Or even the same hour? Some of our land use and transportation models are already moving in this direction (Horni, Nagel, and Axhausen, 2016). Agent-based modelling would thus become a social activity that facilitates discussion and decision-making that is mindful of complexity and cascading consequences. The practices and procedures associated with building an agent-based model would then have evolved significantly from what they are now, as would the institutions built around accessing and using high-performance computing.

The first strand of work co-constructs with the agent-based modelling community various scenarios by which agent-based modelling is transformed by the dramatic improvements in computational power that exascale computing entails. These visions will be co-constructed primarily through workshops, the first of which is being held at the Social Simulation Conference in Glasgow – a conference that is well-attended by the European (and wider international) agent-based social simulation community. However, we will also issue a questionnaire to elicit views from the wider community of those who cannot attend one of our events. There are two purposes to these exercises: to understand the requirements of the community and their visions for the future, but also to advertise the benefits that exascale computing could have.

In a second series of workshops, we will develop a roadmap for exascale agent-based modelling that identifies the institutional, scientific and infrastructure support needed to achieve the envisioned exascale agent-based modelling use-cases. In essence, what do we need to have in place to make exascale a reality for the everyday agent-based modeller? This activity is underpinned by training ExAMPLER’s research team in the hardware, software and algorithms that can be used to achieve exascale computation more widely. That knowledge, together with the review of the state of the art in high-performance computing use with agent-based models, can be used to identify early opportunities for the community to make significant gains (Macal and North, 2008).

Discussion

Exascale agent-based modelling is not simply a case of providing agent-based modellers with usernames and passwords on an exascale computer and letting them run their models on it. There are many institutional, scientific and infrastructural barriers that need to be addressed.

On the scientific side, exascale agent-based modelling could be potentially revolutionary in transforming the practices, methods and audiences for agent-based modelling. As a highly diverse community, methodological development is challenged both by the lack of opportunity to make it happen, and by the sheer range of agent-based modelling applications. Too much standardization and ritualized behaviour associated with ‘disciplining’ agent-based modelling risks some of the creative benefits of having the cross-disciplinary discussions that agent-based modelling enables us to have. Nevertheless, it is increasingly clear that off-the-shelf methods for designing, implementing and assessing models are ill-suited to agent-based modelling, or – especially in the case of the last of these – fail to do it justice (Polhill and Salt 2017, Polhill et al. 2019). Scientific advancement in agent-based modelling is predicated on having the tools at our disposal to tell the whole story of its benefits, and enabling non-agent-based modelling colleagues to understand how to work with the ABM community.

Hence, hardware is only a small part of the story of the infrastructure supporting exascale agent-based modelling. Exascale computers are built using GPUs (Graphical Processing Units) – which, bluntly speaking, are specialized computing engines for performing matrix calculations and ‘drawing millions of triangles as quickly as possible’ – and are, in any case, different from CPU-based computing. In Table 4 of Kravari and Bassiliades’ (2015) survey of agent-based modelling platforms, only two of the 24 platforms reviewed (Cormas – Bommel et al. 2016 – and GAMA – Taillandier et al. 2019) are not listed as involving Java and/or the Java Virtual Machine. (As it turns out, GAMA does use Java.) TornadoVM (Papadimitriou et al. 2019) is one tool allowing Java Virtual Machines to run on GPUs. Even if we can then run NetLogo on a GPU, specialist GPU-based agent-based modelling platforms such as Richmond et al.’s (2010, 2022) FLAME GPU may be preferable in order to make best use of the highly parallelized computing environment on GPUs.

Such software merely gets an agent-based model running on an exascale computer. Realizing some of the visions of future exascale-enabled agent-based modelling requires rather more in the way of software support. For example, the one-day workshop in which an agent-based model is co-constructed with stakeholders either asks a great deal of the developers in terms of building a bespoke application in tens of minutes, or requires stakeholders to trust pre-constructed modular components that can be brought together rapidly using a specialist software tool.

As has been noted (e.g. Alessa et al. 2006, para 3.4), agent-based modelling is already challenging for social scientists without programming expertise, and GPU programming is a highly specialized domain in the world of software environments. Exascale computing intersects GPU programming with high-performance computing; issues with the ways in which high-performance computing clusters are typically administered make access to them a significant obstacle for agent-based modellers (Polhill 2022). There are therefore institutional barriers that need to be broken down for the benefits of exascale agent-based modelling to be realized in a community primarily interested in the dynamics of social and/or ecological complexity, and rather less in the technology that enables them to pursue that interest. ExAMPLER aims to provide us with a voice that gets our requirements heard so that we are not excluded from taking best advantage of advanced development in computing hardware.

Acknowledgements

The ExAMPLER project is funded by the EPSRC under grant number EP/Y008839/1.  Further information is available at: https://exascale.hutton.ac.uk

References

Alessa, L. N., Laituri, M. and Barton, M. (2006) An “all hands” call to the social science community: Establishing a community framework for complexity modeling using cyberinfrastructure. Journal of Artificial Societies and Social Simulation 9 (4), 6. https://www.jasss.org/9/4/6.html

Bommel, P., Becu, N., Le Page, C. and Bousquet, F. (2016) Cormas: An agent-based simulation platform for coupling human decisions with computerized dynamics. In Kaneda, T., Kanegae, H., Toyoda, Y. and Rizzi, P. (eds.) Simulation and Gaming in the Network Society. Translational Systems Sciences 9, pp. 387-410. doi:10.1007/978-981-10-0575-6_27

Crooks, A. T., C. J. E. Castle, and M. Batty. (2008). “Key Challenges in Agent-Based Modelling for Geo-spatial Simulation.” Computers, Environment and Urban Systems  32(6),  417– 30.

Heppenstall A, Crooks A, Malleson N, Manley E, Ge J, Batty M. (2020). Future Developments in Geographical Agent-Based Models: Challenges and Opportunities. Geographical Analysis. 53(1): 76 – 91 doi:10.1111/gean.12267

Horni, A, Nagel, K and Axhausen, K W. (eds)(2016) The Multi-Agent Transport Simulation MATSim, Ubiquity Press, London, 447–450

Kravari, K. and Bassiliades, N. (2015) A survey of agent platforms. Journal of Artificial Societies and Social Simulation 18 (1), 11. https://www.jasss.org/18/1/11.html

Macal, C. M., and North, M. J. (2008) Agent-Based Modeling And Simulation for EXASCALE Computing, http://www.scidac.org

Papadimitriou, M., Fumero, J., Stratikopoulos, A. and Kotselidis, C. (2019) Towards prototyping and acceleration of Java programs onto Intel FPGAs. Proceedings of the 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). doi:10.1109/FCCM.2019.00051

Polhill, G. (2022) Antisocial simulation: using shared high-performance computing clusters to run agent-based models. Review of Artificial Societies and Social Simulation, 14 Dec 2022. https://rofasss.org/2022/12/14/antisoc-sim

Polhill, G. and Salt, D. (2017) The importance of ontological structure: why validation by ‘fit-to-data’ is insufficient. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity (2nd edition), pp. 141-172. doi:10.1007/978-3-319-66948-9_8

Polhill, J. G., Ge, J., Hare, M. P., Matthews, K. B., Gimona, A., Salt, D. and Yeluripati, J. (2019) Crossing the chasm: a ‘tube-map’ for agent-based simulation of policy scenarios in spatially-distributed systems. Geoinformatica 23, 169-199. doi:10.1007/s10707-018-00340-z

Richmond, P., Chisholm, R., Heywood, P., Leach, M. and Kabiri Chimeh, M. (2022) FLAME GPU (2.0.0-rc). Zenodo. doi:10.5281/zenodo.5428984

Richmond, P., Walker, D., Coakley, S. and Romano, D. (2010) High performance cellular level agent-based simulation with FLAME for the GPU. Briefings in Bioinformatics 11 (3), 334-347. doi:10.1093/bib/bbp073

Taillandier, P., Gaudou, B., Grignard, A.,Huynh, Q.-N., Marilleau, N., P. Caillou, P., Philippon, D. and Drogoul, A. (2019). Building, composing and experimenting complex spatial models with the GAMA platform. Geoinformatica 23 (2), 299-322, doi:10.1007/s10707-018-00339-6

Yablonski, J. (2020) Laws of UX. O’Reilly. https://www.oreilly.com/library/view/laws-of-ux/9781492055303/


Polhill, G., Heppenstall, A., Batty, M., Salt, D., Colasanti, R., Milton, R. and Hare, M. (2023) Exascale computing and ‘next generation’ agent-based modelling. Review of Artificial Societies and Social Simulation, 9 Mar 2023. https://rofasss.org/2023/09/29/exascale-computing-and-next-gen-ABM


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Agent-based Modelling as a Method for Prediction for Complex Social Systems – a review of the special issue

International Journal of Social Research Methodology, Volume 26, Issue 2.

By Oswaldo Terán

Escuela de Ciencias Empresariales, Universidad Católica del Norte, Coquimbo, Chile

This special issue appeared following a series of articles in RofASSS regarding the polemic around Agent-Based Modelling (ABM) prediction (https://rofasss.org/tag/prediction-thread/). As expected, the articles in the special issue complement and expand upon the initial RofASSS discussion.

The goal of the special issue is to explore a wide range of positions regarding ABM prediction, encompassing methodological, epistemic and pragmatic issues. Contributions range from moderately sceptical and pragmatic positions to strongly sceptical ones. Moderately sceptical views argue that ABM can cautiously be employed for prediction, sometimes as a complement to other approaches, acknowledging its somewhat peripheral role in social research. Conversely, strongly sceptical positions contend that, in general, ABM cannot be utilized for prediction. Several factors are instrumental in distinguishing and understanding these positions with respect to ABM prediction, especially the following:

  • the conception of prediction;
  • the complexity of modelled systems and models: this encompasses factors such as multiple views (or perspectives), uncertainty, self-organization, self-production, emergence, structural change, and data incompleteness. These complexities are associated with the limitations of our language and tools to comprehend and symmetrically model complex systems.

Considering these factors, we will summarize the diverse positions presented in this special issue. Then, we will delve into the notions of prediction and complexity and briefly situate each position within the framework provided by these definitions.

Elsenbroich and Polhill (2023) (Editorial) summarize the diverse positions in the special issue regarding prediction, categorizing them into three groups: 1) positive, a position that assumes that “all we need for prediction is to have the right data, methods and mechanism” (p. 136); 2) pragmatic, a position advocating cautious use of ABM to attempt prediction, often to complement other approaches and avoid exclusive reliance on them; and 3) sceptical, a position arguing that ABM cannot be used for prediction but can serve other purposes. The authors place this discussion in a broader context, considering other relevant papers on ABM prediction. They acknowledge the challenge of prediction in complex systems, citing factors such as multiple perspectives, asynchronous agent actions, emergence, nonlinearity, non-ergodicity, evolutionary dynamics and heterogeneity. They indicate that some of these factors are well managed in ABM, but not others, notably “multiple perspectives/views”. Uncertainty is another critical element affecting ABM prediction, along with the relationship between prediction and explanation. The authors provide a summary of the debate surrounding the possibilities of prediction and its relation to explanation, incorporating insightful views from external sources (e.g., Thompson & Derr, 2009; Troitzsch, 2009). They also highlight recent developments in this debate, noting that ABM has evolved into a more empirical and data-driven approach, deeply focused on modelling complex social and ecological systems, including Geographical Information Systems data and real-time data integration, leading to a more contentious discussion regarding empirical, data-driven ABM prediction.

Chattoe-Brown (2023) supports the idea that ABM prediction is possible. He argues for the utility of using ABM not only to predict real-world outcomes but also to predict models. He also advocates using prediction to examine predictive failure and to assess predictions. His notion of prediction is supported by key elements of prediction in social science derived from real research across disciplines: for instance, the need to adopt a conceptual approach to enhance our comprehension of the various facets of prediction, the functioning of diverse prediction approaches, and the need for clear thinking about temporal logic. Chattoe-Brown states that he attempts to make prediction intelligible rather than to assess whether it is successful. He supports the idea that ABM prediction is useful for coherent social science. He contrasts ABM with other modelling methods that predict on trend data alone, underscoring the advantages of ABM. From his position, ABM prediction can add value to other research, taking a somewhat secondary role.

Dignum (2023) defends the ability of ABM to make predictions while distinguishing the usefulness of a prediction from the truth of a prediction. He argues in favour of limited prediction in specific cases, especially when human behaviour is involved. He presents predictions alongside explanations of the predicted behaviour, which arise under specific constraints that define particular scenarios. His view is moderately positive, suggesting that prediction is possible under certain specific conditions, including a stable environment and sufficient available data.

Carpentras and Quayle (2023) call for improved agent specification to reduce distortions when using psychometric instruments, particularly in measurements of political opinion within ABM. They contend that the quality of prediction and validation depends on the scale of the system but acknowledge the challenges posed by the high complexity of the human brain, which is central to their study. Furthermore, they raise concerns about representativeness, especially considering the discrepancy between certain theoretical frameworks (e.g., opinion dynamics) and survey data.

Anzola and García-Díaz (2023) advocate for better criteria to judge prediction and a more robust framework for the practice of prediction, to better coordinate efforts within the research community (helping to better contextualize needs and expectations). They hold a somewhat sceptical position, suggesting that prediction typically serves an instrumental role in scientific practices, subservient to other epistemic goals.

Elsenbroich and Badham (2023) adopt a somewhat negative and critical stance toward using ABM for prediction, asserting that ABM can improve forecasting but not provide definite predictions of specific future events. ABM can only generate coherent extrapolations from a certain initialization of the model and a set of assumptions. They argue that ABM generates “justified stories” based on internal coherence, mechanisms and consistency with empirical evidence, but these must not be confused with precise predictions. They call for ABM to be supported jointly by theoretical developments and external data.

Edmonds (2023) is the most sceptical regarding the use of ABM for prediction, contending that the motivation for prediction in ABM is a desire without evidence of its achievability. He highlights inherent obstacles to prediction in complex social and ecological systems, including incompleteness, chaos, context specificity, and more. In his perspective, it is essential to establish the distinction between prediction and explanation. He advocates recognizing the various potential applications of ABM beyond prediction, such as description, explanation, analogy, and more. For Edmonds, prediction should entail generating data that is unknown to the modellers. To address the ongoing debate and the weakness of current practices in ABM prediction, Edmonds proposes a process of iterative and independent verification. However, this approach faces limitations due to the incomplete understanding of the underlying processes that should be modelled, and due to the requirement for high-quality, relevant data. Despite these challenges, Edmonds suggests that prediction could prove valuable in meta-modelling, particularly to better comprehend our own simulation models.

The diverse positions on ABM prediction summarized above can be better understood through the lenses of Troitzsch’s notion of prediction and McNabb’s descriptions of complex and complicated systems. Troitzsch (2009) distinguishes between prediction and explanation by using three possible conceptions of prediction. The typical understanding of ABM prediction closely aligns with Troitzsch’s third definition of prediction, which answers the following question:

Which state will the target system reach in the near future, again given parameters and previous states which may or may not have been precisely measured?

The answer to this question results in a prediction, which can be either stochastic or deterministic. In our view, explanations encompass a broader range of statements than predictions. An explanation entails a wider scope, including justifications, descriptions, and reasons for various real or hypothetical scenarios. Explanation is closely tied to a fundamental aspect of human communication capacity, signifying the act of making something plain, clear or comprehensible by elaborating its meaning. But what precisely does it expand or elaborate? It expands a specific identification, opinion, judgement or belief. In general, a prediction implies a much narrower and more precise statement than an explanation, often hinting at possibilities regarding future events.

Several factors influence complex systems, including self-organization, multiple views, and dynamic complexity as defined by McNabb (2023a-c). McNabb contends that in complex systems the interactions among components, and between the system as a whole and its environment, transcend the insights derived from a mere analysis of components. Two central characteristics of complex systems are self-organization and emergence. It is important to distinguish between complex systems and complicated systems: complex systems are organic systems (comprising biological, psychological and social systems), whereas complicated systems are mechanical systems (e.g., aeroplanes, computers, and ABMs). The challenge of agency arises primarily in complex systems, marked by highly uncertain behaviour. Relationships within self-organized systems exhibit several noteworthy properties, although, given the need for a concise discussion regarding ABM prediction, we will consider here only a few of them (McNabb, 2023a-c):

  1. Multiple views,
  2. Dynamic interactions (connections among components change over time),
  3. Non-linear interactions (small causes can lead to unpredictable effects),
  4. The system lacks static equilibrium (instead, it maintains a dynamic equilibrium and remains unstable),
  5. Understanding the current state necessitates examining its history (a diachronic, not synchronic, study is essential).

Given the possibility of multiple views, complex systems are prone to significant structural change due to dynamic and non-linear interactions, dynamic equilibrium and diachronic evolution. Additionally, the probability of possessing both the right change mechanism (the logical process) and complete data (addressing the challenge of data incompleteness) required to initialize the model and establish necessary assumptions is exceedingly low. Consequently, predicting outcomes in complex systems (defined as organic systems), whether using ABM or alternative mechanisms, becomes nearly impossible. If such prediction does occur, it typically happens under highly specific conditions, such as within a brief time frame and controlled settings, often amounting to a form of coincidental success. Only after the expected event or outcome materializes can we definitively claim that it was predicted. Although prediction remains a challenging endeavour in complex systems, it remains viable in complicated systems. In complicated systems, prediction serves as an answer to Troitzsch’s aforementioned question.

Taking into account Troitzsch’s notion of prediction and McNabb’s ideas on complex systems and complicated systems, let’s briefly revisit the various positions presented in this special issue.

Chattoe-Brown (2023) suggests using models to predict models. Models are considered complicated rather than complex systems, so in this case we would be predicting a complicated system rather than a complex one. This represents a significant reduction.

Dignum (2023) argues that prediction is possible in cases where there is a stable environment (conditions) and sufficient available data. However, this generally is not the case, making it challenging to meet the requirements for prediction when considering complex (organic) systems.

Carpentras and Quayle (2023) themselves acknowledge the difficulties of prediction in ABM when studying issues related to psychological systems involving psychometric measures, which are a type of organic system, aligning with our argument.

Elsenbroich and Badham (2023), Elsenbroich and Polhill (2023), and Edmonds (2023) maintain a strongly sceptical position regarding ABM prediction. They argue that ABMs yield coherent extrapolations based on a specific initialization of the model and a set of assumptions, but these extrapolations are not necessarily grounded in reality. According to them, complex systems exhibit properties such as information incompleteness, multiple perspectives, emergence, evolutionary dynamics, and context specificity. In this respect, their position aligns with the stance we are presenting here.

Finally, Anzola and García-Díaz (2023) advocate for a more robust framework for prediction and recognize the ongoing debate on prediction, a stance that closely resonates with our own.

In conclusion, Troitzsch’s notion of prediction and McNabb’s descriptions of complex systems and complicated systems have helped us better understand the diverse positions on ABM prediction in the reviewed issue. This exemplifies how a good conceptual framework, in this case offered by appropriate notions of prediction and complexity, can contribute to reducing the controversy surrounding ABM prediction.

References

Anzola D. and García-Díaz C. (2023). What kind of prediction? Evaluating different facets of prediction in agent-based social simulation. International Journal of Social Research Methodology, 26(2), pp. 171-191. https://doi.org/10.1080/13645579.2022.2137919

Carpentras D. and Quayle M. (2023). The psychometric house-of-mirrors: the effect of measurement distortions on agent-based models’ predictions. International Journal of Social Research Methodology, 26(2), pp. 215-231. https://doi.org/10.1080/13645579.2022.2137938

Chattoe-Brown E. (2023). Is agent-based modelling the future of prediction? International Journal of Social Research Methodology, 26(2), pp. 143-155. https://doi.org/10.1080/13645579.2022.2137923

Dignum F. (2023). Should we make predictions based on social simulations? International Journal of Social Research Methodology, 26(2), pp. 193-206. https://doi.org/10.1080/13645579.2022.2137925

Edmonds B. (2023). The practice and rhetoric of prediction – the case in agent-based modelling. International Journal of Social Research Methodology, 26(2), pp. 157-170. https://doi.org/10.1080/13645579.2022.2137921

Edmonds, B., Polhill, G., & Hales, D. (2019). Predicting Social Systems – A Challenge. https://rofasss.org/2019/11/04/predicting-social-systems-a-challenge/

Elsenbroich C. and Polhill G. (2023). Editorial: Agent-based modelling as a method for prediction in complex social systems. International Journal of Social Research Methodology, 26(2), pp. 133-142. https://doi.org/10.1080/13645579.2023.2152007

Elsenbroich C. and Badham J. (2023). Negotiating a Future that is not like the Past. International Journal of Social Research Methodology, 26(2), pp. 207-213. https://doi.org/10.1080/13645579.2022.2137935

McNabb D. (2023a, September 20). El Paradigma de la complejidad (1/3) [Video]. YouTube. https://www.youtube.com/watch?app=desktop&v=Uly1n6tOOlA&ab_channel=DarinMcNabb

McNabb D. (2023b, September 20). El Paradigma de la complejidad (2/3) [Video]. YouTube. https://www.youtube.com/watch?v=PT2m9lkGhvM&ab_channel=DarinMcNabb

McNabb D. (2023c, September 20). El Paradigma de la complejidad (3/3) [Video]. YouTube. https://www.youtube.com/watch?v=25f7l6jzV5U&ab_channel=DarinMcNabb

Troitzsch, K. G. (2009). Not all explanations predict satisfactorily, and not all good predictions explain. Journal of Artificial Societies and Social Simulation, 12(1), 10. https://www.jasss.org/12/1/10.html


Terán, O. (2023) Agent-based Modelling as a Method for Prediction for Complex Social Systems - a review of the special issue. Review of Artificial Societies and Social Simulation, 28 Sep 2023. https://rofasss.org/2023/09/28/review-ABM-for-prediction


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Teaching highly intelligent primary school kids energy system complexity

By Emile Chappin

An energy system complexity lecture for kids?

I was invited to open the ‘energy theme’ at a primary school with a lecture on energy and wanted to give it a complexity and modelling flavour. And I wondered… can you teach this to a large group of 7-to-12-year-old children, all highly intelligent but so far apart in their development? What works in this setting, and what doesn’t? How long should I make such a lecture? Can I explain and let them feel what research is? Can I do some experiments? Can I share what modelling is? What concepts should I include? What are such kids interested in? What do they know? What would they expect? Many of these questions haunted me for some time, and I thought it would be nice to share my observations from simply going for it.

I outline my learning goals, observations from the first few minutes, approach, some later observations, and main takeaways. I end with a plea for teaching social simulation at primary schools. This initiative is part of the Special Interest Group on Education (http://www.essa.eu.org/sig/sig-education/) of the European Social Simulation Association.

Learning goals

I wanted to provide the following insights to these kids:

  • Energy is everywhere; you can feel, hear, and see it all around you. Even from outer space, you can see where cities are when you look at the earth. All activities you do require some form of energy.
  • Energy comes in different forms and can be converted into other forms.
  • Everyone likes to play games, and we can use games even to do research and perform experiments.
  • Doing research/being a researcher involves asking (sometimes odd) questions, looking very carefully at things, studying how the world works and why, and solving problems.
  • You can use computers to perform social simulations that help us think. Not necessarily to answer questions but as tools that help us think about the world, do many experiments and study their implications.

First observations

It is easy to notice that this was a rather ambitious plan. Nevertheless, I learnt very quickly that these kids knew a lot! And that they (may) question everything from every angle. They are keen to understand and eager to share what they know. I was happy I could connect with them quickly by helping them get seated and chit-chatting before the start.

My approach

I used symbols/analogies to explain deep concepts and layered the meaning, deepening the understanding layer by layer. I came back to and connected all these layers. This enables kids from different age groups to understand the lecture at their own level. An example is that I mentioned early on how, as a kid, I was interested in black holes. I explained that black holes were discovered by thinking carefully about how the universe works, and that theoretical physicists concluded there might be something like a black hole. It was decades later before a real black hole was photographed. The fact that you can imagine and reason that something may exist that you cannot (yet) observe, and that it is proven to exist much later – this is what research can be; it is incredible how this happened. Much later in the talk, I connected this to how you can use the computer to imagine, dream up, and test ideas because, in many cases, it is tough to do so in real life.

I asked many questions and listened carefully to the answers. Some answers were way off-topic, and it is essential to guide these kids enough so the story continues, but at the same time, the kids stay on board. An early question was… do you like to play games? It is so lovely to have a group of kids cheering that they want to play games! It provides a connection. Another question I asked was: what is the similarity between a wind turbine and a sheep? Kids laughed at the funny question and picture but also came up with the desired answer (they both need/convert energy). Other creative answers were that the colours were similar and the shapes had similarities. These are fun answers and also correct!

Because of these questions, kids came up with many great insights and good observations. This was astonishing. Research is looking at something carefully, like a snail. A black hole comes from a collapsing star, and our sun will collapse at some point in time. One kid knew that the object I brought was a kazoo… so I invited him to try imitating the sound of Max Verstappen’s Formula One car. And, of course, I had a few more kazoos, so we made a few reasonable attempts. I went back 5+ times during the next hour to some of these kids’ great remarks: it helped to keep connected to the kids.

I played the ‘heroes and cowards’ game (similar to the ‘heroes and cowards’ model from the NetLogo library). This was a game as well as an experiment. I announced that it only works if we all follow the rules carefully. I made the kids silently think about what would happen. It worked reasonably well: they could observe the emergent phenomenon of the group clustering and exploding, although it went somewhat roughly.

A fantastic moment was to explain the concept of validity to young kids simply by letting them experience it. I stressed the fact that following the rules was crucial for our experiment to be valid and that stumbling and running was problematic for our outcomes. It was amazing that this landed so well; I was fortunate that the circumstances allowed this.

After playing this game a couple of times, with hilarious moments and chaos, I showed how you could replicate what happened in a simulation in NetLogo. I showed that you could repeat it rapidly and do variations that would be hard to do with the kids. I even showed the essential piece of code. And I remarked that the kids on the computer did listen to me better.
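For readers curious about the underlying mechanism, the rules can be sketched in a few lines. The following Python sketch is my own illustration (parameter values and details are assumptions, not the actual NetLogo library model, which differs in details such as world wrapping): each agent secretly picks a friend and an enemy; a hero heads for the midpoint between them, while a coward tries to keep its friend between itself and its enemy.

```python
import random

# Illustrative sketch of 'heroes and cowards'-style rules. A hero moves
# towards the midpoint between its friend and its enemy; a coward moves
# so that its friend lies between itself and its enemy.
N, STEPS, SPEED = 30, 100, 0.05
random.seed(42)

agents = []
for i in range(N):
    others = [j for j in range(N) if j != i]
    friend = random.choice(others)
    enemy = random.choice([j for j in others if j != friend])
    agents.append({
        "x": random.random(), "y": random.random(),
        "friend": friend, "enemy": enemy,
        "hero": random.random() < 0.5,
    })

def step(agents):
    for a in agents:
        f, e = agents[a["friend"]], agents[a["enemy"]]
        if a["hero"]:
            # Hero: head for the midpoint between friend and enemy.
            tx, ty = (f["x"] + e["x"]) / 2, (f["y"] + e["y"]) / 2
        else:
            # Coward: hide so the friend is between you and the enemy.
            tx, ty = f["x"] + (f["x"] - e["x"]), f["y"] + (f["y"] - e["y"])
        dx, dy = tx - a["x"], ty - a["y"]
        dist = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid division by zero
        a["x"] += SPEED * dx / dist
        a["y"] += SPEED * dy / dist

for _ in range(STEPS):
    step(agents)
```

Depending on the mix of heroes and cowards, the group tends to clump together or fly apart – the emergent phenomenon the kids acted out in the room.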

Later observations

We planned to take 60 minutes, observe how far we could get, and adapt. I noticed I could stretch it to 75 minutes, far longer than I thought was possible. I used less material than I thought I would for 60 minutes. I started relatively slowly and with a personal touch. I was happy I had flexible material and could adapt to what the kids shared. I used my intuition and picked up objects that were around that I could use to tell the story.

Some sweet things happened. When I first arrived, one kid was playing the piano in the general area. He played with much passion, small but intense. I said in the lecture that I had heard him play and that I was also into music. A hand was raised: ‘Will you play something for us at the end?’ Of course, I promised this! During the lecture… I repeatedly promised I would; the question came back many times. I played a song the young piano player liked to hear.

These children were very open and direct. I had expected that but was still surprised by how honest and straightforward they were. ‘Ow, now I lost my question; this happens to me all the time.’ I said: do you know I also have this quite often? It is perfectly normal. It doesn’t matter. If the question comes back, you can raise your hand again. If it doesn’t, then that is also just fine.

My takeaways

  • Do fun things, even if it is not perfectly connected. It helps with the attention span and provides a connection. Using humour helps us all to be open to learning.
  • Ask many questions, and use your intuition when asking them. Listen to the answers, remember important ones (and who gave them), and refer back to them. If something is off-topic, you can ‘park’ that question or remark and answer it politely without dismissing it.
  • Act things out very dramatically. I acted very brave and very cowardly when introducing the game. I used two kids to show the rules and kept referring to them using their names.
  • Don’t overprepare but make the lecture flexible. Where can you expand? What do you need to do to make the connection, to make it stick?
  • I was happy that the class teachers helped me by asking a crucial question at the end, allowing me to close a couple of circles. Keep the teacher active and involved in the lecture. Invite them beforehand to do so.
  • A helpful hint I received afterwards was to use a whiteboard (or something similar) to develop a visual record of concepts and keywords raised by the kids, e.g., in the form of a mind map.
  • Kids keep surprising you all the way. One asked about NetLogo: ‘Can you install this software on Windows 8?’ It is free; you can try it out yourselves, I said. ‘Can you upgrade Windows 8 to Windows 10?’ Well, this depends on your computer, I said. These kids keep surprising you!
  • You can teach complexity, emergence, and agent-based modelling without using those words. But if kids use a term, acknowledge it. In this case: ‘But with AI….’ This is AI. It is worth exploring how to reach and teach children crucial complexity insights at a young age.

Teaching social simulation in primary schools

My plea is that it is worth the effort to inspire children at a young age with crucial insights into what research is, into complexity, and into using social simulation. In this specific lecture, I only briefly touched on the use of social simulation (right at the end). It is a fantastic gift to help someone see complexity unfold before their eyes and to catch a glimpse of the tools that show the ingredients of this complexity. And it is a relatively small step towards unravelling social behaviour through social simulations. I’m tempted to conclude that you could teach young children a basic understanding of social simulation with relatively small educational modules. Even if it is implicit through games and examples, they may work effectively if placed carefully in the social environment that the different age groups typically face: showing social structures emerging from behavioural rules; illustrating different patterns emerging due to stochasticity and changes in assumptions; dreaming up basic (but distinct) codified decision rules about actual (social) behaviour you see around you. If this becomes an immersive experience, such educational modules have the potential to contribute to an intuitive understanding of what social simulations are and how they can be used. Children may be inspired to learn to see and understand emergent phenomena around them from an early age; they may become the thinkers of tomorrow. And for the kids I met on this occasion: I’d be amazed if none of them became researchers one day. I hope that if you get the chance, you also give it a go and share your experience! I’m keen to hear and learn!


Chappin, E. (2023) Teaching highly intelligent primary school kids energy system complexity. Review of Artificial Societies and Social Simulation, 19 Apr 2023. https://rofasss.org/2023/04/19/teachcomplex


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Yes, but what did they actually do? Review of: Jill Lepore (2020) “If Then: How One Data Company Invented the Future”

By Nick Gotts

ngotts@gn.apc.org

Jill Lepore (2020) If Then: How One Data Company Invented the Future. John Murray. ISBN: 978-1-529-38617-2 (2021 pbk edition).

This is a most frustrating book. The company referred to in the subtitle is the Simulmatics Corporation, which collected and analysed data on public attitudes for politicians, retailers and the US Department of Defense between 1959 and 1970. Lepore says it carried out “simulation”, but is never very clear about what “simulation” meant to the founders of Simulmatics, what algorithms were involved, or how these algorithms used data. The history of Simulmatics is narrated along with that of US politics and the Vietnam War during its period of operation; the company worked for John Kennedy’s presidential campaign in 1960, although the campaign was shy about admitting this. There is much of interest in this historical context, but the book is marred by the apparent limitations of Lepore’s technical knowledge, her prejudices against the social and behavioural sciences (and in particular the use of computers within them), and irritating “tics” such as the frequent repetition of “If/Then”. There are copious notes, and an index, but no bibliography.

Lepore insists that human behaviour is not predictable, whereas both everyday observation and the academic study of human sciences and history show that, on both individual and collective levels, it is partially predictable – if it were not, social life would be impossible – and partially unpredictable. She also claims that there is a general repudiation of the importance of history among social and behavioural scientists and in “Silicon Valley”, and seems unaware that many historians and other humanities researchers use mathematics and even computers in their work.

Information about Simulmatics’ uses of computers is in fact available from contemporary documents which its researchers published. In the case of Kennedy’s presidential campaign (de Sola Pool and Abelson 1961, de Sola Pool 1963), the “simulation” involved was the construction of synthetic populations in order to amalgamate polling data from past (1952, 1954, 1956, 1958) American election campaigns. Americans were divided into 480 demographically defined “voter types” (e.g. “Eastern, metropolitan, lower-income, white, Catholic, female Democrats”), and the favourable/unfavourable/neither polling responses of members of these types to 52 specific “issues” (examples given include civil rights, anti-Communism, anti-Catholicism, foreign aid) were tabulated. Attempts were then made to “simulate” 32 of the USA’s 50 states by calculating the proportions of the 480 types in those states and assuming the frequency of responses within a voter type would be the same across states. This produced a ranking of how well Kennedy could be expected to do across these states, which matched the final results quite well. On top of this work, an attempt was made to assess the impact of Kennedy’s Catholicism if it became an important issue in the election, but this required additional assumptions on how members of nine groups cross-classified by political and religious allegiance would respond. It is not clear that Kennedy’s campaign actually made any use of Simulmatics’ work, and there is no sense in which political dynamics were simulated. By contrast, in later Simulmatics work not dealt with by Lepore, on local referendum campaigns about water fluoridation (Abelson and Bernstein 1963), an approach very similar to current work in agent-based modelling was adopted. Agents based on the anonymised survey responses of individuals both responded to external messaging, and interacted with each other, to produce a dynamically simulated referendum campaign.
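To give a flavour of that dynamic approach, here is a deliberately toy Python sketch in the same spirit (all parameters and update rules are my own invention for illustration; Abelson and Bernstein’s actual model was far more detailed): agents hold an opinion, are nudged by external campaign messaging, and drift toward the views of other agents they interact with.

```python
import random

# Toy sketch of a dynamically simulated referendum campaign: agents with
# opinions in [0, 1] (0 = against, 1 = for) respond to external messaging
# and to interpersonal interaction. Parameters are invented for illustration.
random.seed(1)
N, STEPS = 200, 50
REACH = 0.3   # probability an agent is exposed to the pro campaign message
PULL = 0.1    # strength of the message's pull on an exposed agent's opinion
MIX = 0.05    # strength of interpersonal influence per interaction

opinions = [random.random() for _ in range(N)]

for _ in range(STEPS):
    for i in range(N):
        # External messaging: exposed agents shift toward the campaign.
        if random.random() < REACH:
            opinions[i] += PULL * (1.0 - opinions[i])
        # Interaction: move slightly toward a random other agent's view.
        j = random.randrange(N)
        opinions[i] += MIX * (opinions[j] - opinions[i])

votes_for = sum(o > 0.5 for o in opinions)
turnout_share = votes_for / N  # simulated share voting in favour
```

Even a sketch like this produces a campaign trajectory rather than a static cross-tabulation, which is what separates the referendum work from the 1960 voter-type amalgamation.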
It is unclear why Lepore does not cover this very interesting work. She does cover Simulmatics’ involvement in the Vietnam War, where their staff interviewed Vietnamese civilians and supposed “defectors” from the National Liberation Front of South Vietnam (“Viet Cong”) – who may in fact simply have gone back to their insurgent activity afterwards; but this work does not appear to have used computers for anything more than data storage.

In its work on American national elections (which continued through 1964) Simulmatics appears to have wildly over-promised given the data that it would have had available, subsequently under-performed, and failed as a company as a result; from this, indeed, today’s social simulators might take warning. Its leaders started out as “liberals” in American terms, but appear to have retained the colonialist mentality generally accompanying this self-identification, and fell into and contributed to the delusions of American involvement in the Vietnam War – although it is doubtful whether the history of this involvement would have been significantly different if the company had never existed. The fact that Simulmatics was largely forgotten, as Lepore recounts, hints that it was not, in fact, particularly influential, although interesting as the venue of early attempts at data analytics of the kind which may indeed now threaten what there is of democracy under capitalism (by enabling the “microtargeting” of specific lies to specific portions of the electorate), and at agent-based simulation of political dynamics. From a personal point of view, I am grateful to Lepore for drawing my attention to contemporary papers which contain far more useful information than her book about the early use of computers in the social sciences.

References

Abelson, R.P. and Bernstein, A. (1963) A Computer Simulation Model of Community Referendum Controversies. The Public Opinion Quarterly Vol. 27, No. 1 (Spring, 1963), pp. 93-122. Stable URL http://www.jstor.com/stable/2747294.

de Sola Pool, I. (1963) AUTOMATION: New Tool For Decision Makers. Challenge Vol. 11, No. 6 (MARCH 1963), pp. 26-27. Stable URL https://www.jstor.org/stable/40718664.

de Sola Pool, I. and Abelson, R.P. (1961) The Simulmatics Project. The Public Opinion Quarterly, Vol. 25, No. 2 (Summer, 1961), pp. 167-183. Stable URL https://www.jstor.org/stable/2746702.


Gotts, N. (2023) Yes, but what did they actually do? Review of: Jill Lepore (2020) "If Then: How One Data Company Invented the Future". Review of Artificial Societies and Social Simulation, 9 Mar 2023. https://rofasss.org/2023/03/09/ReviewofJillLepore


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Models in Social Psychology and Agent-Based Social simulation – an interdisciplinary conversation on similarities and differences

By Nanda Wijermans, Geeske Scholz, Rocco Paolillo, Tobias Schröder, Emile Chappin, Tony Craig, and Anne Templeton

Introduction

Understanding how individual or group behaviour is influenced by the presence of others is something both social psychology and agent-based social simulation are concerned with. However, there is only limited overlap between these two research communities, which becomes clear when terms such as “variable”, “prediction”, or “model” come into play and each community builds on a different meaning. This situation challenges us when working together, since it complicates the uptake of relevant work from each community and thus hampers the potential impact that we could have when joining forces.

We[1] – a group of social psychologists and social simulation modellers – sought to clarify the meaning of models and modelling from an interdisciplinary perspective involving these two communities. This occurred while starting our collaboration to formalise ‘social identity approaches’ (SIA). It was part of our journey to learn how to communicate and understand each other’s work, insights, and arguments during our discussions.

We present a summary of our reflections on what we learned from and with each other in this paper, which we intend to be part of a conversation, complementary to existing readings on ABM and social psychology (e.g., Lorenz, Neumann, & Schröder, 2021; Smaldino, 2020; Smith & Conrey, 2007). Complementary, because one comes to understand things differently when engaging directly in conversation with people from other communities, and we hope to extend this from our network to the wider social simulation community.

What are variable- and agent-based models?

We started the discussion by describing to each other what we mean when we talk about “a model” and distinguishing between models in the two communities as variable-based models in social psychology and agent-based modelling in social simulation.

Models in social psychology generally come in two interrelated variants. Theoretical models, usually stated verbally and typically visualised with box-and-arrow diagrams as in Figure 1 (left), reflect assumptions of causal (but also correlational) relations between a limited number of variables. Statistical models are often grounded in theory and fitted to empirical data to test how well the explanatory variables predict the dependent variables, following the causal assumptions of the corresponding theoretical model. We therefore refer to social-psychological models as variable-based models (VBM). Core concepts are prediction and effect size. A prediction formulates whether one variable, or a combination of variables, causes an effect on an outcome variable. The effect size is the result of testing a prediction: it indicates the strength of that effect, usually in statistical terms the magnitude of variance explained by a statistical model.

It is good to realise that many social psychologists strive for a methodological gold standard using controlled behavioural experiments. Ideally, one predicts data patterns based on a theoretical model, which is then tested with data. However, observations of the real world are often messier. Inductive post hoc explanations emerge when empirical findings are unexpected or inconclusive. The discovery that much experimental work is not replicable has led to substantial efforts to increase the rigour of the methods, e.g., through the preregistration of experiments (Eberlen, Scholz & Gagliolo, 2017).

Models in social simulation come in different forms – agent-based models, mathematical models, microsimulations, system dynamics models, etc. – but here we focus on agent-based modelling, as it is the dominant modelling approach within our SIAM network. Agent-based models reflect heterogeneous and autonomous entities (agents) that interact with each other and their environments over time (Conte & Paolucci, 2014; Gilbert & Troitzsch, 2005). Relationships between variables in ABMs need to be stated formally (as equations or logical statements) in order to implement theoretical/empirical assumptions in a way that is understandable by a computer. An agent-based model can reflect assumptions about causal relations between as many variables as the modeller (team) intends to represent. Agent-based models are often used to help understand[2] why and how observed (macro) patterns arise by investigating the (micro/meso) processes underlying them (see Fig. 1, right).

The extent to which social simulation models relate to data ranges from using no data whatsoever to fitting every variable value to empirical data. Put differently, the way one uses data does not define the approach. Note that assumptions based on theory and/or empirical observations do not suffice on their own: additional assumptions are required to make the model run.
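To make the contrast with a variable-based model concrete, here is a minimal, self-contained sketch of an agent-based model – our own illustrative toy, not a model from the SIAM network: heterogeneous agents repeatedly interact with randomly chosen partners, and a macro-level pattern (consensus) emerges over time from the micro-level interaction rule.

```python
import random

# Minimal illustrative ABM (our own toy sketch): agents hold a continuous
# opinion and, when two agents meet, both move to their mutual average.
# A macro pattern (consensus) emerges from many micro interactions over time.
random.seed(1)  # reproducible run

class Agent:
    def __init__(self):
        self.opinion = random.random()  # heterogeneous initial opinions in [0, 1)

    def interact(self, other):
        # Simple interaction rule: both partners adopt their mutual average.
        mean = (self.opinion + other.opinion) / 2
        self.opinion = other.opinion = mean

agents = [Agent() for _ in range(50)]
for _ in range(2000):                # simulate interactions over time
    a, b = random.sample(agents, 2)  # two distinct random agents meet
    a.interact(b)

spread = max(ag.opinion for ag in agents) - min(ag.opinion for ag in agents)
print(f"opinion spread after 2000 interactions: {spread:.4f}")  # near 0: consensus
```

Note how the claim being examined is not a variable-level effect but a system-level pattern: nothing in the interaction rule mentions consensus, yet consensus is what the population produces.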

Fig. 1: Visualisation of what a variable-based model in social psychology is (left) and what an agent-based model in social simulation is (right).

Comparing models

The discussion then moved from describing the meaning of “a model” to comparing similarities and differences between the concepts and approaches, but also what seems similar but is not…

Similar. The core commonalities of models in social psychology (VBM) and agent-based social simulation (ABM) are 1) the use of models to specify, test and/or explore (causal) relations between variables and 2) the ability to perform systematic experiments, surveys, or observations for testing the model against the real world. This means that words like ‘experimental design’, ‘dependent, independent and control variables’ have the same meaning. At the same time some aspects that are similar are labelled differently. For instance, the effect size in VBMs reflects the magnitude of the effect one can observe. In ABMs the analogy would be the sensitivity analysis, where one tests for the importance or role of certain variables on the emerging patterns in the simulation outcomes.

False Friends. There are several concepts that are given similar labels, but have different meanings. These are particularly important to be aware of in interdisciplinary settings as they can present “false friends”. The false friends we unpacked in our conversations are the following:

  • Model: whether the model is variable-based in social psychology (VBM) or agent-based in social simulation (ABM). The VBM focuses on the relation between two or a few variables typically in one snapshot of time, whereas the ABM focuses on the causal relations (mechanisms/processes) between (entities (agents) containing a number of) variables and simulates the resulting interactions over time.
  • Prediction: in VBMs a prediction is a variable-level claim, stating the expected magnitude of a relation between two or a few variables. In ABMs, a prediction would instead be a claim about future real-world system-level developments, made on the basis of phenomena observed in the simulation outcomes. Even when such prediction is not the model’s purpose (which is likely), each future simulated system state is sometimes nevertheless labelled a prediction, although it is not necessarily meant to be accurate about the real-world future. Instead, it can, for example, be a full explanation of the mechanisms required to replicate a particular phenomenon, or one of many possible trajectories of which reality is just one.
  • Variable: here both types of models have variables (a label of some ‘thing’ that can have a certain ‘value’). In ABMs there can be many variables, some that have the same function as the variables in VBM (i.e., denoting a core concept and its value). Additionally, ABMs also have (many) variables to make things work.
  • Effect size: in VBM the magnitude of how much the independent variable can explain a dependent variable. In ABM the analogy would be sensitivity analysis, to determine the extent to which simulation outcomes are sensitive to changes in input settings. Note that, while effect size is critical in VBMs, in ABMs small effect sizes in micro interactions can lead toward large effects on the macro level.
  • Testing: VBMs usually test models using some form of hypothesis testing, whereas ABMs can be tested in very different ways (see David et al (2019)), depending on the purpose they have (e.g., explanation, theoretical exposition, prediction, see Edmonds et al. (2019)), and on different levels. For instance, testing can relate to the verification of the implementation of the model (software development specific), to make sure the model behaves as designed. However, testing can also relate to validation – checking whether the model lives up to its purpose – for instance testing the results produced by the ABM against real data if the aim is prediction of the real world-state.
  • Internal validity: in VBM this is to assure the causal relation between variables and their effect size. In ABMs it refers to the plausibility in assumptions and causal relations used in the model (design), e.g., by basing these on expert knowledge, empirical insights, or theory rather than on the modeller’s intuition only.
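The effect size / sensitivity analysis “false friend” can be made concrete with a toy one-at-a-time sensitivity sketch. The model, parameter names, and numbers below are all our own hypothetical choices, not drawn from the literature discussed here: each input is varied while the others are held at their baseline, and the change in output plays the role that an effect size plays in a VBM.

```python
# Toy one-at-a-time (OAT) sensitivity sketch (hypothetical model and parameter
# names, invented for illustration): vary one input while holding the others
# at baseline, and record how strongly the output responds -- the ABM analogue
# of an effect size in a variable-based model.

def toy_model(adoption_rate, network_size):
    # Hypothetical outcome: adopters reached after one diffusion "wave".
    return network_size * adoption_rate * (2 - adoption_rate)

baseline = dict(adoption_rate=0.3, network_size=100)

def oat_sensitivity(param, low, high):
    """Difference in model output when `param` moves from `low` to `high`."""
    out_low = toy_model(**dict(baseline, **{param: low}))
    out_high = toy_model(**dict(baseline, **{param: high}))
    return out_high - out_low

print(oat_sensitivity("adoption_rate", 0.2, 0.4))   # ~28 extra adopters
print(oat_sensitivity("network_size", 80, 120))     # ~20 extra adopters
```

In a full ABM one would of course sweep many parameter combinations and replicate stochastic runs, but the logic is the same: the “importance” of a variable is read off from how the emergent outcome responds to it.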

Differences. There are several differences between VBM and ABM. Firstly, there is a difference in what a model should replicate, i.e., the target of the model: in social psychology the focus tends to be on the relations between variables underlying behaviour, whereas in ABM it is usually on the macro-level patterns/structures that emerge. The concept of causality also differs. In psychology, VBMs are predominantly built under the assumption of linear causality[3], with statistical models aiming to quantify the change in the dependent variable due to (associated) change in the independent variable – a causality or correlation often derived from “snapshot data”, i.e., one moment in time and one level of analysis. In ABMs, on the other hand, causality appears as a chain of causal relations that occur over time. Moreover, it can be non-linear (including multicausality, nonlinearity, feedback loops and/or amplification of model outcomes). Lastly, the underlying philosophy can differ tremendously concerning the number of variables that are taken into consideration. By design, social psychology seeks to isolate the effects of variables, maintaining a high level of control to be confident about the effect of independent variables or the associations between variables – for example, by introducing control variables in regression models or ensuring random allocation of participants to isolated experimental conditions. In ABM, there are different approaches/preferences: KISS versus KIDS (Edmonds & Moss, 2004). KISS (Keep It Simple, Stupid) advocates keeping the model as simple as possible: only complexify if the simple model is not adequate. KIDS (Keep It Descriptive, Stupid), on the other end of the spectrum, embraces complexity by relating to the target phenomenon as much as one can and only simplifying when evidence justifies it.
Either way, the idea of control in ABM is to avoid an explosion of complexity that impedes the understanding of the model and can, for example, cause misleading interpretations of emergent outcomes due to meaningless artefacts.

We summarise some core take-aways from our comparison discussions in Table 1.

Table 1. Comparing models in social psychology and agent-based social simulation

Social psychology (VBM) vs. social simulation (ABM):

Aim
  VBM: Theory development and prediction (variable level).
  ABM: Not predefined; the purpose can vary widely (system level).

Model target
  VBM: Replicate and test relations between variables.
  ABM: Reproduce and/or explain a social phenomenon – the macro-level pattern.

Composed of
  VBM: Variables and the relations between them.
  ABM: Agents, environment and interactions.

Strive for
  VBM: High control (low number of variables and relations); replication.
  ABM: Purpose-dependent. Model complexity: represent what is needed, not more, not less.

Testing
  VBM: Hypothesis testing using statistics, possibly including measuring the effect size of a relation to assess confidence in a variable’s importance.
  ABM: Purpose-dependent. Can refer to verification, validation, sensitivity analysis, or all of them (see text and references under “false friends”).

Causality
  VBM: (Or correlation) between variables; linear representation.
  ABM: Between variables and/or model entities; non-linear representation.

Theory development
  VBM: Critical reflection on theory through confirmation: through hypothesis testing (a prediction), the theory is validated or, if not confirmed, becomes input for reconsidering the theory.
  ABM: Only if this is the aim of the model; how to do it is not predefined. It can be reproducing the theory’s prediction, with or without internal validity. ABMs can further help to identify gaps in existing theory.

Dynamism
  VBM: Little – often within-snapshot causality.
  ABM: Core – within-snapshot and over-time causality.

External validity (the ability to say something about the actual target/empirical phenomenon)
  VBM: Aims at generalisation and has predictive value for the phenomenon in focus. VBMs in lab experiments are often criticised for weak external validity; it is considered high for field experiments.
  ABM: Insights are about the model, not directly about the real world. Without making predictive claims, ABMs often do aim to say something about the real world.

Beyond blind spots, towards complementary powers

We have shared the result of our discussions: the (seeming) commonalities and differences between models in social psychology and agent-based social simulation. We allowed a peek into our interdisciplinary journey, in which we invested time, allowed trust to grow, and engaged in open communication. All of this was needed to uncover conflicting ways of seeing and studying the social identity approach (SIA). This investment was crucial for making progress in formalising SIA in ways that enable deeper insights – formalisations that are in line with SIA theories, but that also push the frontiers of SIA theory. Joining forces allows for deeper insights, as VBM and ABM complement and challenge each other, thereby advancing the frontiers in ways that cannot be achieved individually (Eberlen, Scholz & Gagliolo, 2017; Wijermans et al., 2022). SIA social psychologists bring to the table a deep understanding of the many facets of SIA theories and can engage in the negotiation dance of the formalisation process, adding crucial understanding of the theories in their theoretical context. Social psychology in general can point to empirically supported causal relations between variables, and thereby increase the realism of the assumptions about agents (Jager, 2017; Templeton & Neville, 2020). Agent-based social simulation, on the other hand, pushes for representing causality over time, bringing to light (logical) gaps in a theory and providing explicitness, thereby contributing to the development of testable (extended) forms of (parts of) a theory – including the execution of experiments that are hard or impossible to run as controlled experiments. We thus started our journey hoping to shed some light on blind spots and to release our complementary powers in the formalisation of SIA.

To conclude, we felt that having a conversation together led to a qualitatively different understanding than would have been the case had we all ‘just’ read informative papers. These conversations reflect a collaborative research process (Schlüter et al., 2019). In this RofASSS paper, we aim to widen this conversation to the social simulation community, connecting with others about our thoughts as well as hearing your experiences, thoughts and learnings while on an interdisciplinary journey with minds shaped by variable-based or agent-based models, or both.

Acknowledgements

The many conversations we had in this stimulating scientific network since 2020 were funded by the Deutsche Forschungsgemeinschaft (DFG – 432516175).

References

Conte, R., & Paolucci, M. (2014). On agent-based modeling and computational social science. Frontiers in psychology, 5, 668. DOI:10.3389/fpsyg.2014.00668

David, N., Fachada, N., & Rosa, A. C. (2017). Verifying and validating simulations. In Simulating social complexity (pp. 173-204). Springer, Cham. DOI:10.1007/978-3-319-66948-9_9

Eberlen, J., Scholz, G., & Gagliolo, M. (2017). Simulate this! An introduction to agent-based models and their power to improve your research practice. International Review of Social Psychology, 30(1). DOI:10.5334/irsp.115/

Edmonds, B., & Moss, S. (2004). From KISS to KIDS–an ‘anti-simplistic’modelling approach. In International workshop on multi-agent systems and agent-based simulation (pp. 130-144). Springer, Berlin, Heidelberg. DOI:10.1007/978-3-540-32243-6_11

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H. and Squazzoni, F. (2019) ‘Different Modelling Purposes’ Journal of Artificial Societies and Social Simulation 22 (3) 6 <http://jasss.soc.surrey.ac.uk/22/3/6.html>. doi: 10.18564/jasss.3993

Gilbert, N., & Troitzsch, K. (2005). Simulation for the social scientist. McGraw-Hill Education (UK).

Jager, W. (2017). Enhancing the realism of simulation (EROS): On implementing and developing psychological theory in social simulation. Journal of Artificial Societies and Social Simulation, 20(3). https://jasss.soc.surrey.ac.uk/20/3/14.html

Lorenz, J., Neumann, M., & Schröder, T. (2021). Individual attitude change and societal dynamics: Computational experiments with psychological theories. Psychological Review, 128(4), 623-642.  https://doi.org/10.1037/rev0000291

Smaldino, P. E. (2020). How to Translate a Verbal Theory Into a Formal Model. Social Psychology, 51(4), 207–218. http://doi.org/10.1027/1864-9335/a000425

Schlüter, M., Orach, K., Lindkvist, E., Martin, R., Wijermans, N., Bodin, Ö., & Boonstra, W. J. (2019). Toward a methodology for explaining and theorizing about social-ecological phenomena. Current Opinion in Environmental Sustainability, 39, 44-53. DOI:10.1016/j.cosust.2019.06.011

Smith, E.R. & Conrey, F.R. (2007): Agent-based modeling: a new approach for theory building in social psychology. Pers Soc Psychol Rev, 11:87-104. DOI:10.1177/1088868306294789

Templeton, A., & Neville, F. (2020). Modeling collective behaviour: insights and applications from crowd psychology. In Crowd Dynamics, Volume 2 (pp. 55-81). Birkhäuser, Cham. DOI:10.1007/978-3-030-50450-2_4

Wijermans, N., Schill, C., Lindahl, T., & Schlüter, M. (2022). Combining approaches: Looking behind the scenes of integrating multiple types of evidence from controlled behavioural experiments through agent-based modelling. International Journal of Social Research Methodology, 1-13. DOI:10.1080/13645579.2022.2050120

Notes 

[1] We are researchers keen to use, extend, and test the social identity approach (SIA) using agent-based modelling. We started from an interdisciplinary DFG network project (SIAM: Social Identity in Agent-based Models, https://www.siam-network.online/) and now form a continuous special-interest group at the European Social Simulation Association (ESSA), http://www.essa.eu.org/.

[2] ABMs can cater to diverse purposes, e.g., description, explanation, prediction, theoretical exploration, illustration, etc. (Edmonds et al., 2019).

[3] Most VBMs are linear (or multilevel linear) models, but not all; non-normally distributed data changes the tests that are used.


Wijermans, N., Scholz, G., Paolillo, R., Schröder, T., Chappin, E., Craig, T. and Templeton, A. (2022) Models in Social Psychology and Agent-Based Social simulation - an interdisciplinary conversation on similarities and differences. Review of Artificial Societies and Social Simulation, 4 Oct 2022. https://rofasss.org/2022/10/04/models-in-spabss/



Artificial Sociality Manifesto

By Gert Jan Hofstede1*, Christopher Frantz2, Jesse Hoey3, Geeske Scholz4, and Tobias Schröder5

*Corresponding author, 1Information Technology, Wageningen, 2Department of Computer Science, Norwegian University of Science and Technology, 3School of Computer Science, University of Waterloo, 4Institut für Umweltsystemforschung, Universität Osnabrück, 5Potsdam University of Applied Sciences

Ambition

With this position paper the authors posit the need for a research area of Artificial Sociality. In brief this means “computational models of the essentials of human social behaviour”; we shall elaborate below. The need for artificial sociality is justified by the encroachment of simulations and knowledge technology, including Artificial Intelligence (AI), into the fabric of our societies. This includes smart devices, biosensors, facial recognition, coordination apps, surveillance apps, search engines, home and care robots, social media, machine learning modules, and agent-based simulation models of socio-ecological and socio-economic systems. It will include many more invasive technologies that will be invented in the coming decades. Artificial sociality is a way to connect human drives and emotions to the challenges our societies face, and the management and policy actions we need to take. In contrast to mainstream AI research, artificial sociality targets the social embeddedness of human behaviour and experience; we could say the collective intelligence of human societies rather than the individual intelligence of single agents. Human sociality has characteristics that differ from other varieties of sociality, while having variation across cultures (Henrich, 2016). In this piece, we concentrate on the incorporation of human sociality into agent-based computational social simulation models as a testbed for the integration of the various elements of artificial sociality.

The issue of artificial sociality is not new, as we’ll discuss below in the “State of the art” section. Our evolutionary perspective, we feel, offers new possibilities for integrating various strands of research. Our ambition is mainly to find a robust ontology for artificial human sociality, rooted in our actual evolutionary history and allowing cultures to be distinguished. We hope that efforts at engineering computational agents and societies can benefit from this work.

Why is sociality so important?

Humans are eusocial

Sociality is a word used across various sciences. Neuroscientist Antonio Damasio makes it a central concept, arguing that it is present in all social creatures, even long predating multicellular organisms (Damasio, 2018). In agreement with Wilson & Hölldobler (Edward O. Wilson & Hölldobler, 2005), Wikipedia defines it in a biological way: “Sociality is the degree to which individuals in an animal population tend to associate in social groups (gregariousness) and form cooperative societies”. The site continues: “The highest degree of sociality recognized by sociobiologists is eusociality. A eusocial taxon is one that exhibits overlapping adult generations, reproductive division of labor, cooperative care of young, (…).” Obviously, this definition holds for humans. We are a eusocial primate species.

Why are we in the world?

A grand question in philosophy is “Why are we in the world?”. Evolutionary biology would answer “because our ancestors reproduced, ever since the beginnings of life”. The next question is “Why did our ancestors reproduce?” Well, they did so because “they were fit, and conquered natural and human-made hazards”. Thirdly, “Why were they fit?” This third “why” question takes us to sociality. Being eusocial gave our ancestors the fitness they needed. It allowed them to cooperate and divide tasks in groups. Millions of years ago, early hominins gathered, hunted, defended themselves, cared for the weak, and exchanged goods and foods (G. Hofstede, Hofstede, & Minkov, 2010, chapter 12).

Sociality integrates elements of all possible sciences that are useful in comprehensively modelling human (or non-human) social behaviour, drives, and decision making. It spans from the “what” to the “why” to the “how”. The notion of sociality changes the meaning of the concept of intelligence into something that could be group-level, not individual-level. The most astounding fact about humans is the high degree of social or collective intelligence. Because of the protection it affords, collective intelligence even raises the tolerance for individual ineptness (Diamond, 1999).

Artificial sociality

Artificial sociality is the study of sociality by means of computational modelling. This could take many forms, e.g. social robotics, body-worn devices. In this paper we focus on computational social simulation with a particular focus on sociality. The application to computational social simulation sets purpose and limits to the selection of potentially relevant knowledge. Artificial sociality will be concerned with building blocks and primitives that are chosen so as to be reusable for a multitude of applications. In this sense it is a transformative endeavour. It offers a systematic integration of the existing insulated approaches sponsored by diverse disciplines to understand and analyse the human condition in all its facets. The primitives developed for artificial sociality should have the potential to be used by a great many applied scholars. More importantly, the dedicated integrated treatment of disciplines is increasingly recognised as necessary to produce sufficiently accurate insights, such as the impact of cultural aspects on the assessment of social policy outcomes (Diallo, Shults, & Wildman, 2020). Applications that benefit from a systematic consideration of artificial sociality include models of human collective action in society, in socio-environmental, socio-economical, or socio-technical systems. Typically, these models would be used to support policy making by achieving a better understanding of the dynamics of target systems.

The history of sociality

Early hominins were mentioned above. In the evolution of life, sociality is actually much older than that. To properly appreciate its importance, we’ll present a brief history of sociality.

Sociality is as old as slime moulds, primitive organisms (“Protista”) that are usually monocellular (e.g. Dictyostelium). Slime moulds exhibit collective action and large-scale division of labour. Social insects such as bees and ants are a more familiar case of successful sociality. Among mammals, there are the burrowing mole rats, which live in eusocial colonies. These, or similar, life forms are linked to us by an unbroken chain of life. Sociality has an ancient path dependency.

Hominins

Limiting ourselves to the last million years, our hominin ancestors have brought sociality to a new level. In contrast to other primates, humans have not radiated into distinct species, but merged into one genetically closely related pool, with tremendous cultural variation. They did this through a combination of migrating, fighting, spreading of diseases, cross-breeding, and massive copying of inventions. Some of the latter are mastery of fire, language, script, law, agriculture, religion, weapons and money. Our present-day sociality is the outcome of an unbroken chain of reproduction, all the way since the origins of life until today. At present, fission-fusion dynamics happen all the time in all human societies. Divisions between groups of people are deeply gut-felt. They range from stable across generations to ephemeral; but they are not genetically deep, nor absolute. Yet they matter greatly for the behaviour of our policy-relevant systems. Religions, political alliances, trade networks, but also social media hypes and terrorist movements are cases in point.

Victims of reason

In recent centuries, humans have tended to forget that for all our cleverness and symbolic intelligence, humans are also still social mammals with deep relational drives. Our relational drives tell our intelligence what to do, and do so generally without being transparent to us (Haidt, 2012; Kahneman, 2011). A purely cognitive or rational paradigm cannot capture all of these drives. Thus, when trying to understand our collective behaviours, we can be “victims of reason”. To quote Montesquieu: “Le Coeur a ses raisons que la raison ne connaît point” (‘the heart has its reasons unknown to reason’) (Montesquieu, 1979 [1742]). Artificial sociality goes beyond reason, identifying the unknowns of underlying relational motives. Yes, expected profit is an important motive; but it is relational profit that matters, influenced by gut feelings and emotions such as love, hate, pride, shame, envy, loyalties. Financial profit for the individual is just a special case. As theorized eloquently by Mercier and Sperber (Mercier & Sperber, 2017), reason is used by humans for social acceptance far more than it is used for accuracy. Basically, reason is used for arguing and justifying a position in a social group to enhance influence on, and acceptance by, the group.

Watch the lake, not just the ripples

When we create policy, we tend to run from incident to incident, often forgetting to consider the patterns of path dependence linking these incidents. Causal chains of things happening today run backwards into deep history. The French Revolution, for instance, while seemingly having limited impact on life nowadays, shaped the conception of the nation state and of the rights that modern citizens comfortably assume to be omnipresent. Similarly, present-day individualism can be traced back to the marriage policies of the medieval Catholic church (Henrich, 2020). For both these examples, it stands to reason that even older sources exist, hidden on the unbroken path of history. Across undoubted and transformative change, there is a continuity to history, especially where sociality is concerned. Sociality is about understanding the lake of human nature, in order to better anticipate the ripples on it.

Why artificial sociality?

Fully understanding sociality is vital for our survival. Artificial sociality, by showing sociality in action, can help. Below we propose a set of principles that indicate why understanding sociality better is so vital, and that thereby justify developing artificial sociality.

  • Systems over disciplines – The earth in the Anthropocene is one system, of which key aspects are ecology, economy, and technology. All of these are known by our intellect. Their development is driven by our sociality. To understand these systems, including human sociality, we need to integrate knowledge across disciplines. This includes both natural and social sciences.
  • Multi-level systems – Grand challenges are multi-level. They are about water, climate, contagious diseases, migration, peace. They involve people and groups in systems combined of natural, institutional and economic subsystems. They have dynamics and feedback cycles, often leading to unanticipated and undesired outcomes. They may or may not be subject to policy, but they are unavoidably subject to sociality.
  • Emotions AND Rationality – In disciplines concerned with modelling human behaviour, there is a tendency to work on the assumption that “we are our brains” (Swaab & Hedley-Prole, 2014). A broad cross-disciplinary perspective, as well as life experience, makes it clear that this is not really the case. Sociality eats reason for breakfast: we are subject to gut feelings; we are driven, or get carried away, by emotions. Artificial sociality can bring these things to life.
  • Interaction over Individuals – The behaviour of our systems strongly hinges on the sociality of the people in them. Key issues have to do with gut feelings, emotions, trust, communication, hierarchy, group affiliation, power, politics, and geopolitics. All of these rest not in the individual but emerge from social interactions.
  • Explainability over black boxes – While data-driven modelling experiences great popularity, models purely based on data render limited insight into the conceptual inner workings of a social system and its meaning for a target system (i.e., the social reality it represents). Artificial sociality needs to seek a balance of theory, data, and understanding. Analysing policy without understanding interaction effects limits scientific and practical value.

For whom?

  • Interdisciplinary researchers can use artificial sociality in models for understanding their target systems.
  • Policy makers can create better ideas and policies if they are helped by plausible systemic models of the issues they face and the dynamics those issues exhibit.
  • Citizens can act as policy makers, taking their fate into their own hands.
  • Designers of intelligent systems can integrate knowledge about social dynamics.

With whom?

  • All disciplines in the social and life sciences. In order to articulate artificial sociality, all disciplines that study human life can potentially contribute. This ranges across levels of integration: anthropology, artificial intelligence, behavioural biology, behavioural economics, cultural psychology, evolutionary biology, history, neurosciences, psychology, small group behaviour, social geography, social psychology, sociology, system biology…the spirit is one of consilience (Edward O Wilson, 1999).
  • Non-academic stakeholders (e.g., governments, the general public). Not only can participatory approaches help uncover hidden rules and drivers of behaviour; artificial sociality can also serve as an educational tool for an enlightened society, raising its self-reflection and awareness of its own inner workings.

How?

  • We recognize that integrating the involvement of various disciplines, with the diversity of their respective data, theories, concepts and methods, is a challenging endeavour. In many instances we are struck by the gaps between the disciplines involved, and by the difficulty of integrating data and theory in a systematic manner. Just because one theory is right does not mean that another is wrong; often there is complementarity, if one is willing to search for it.
  • Simulation and levels of abstraction. To this end, simulation offers the necessary capabilities: its approach has the potential to traverse disciplines by offering broad accessibility, modelling at abstraction levels that correspond to the analytical levels of different disciplines (e.g., the micro, meso and macro levels in sociological research). Its unique ability to afford the systematic integration of theory and data (Tolk, 2015), and of deductive and inductive reasoning, has rendered social simulation a “third way of doing science” (R. Axelrod, 1997), while available computational resources allow us to explore artificial sociality at scale.
  • Creative spark. Computational simulation requires a design effort that links its various contributions into mechanisms. These constitute an original, interdisciplinary contribution. They can themselves be validated.
  • Disciplinary contributions. Social simulation is conceptually a method embedded in life sciences, complexity, and social-scientific disciplines. Each of our models creates a miniature world. These worlds need all kinds of input from various sources and disciplines.
  • Practicable outputs. Agent-based social simulation typically intends to produce practicable outputs, using theory, data and intuitions as its inputs, regardless of their origin (Tolk, 2015; Edmonds & Moss, 2005). Therefore social simulation, in particular agent-based modelling, and artificial sociality, should institutionally be fed by many disciplines. All researchers from all disciplines are welcome.
  • Dynamics. Agent-based models are eminently suitable to help understand the dynamics of systems. They allow one to investigate unintended collective patterns arising from individual motives, intra- and intergroup dynamics. In other words, they can link disciplines at different levels of aggregation, from the individual to the society. Sensitivity analysis of these dynamics is an integral part of the method.

State of the art on sociality across disciplines

Research into human behaviour has been carried out for a long time, and in many disciplines. Such research, usable or even intended for modelling, has been picking up in recent decades. It would be presumptuous to try and give a full review of developments. Yet we believe that it is useful to give a brief overview of what is happening in various disciplines. To avoid distracting from our purpose, the details are in the appendix.

What we need from the disciplines

Given our position that every living thing that exists today has evolved and continues to evolve, we need contributions of various types for making sense of sociality. Let us, for a moment, consider life as a game of chess. In such a model, we need to know the what, the why and the how. In our proposal, these elements will become intertwined.

  • What: the constitutive elements (chessboard, pieces); the starting position, the rules of the game (formal and etiquette).
  • Why: the motivation of the players during a game. Typically this would be “capturing the enemy king”, but other motives can occur: I might want to lose, to motivate a junior opponent, or to win, to challenge them.
  • How: the configurations that are meaningful, and sequences of moves that can make these configurations happen. Limitations in players’ skills can reduce the space of possibilities.

These questions also apply to sociality:

  • What: medical- and neurosciences study our constitutive elements. History, institutional economics and anthropology study what collective behaviours occur in groups of people.
  • Why: evolutionary biology studies the origins of sociality. Psychology studies human motivation today, for instance in leadership, organizational behaviour, and clinical, social, and cultural psychology. Ethology does the same for non-human creatures.
  • How: Sociology tends to describe the how of sociality: patterns, their causes, and their sequences. Computational branches of biology, economics, and sociology construct artificial worlds. Simulation gaming and experimental economics do the same with real people, in artificial incentive structures.

For computational modelling we will need input on all three of these questions. The models will require:

  • A “what”: agents in an environment.
  • A “why”: motivation for the agents: drives, urges, goals.
  • A “how”: perceptions and actions for the agents, and coordination of these across space and time. This will lead to emergent patterns.

The three questions are highly intertwined; we take them apart only for the sake of exposition. Also, the emergent pattern of one branch of science, or of a simulation model, can become the input, taken as given, of another. For instance, some models could investigate the emergence of institutions, norms, or culture; others could use such concepts as input variables.
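The what, why and how could be wired together in code roughly as follows. This is a minimal sketch under our own assumptions; all class, drive and action names are purely illustrative, not part of any architecture discussed here.

```python
class Environment:
    """The 'what': a world for agents to inhabit."""
    def __init__(self):
        self.state = {"season": "spring"}

    def step(self, agents):
        # The 'how', coordinated across agents: each perceives, then acts.
        # Collective (possibly unintended) patterns emerge from repeating this step.
        return [agent.act(agent.perceive(self)) for agent in agents]


class Agent:
    """The 'what', agent side; holds the 'why' as a set of drives."""
    def __init__(self, drives):
        self.drives = drives  # the 'why': motives such as status or safety

    def perceive(self, environment):
        # The 'how', part 1: a (partial) view of the world.
        return environment.state

    def act(self, percept):
        # The 'how', part 2: pursue the currently strongest drive.
        return "pursue_" + max(self.drives, key=self.drives.get)


env = Environment()
agents = [Agent({"status": 0.8, "safety": 0.2}),
          Agent({"status": 0.3, "safety": 0.7})]
print(env.step(agents))  # → ['pursue_status', 'pursue_safety']
```

Repeating `env.step` over time, and letting actions feed back into the environment's state, is where emergent patterns would arise.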

The take-home message of this section is that our modelling efforts will be best served by an eclectic mode that draws from a broad variety of sciences.

What the disciplines tell us

We shall now attempt a synthesis of the work on sociality across disciplines presented in the appendix that is important for the research agenda of artificial sociality. To structure it, we stick to our what, why and how questions. Admittedly, our synthesis is partial; this is for the sake of purposefulness, not because other perspectives could not have merit.

What

Sociality is not a human invention. It is absolutely central to life on earth, and has been for billions of years, in an unbroken chain of reproduction. Sociality has served to preserve homeostasis in populations, enabling some to reproduce (Damasio, 2018). It is as old as monocellular organisms, many of which are known to coordinate their behaviours in response to external stimuli, particularly in the service of reproduction. Human sociality is special in a few respects (Henrich, 2016). We coordinate in many ways with many people we do not personally know. For achieving joint action, we have basically two mechanisms. In evolutionary terms these are prestige and dominance (Henrich, 2016). In sociological terms: status and power (Theodore D. Kemper, 2017). Also, groups of followers are able to curtail the power of leaders. For these functions we evolved intense emotional lives (J. E. Turner, 2007). Emotions are the proximate indicators of our sociality that our organisms provide to us. We will return to these issues under “how”.

Selective pressures do not just operate between individuals, but at many levels. There is selective pressure between individuals, human groups, forms of coordination, even ideas. Models can concentrate on any of these levels.

Why

Sociality – in terms of status and power motives in multiple, changing groups, with their attendant feelings and emotions – is necessary for solving coordination problems, e.g. dividing food, reproducing, bringing up children, or avoiding traffic congestion; and for solving collective action problems and social dilemmas, e.g. selecting a leader, disposing of a dysfunctional leader, or distributing resources across the citizens of a country. This holds in small groups and families with informal social bonds, as well as in large groups or societies that rely on formal, depersonalised interaction patterns. Without sociality there can be neither Gemeinschaft (community) nor Gesellschaft (society). Sociality shapes our moral sense.

How

In humans, sociality develops very early in life, preceding speech and walking. It requires intense care, play, and education over many years; we are a neotenous species, remaining juvenile for many years and even retaining some brain plasticity during adult life. For a baby, the organism has precedence. After just a few months, giving and conferring status becomes important. Between 11 and 19 months, the use of power develops (Eliot, 2009). During childhood, the social world grows, and various reference groups become distinguished. We learn the dynamics of prestige/status giving and claiming. At puberty, sociality more or less plateaus; just as we speak with the accent of our childhood, we act with its culture. Our hormonal systems are aligned with the dual nature of prestige/status and dominance/power; more on this in the appendix.

This phenomenon of a flexible beginning then stable existence also holds for groups of people. Once formed, societies, groups, organizations and companies, have cultures that tend to remain stable over time, despite many perturbations (Beugelsdijk & Welzel, 2018; G. Hofstede et al., 2010).

Sociality happens. Every action in which several people are present or imagined provides an occasion to mutually imprint sociality through status-power dynamics in a world of groups. This ranges from glances and involuntary nonverbal movements, to explicit verbal communication, to social media posts and likes, to elaborate rituals involving prestige and social roles, to coercive acts involving life and death. All of these constitute claims for, and accords or refusals of, status; and some of them include power moves.

Groups in society are endlessly variable. They change at various timescales, from life-long to context-dependent and ephemeral. They can be nested or overlapping. Their salience is socially and situationally determined.

Collective results of social acts need not be intended. Much of our societies’ behaviour emerges unplanned. A few frequent, archetypal patterns can often be seen in this unplanned system-level behaviour. Agent-based modelling is privileged as a method because it allows us to generate these unplanned patterns.

Key theories

There is such a wealth of theoretical work in so many disciplines that even the brief overview above may seem a bit disorganised. We therefore briefly mention the few theories that we will use most in our proposal.

  • Kemper’s status – power – reference-group theory of relations. This comprehensive sociological theory also touches on neurobiology and psychology. This makes it compatible with evolutionary theories of human sociality. The appendix has a more elaborate treatment.
  • Heise’s Affect Control Theory (ACT). This theory shares a lot of elements with Kemper’s status – power dynamics but is targeted to small group interactions.
  • Tajfel & Turner’s Social Identity Approach (SIA). This theory elaborates on elements of group and intergroup dynamics, somewhat similar to Kemper’s reference groups.

Work to do in artificial sociality

The synthesis above suggests that sociality is about things that we do, and things that happen between people, in any of the contexts of their lives. Artificial sociality can reproduce sociality using modelling techniques that make life happen: “generative social science” (Epstein, 2006), or, with a newer word, computational social simulation. The task for artificial sociality is first and foremost a modelling task with the ambition to understand sociality-in-action better.

Principles

Ontologically, our perspective is one of consilience. Since there is only one world, findings that align across different sciences are particularly interesting to use in models of sociality. This is the case with the match between neurobiology, emotions, and the status-power theory of relations discussed in the appendix, for example.

Vocabulary

One of our tasks is to generate better understanding and a common vocabulary. At present, many modellers criss-cross the same conceptual space, but with different maps from different reference disciplines.

Open world hypothesis

In order to be able to talk with one another and build a shared vocabulary, researchers should maintain an open world hypothesis: if your model differs from mine, then we can talk. What is the difference? Is it really a difference? What does it allow or disable? Such discussion allows us to enrich our ontology. It is unrealistic, anyway, to expect everyone to agree. Artificial sociality is heavily loaded with worldview, and people disagree on worldviews. This is actually something that artificial sociality should help explain; unfortunately, we can predict that such an explanation will not please everyone.

Realms to model

Our sociality operates in a world with non-social elements such as space, time, objects. On a scale from content-based to relational, we can distinguish four realms that need to be modelled.

  1. Content. The bio-physical and the institutional world, divorced from what people might feel about it.
  2. Cognitions about content. This includes knowledge, opinions, norms, and values that influence our perspectives on the content realm. They are partially conscious – the less so, the more they are shared (and therefore cultural). This realm binds the relational to the non-relational world.
  3. Cognitions about relations. We have ideas about the status (“social importance”) and power that others have, about our own status and power in groups. These are normally unconscious.
  4. Cognitions about our own organism. This includes all kinds of organismic feelings, again often not fully conscious, and may include meta cognitions (e.g. “thinking about thinking”). Emotions link the organism with the relational world, often unconsciously. For instance, an insult is an attack on our status, and may bring the blood to our cheeks.

Artificial sociality requires considering all of these elements. The extent to which we consider each of them can be case-dependent. Depending on the application, some might have to be further elaborated. It is possible to model only one or several of these realms. For instance, Kemper’s theory posits the organism as one of the relevant reference groups, merging 3. and 4. Hofstede’s GRASP world has only sociality (3.) and no content (Gert Jan Hofstede & Liu, 2020). The general-purpose link from emotion, as coherent dynamic social meaning, to content, as objects and actions in institutional frames, proposed in BayesACT may provide a link between 1., 2. and 3. (Schröder, Hoey, & Rogers, 2016). Ultimately, all of the realms will be needed in combination.
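For illustration, the four realms could be reflected in an agent’s state roughly as follows. This is a hypothetical sketch of ours; the attribute names and numbers are invented, not drawn from any of the cited theories.

```python
from dataclasses import dataclass, field

@dataclass
class RealmAgent:
    # Realm 1: content -- the bio-physical and institutional world itself
    position: tuple = (0, 0)
    # Realm 2: cognitions about content -- knowledge, opinions, norms, values
    opinions: dict = field(default_factory=dict)
    # Realm 3: cognitions about relations -- perceived status and power of others
    status_of: dict = field(default_factory=dict)   # e.g. {'alice': 0.7}
    power_of: dict = field(default_factory=dict)
    # Realm 4: cognitions about the organism -- feelings, often not fully conscious
    arousal: float = 0.0

    def receive_insult(self, other):
        # An insult is an attack on status (realm 3); the organism reacts (realm 4):
        # it "brings the blood to our cheeks".
        self.status_of[other] = self.status_of.get(other, 0.5) - 0.1
        self.arousal += 0.3

a = RealmAgent()
a.receive_insult("bob")
```

A model could populate only some of these attributes, mirroring the point above that one may model only one or several realms.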

Theories and realms

Theories from the social sciences tend to concentrate on a subset of these realms. Table 1 indicates this.

Table 1: theories and realms to model (legend: – not included … +++ central to this theory)

Theory | Content | Cognitions on content | Cognitions on relations | Cognitions on organism
Affect Control Theory (Heise, 2013) | – | + | ++ | –
Reasoned action approach (Fishbein & Ajzen, 2010) | + | + | – | –
Social identity approach (H Tajfel & Turner, 1986) | – | + | ++ | ++
Status-power theory of relations (Theodore D. Kemper, 2017) | – | – | +++ | +
BayesACT (Schröder et al., 2016) | + | + | ++ | –

Sources: theory, data, and experience

Models are integration devices, built from a variety of sources. Theory, data, and real-world experience all contribute to the usefulness of models that include artificial sociality (figure 1). The figure positions computational social simulation as a meeting place of these three elements. Different mixtures are possible, depending on the aim of modelling (Edmonds et al., 2019). Models range from purely theory-based ones that can illustrate core concepts, to models developed in participation with stakeholders that reflect real life, to highly complicated, data-fed models that can describe existing data or predict (generalize to) future measurements.

Artificial sociality as we propose it is, in the first instance, a theoretical concept. We believe that it has strong face validity in real life. This is by virtue of the empirical basis and broad scope of the theories involved. Integrating our concepts with data, for instance the never-ending stream of social media data, is a major challenge for the coming years.


Figure 1. Social simulation as a meeting place of theory, data and real life (Gert Jan Hofstede, 2018).

Model architectures

In artificial sociality we cannot get away with ideas only. Implementations are also needed, as functional computer code. In computer code, all the capabilities of our virtual world, and of the agents that populate it, have to be unambiguously specified. This raises the issue of architecture. For instance, do agents have a body, a brain, and a soul? Do groups have common agency, or is that delegated to individuals? If the world is spatial, do we have instinctive reactions to moving objects? Is there “fast and slow” thinking, as per many authors’ writings (e.g. Kahneman, 2011; Zhu & Thagard, 2002; Glöckner & Witteman, 2010)?

Currently, a thousand flowers are blooming in the computational modelling of human behaviour. This is a good way to search. We believe that one architecture will not cover all needs; in all likelihood many streams of research will dry up, and we’ll be left with a limited number of rather general-purpose architectures for different purposes. Many existing models and architectures deserve to be taken into account.

State of the art

Artificial sociality, by design, is integrative across its contributing disciplines. Scientists have tried to integrate research on human behaviour and society across disciplines for as long as we know. This has, however, become progressively harder as disciplines have branched. Aristotle was still a polymath, but today this is hardly possible any more.

Some attempts that are meaningful for artificial sociality in our view merit mention here.

Conte and Gilbert and their legacy

In social simulation, the concept of sociality was introduced in the nineteen-nineties. Computer scientists Kathleen Carley and Allen Newell published their extensive essay “The Nature of the Social Agent” in 1994, in which they proposed that, compared with “omniscient” economic agents, social agents have more limited processing capabilities, but a richer social environment. They will turn to socio-cultural cues instead of raw data (Carley & Newell, 1994). Cognitive psychologist Rosaria Conte and sociologist Nigel Gilbert are founders of the notion of “artificial societies” (Gilbert & Conte, 1995). They set out to define artificial sociality as a challenge for computational social simulation. Their reflections were crowded out of the public eye by the advent of the Web, and the increasing ubiquity of data as sources for modelling. Yet computational social modelling has remained focused on human social behaviour.

Flache et al., in a position paper, explicitly dedicate their work to Conte, who died prematurely in 2016 (Flache et al., 2017). They plead for more research on the question that Robert Axelrod posed in 1997: “If people tend to become more alike in their beliefs, attitudes, and behavior when they interact, why do not all such differences eventually disappear?” (Robert Axelrod, 1997). Flache et al. discuss several models, the currency of which is opinions.

Jager also builds on a statement by Conte when he pleads for “EROS”, that is, more attention to social psychology in computational social simulation (Wander Jager, 2017). He reviews a number of theories that have been used in social simulation, none of which includes emotions. The most generic of these might be Ajzen’s Theory of Planned Behaviour, the most recent version of which its authors call the Reasoned Action Approach (Fishbein & Ajzen, 2010).

Other efforts

Work on active inference and a hierarchical (deep) Bayesian probabilistic view of the mind has led to more integrative models including of interpersonal inference (Moutoussis, Trujillo-Barreto, El-Deredy, Dolan, & Friston, 2014) and culture (Veissière, Constant, Ramstead, Friston, & Kirmayer, 2020). These models consider a long-standing view of human intelligence as being largely predictive rather than descriptive. That is, the mind is set up to seek information, and to interpret evidence, in ways that confirm prior beliefs.

A mid-range approach to sociality is taken by Shults and colleagues. They take domain-directed social scientific theory and develop agent-based models with agents embodying the theory. These tend to contain instantiated sociality elements such as fear. This includes terror management theory (Shults, Lane, et al., 2018) and intergroup dynamics under anxiety (Shults, Gore, et al., 2018).

Some computational modellers have built models of human behaviour suiting their purpose. This includes empathic agents, care robots, and the military. These models include some sociality, without necessarily using that term. Space forbids dealing with them at length. Interesting pointers are (Balke & Gilbert, 2014; Schlüter et al., 2017).

Consumat architecture

An example of an architecture that is appealing because of its simplicity, while including both content and a bit of sociality, is the Consumat (Wander Jager & Janssen, 2012; Wander Jager, Janssen, & Vlek, 1999). Consumats live in one group or network, possibly but not necessarily in a spatial world, in which they repeatedly take decisions about which they are more or less certain. In addition, they are more or less “happy” with the outcomes of their previous decisions. “Happiness” and “uncertainty” combined determine what they will do next: repeat their behaviour, imitate someone else, deliberate on content issues, or engage in a more elaborate social comparison. The currency of “happiness” is not further specified, which makes the Consumat model quite flexible. Embedding fundamental concepts of sociality (e.g., allusions to reference groups and uncertainty) while taking the individual as its unit of concern, the Consumat is a flexible starting point for richer developments of artificial sociality that place a stronger emphasis on the structures in which agents are embedded. It has found quite a few applications. A more elaborate follow-up to the Consumat, called Humat, is currently being developed and published.
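The Consumat’s core switch can be sketched as follows. The threshold values and names are illustrative, not taken from the cited papers; in applications they are parameters of the model.

```python
def consumat_mode(satisfaction, uncertainty,
                  min_satisfaction=0.5, max_uncertainty=0.5):
    """Map 'happiness' (satisfaction) and uncertainty to a Consumat strategy."""
    satisfied = satisfaction >= min_satisfaction
    certain = uncertainty <= max_uncertainty
    if satisfied and certain:
        return "repeat"       # habit: do what worked last time
    if satisfied and not certain:
        return "imitate"      # copy the behaviour of similar others
    if not satisfied and certain:
        return "deliberate"   # reason about the content of the options
    return "compare"          # elaborate social comparison

print(consumat_mode(0.8, 0.9))  # → imitate: happy but uncertain, so look at others
```

The social modes (imitation and comparison) are triggered by uncertainty, which is where the allusion to reference groups enters.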

FAtiMA

An engineering approach to sociality with considerable fidelity is FAtiMA (Mascarenhas et al., 2021). This open-source toolkit for social agents and robots includes prestige / status dynamics and social emotions. Status dynamics are called “social importance” in FAtiMA (Mascarenhas, Prada, Paiva, & Hofstede, 2013).

GRASP

The GRASP meta-model for sociality (Gert Jan Hofstede, 2019) is an attempt at capturing the bare essentials of sociality: Groups, Rituals, Affiliation, Status, and Power. GRASP is deliberately content-free. Its relational currency is status and power. It is based on the works of Kemper mentioned here, and on Hofstede’s and Minkov’s work on national cultures. Culture modifies the rules of the status-power action choices (G. Hofstede et al., 2010; Gert Jan Hofstede & Liu, 2019). A showcase model using GRASP, GRASP world (Gert Jan Hofstede & Liu, 2019; Gert Jan Hofstede & Liu, 2020), pictures the longevity of social groups as a function of the ease with which agents can leave a group in which they are subjected to power or receive insufficient status. The resulting patterns resemble social dynamics in different cultural environments.
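As a caricature of the mechanism just described, group longevity could be tied to the ease of exit. This is our own toy sketch with invented thresholds, not the published GRASP world model.

```python
import random

def wants_to_leave(status_received, power_suffered,
                   status_need=0.4, power_tolerance=0.3):
    """An agent considers exit when the status it receives is insufficient,
    or the power used against it exceeds its tolerance."""
    return status_received < status_need or power_suffered > power_tolerance

def group_longevity(members, exit_ease, steps=100, seed=42):
    """Count how many of `steps` rounds the group survives.
    `members` are (status_received, power_suffered) pairs;
    `exit_ease` in [0, 1] stands for how easy the culture makes leaving."""
    rng = random.Random(seed)
    for t in range(steps):
        members = [m for m in members
                   if not (wants_to_leave(*m) and rng.random() < exit_ease)]
        if len(members) < 2:
            return t  # the group has dissolved
    return steps
```

With satisfied members the group persists indefinitely; with dissatisfied members, the higher the ease of exit, the faster the group dissolves, echoing the culturally distinct patterns mentioned above.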

Contextual Action Framework (CAFCA)

The CAFCA meta-model (figure 2) makes it possible to disentangle levels of sociality and context. It was created as an extension of Homo economicus models, and can be used to classify existing model ontologies. Sociality implies moving towards the bottom right of the framework. CAFCA shows how far we still have to travel. One could extend it further: a relational perspective is not yet included, nor is a multi-group world.


Figure 2: CAFCA, the Contextual Action Framework (Elsenbroich & Verhagen, 2016).

We can conclude that in response to Conte’s and Gilbert’s challenge, explicit opinions have received a lot of attention in computational social simulation, but emotions and feelings have not. We believe that this still leaves some phenomena unexplained. Opinions need not always be taken at face value, but can be manifestations of social feelings and emotions, e.g. love for one’s group. Computational agents are still often “autistic”, whereas real people have sociality at their core (Dignum, Hofstede, & Prada, 2014). Sociality can give them “biases”, “perspectives”, or “relational rationality” (Gert Jan Hofstede, Jonker, Verwaart, & Yorke-Smith, 2019) that can be derived from various theories.

Bayesian Affect Control Theory (BayesACT)

BayesACT is a dual-process model that unifies decision-theoretic deliberative reasoning with intuitive reasoning based on shared cultural affective meanings in a single Bayesian sequential model (Hoey, Schröder, & Alhothali, 2016; Schröder et al., 2016). Agents constructed according to this unified model are motivated by a combination of affective alignment (intuitive) and decision-theoretic reasoning (deliberative), trading the two off as a function of the uncertainty or unpredictability of the situation. The model also provides a theoretical bridge between decision-making research and sociological symbolic interactionism. BayesACT is a promising new type of dual-process model that explicitly and optimally (in the Bayesian sense) trades off motivation, action, beliefs and utility, and integrates cultural knowledge and norms with individual (rational) decision-making processes. Hoey, MacKinnon and Schröder (2021, in press: “Denotative and Connotative Management of Uncertainty: A Computational Dual-Process Model”, Judgment and Decision Making, 16(2)) have shown how a component of the model is sufficient to account for some aspects of classic cognitive biases about fairness and dissonance, and have outlined how this new theory relates to parallel constraint satisfaction models.
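As a toy illustration of that trade-off, an agent might score actions by mixing utility and affective deflection according to situational uncertainty. This is our own one-line caricature, not the actual Bayesian sequential model; the direction of the trade-off (more uncertainty shifting weight to affective alignment) is our assumption about the dual-process account.

```python
def choose_action(actions, utility, deflection, uncertainty):
    """Toy trade-off between deliberative and intuitive reasoning.

    utility(a):    decision-theoretic payoff of action a (deliberative pole)
    deflection(a): distance of a from shared cultural affective meanings
                   (intuitive pole: low deflection = affective alignment)
    uncertainty:   in [0, 1]; assumed here to shift weight from utility
                   towards affective alignment as the situation becomes
                   less predictable.
    """
    def score(a):
        return (1.0 - uncertainty) * utility(a) - uncertainty * deflection(a)
    return max(actions, key=score)

# A profitable but culturally deviant action wins only in a predictable situation:
payoff = {"conform": 0.2, "defect": 1.0}
deviance = {"conform": 0.1, "defect": 2.0}
print(choose_action(["conform", "defect"], payoff.get, deviance.get, uncertainty=0.0))  # → defect
print(choose_action(["conform", "defect"], payoff.get, deviance.get, uncertainty=1.0))  # → conform
```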

Proposal: a relational world

We now put forward our own proposal for an architecture, not because we believe this is the only way to go, but in order to give an example of where a more radical take on sociality can lead.

Theory base

Theory versus data

We assume that data provide no more than a partial perspective on the phenomenon from which they are captured. Only in concert with a theoretical concept do they attain meaning. For instance, consider today’s vast quantities of data on social media usage. Our communication on social media does not reflect all of our relations. Linking data to Kemper’s theory, we presume that people will use social media to claim status (e.g. show pictures of successes and important rituals), to confer status (e.g. like and follow others), and to use power (e.g. insult high-status others). There are also many relational motives that will not show up in social media. People will hide shameful actions (e.g. failing, being exposed); they will shield some of their behaviours from some of their reference groups (e.g. their parents or spouses). People may fear the power of their own government, and stay away from some social media. Often, people will seek information and interpret evidence in a way that confirms group acceptance, rather than in a way that confirms facts (Mercier & Sperber, 2017). Which members of a society go on which social media, and just how they select which things to show and which not to, depend upon relational dynamics that the data cannot reveal without help from theory. A theory is needed about the “why” of behaviour.

Building blocks: Complicated vs complex

We are aware of the tension between the complicatedness of a model’s structure and the complexity of its outcomes (Sun et al., 2016). According to these authors, complex behaviour can be represented either by a model with a few simple primitives, or by a very elaborate model. Our intuition is that a bottom-up approach with a strong theory base and a simple ontology is most promising. An analogy can illustrate this (figure 3). A complicated model architecture tends to be difficult to adapt. The price to pay for a simple, adaptive architecture is abstraction. To build a valid, versatile model with few primitives, just a few types of building blocks could suffice; only, one needs a great many of them.


Figure 3: giraffe models in Lego. From left to right: 1) a valid model made of a single complicated piece; 2) a simple model with just 5 different rectangular shapes; 3) a more complicated model with 15 different shapes of varied form; 4) a simple model with few shapes but many pieces.

From theory to model

Implementing a social-science theory in a computational model is by no means straightforward. Typically, theories leave many elements unspecified, and model designers have to fill the gaps. For instance, the Social Identity Approach (SIA) has been used in computational modelling. It models agents as enacting a particular social role or identity that is context-dependent (institutionally) and emotionally meaningful. From reviewing such papers, we learned how difficult it is to model a “complete” social world. We have yet to find a single model that captures SIA in its full richness and can actually be replicated. To accommodate this, the network project SIAM (SIAM: Social Identity in Agent-based Models, https://www.siam-network.online/) uses a toolbox approach, offering a set of formalizations that can then be specified for particular purposes. Still, this is challenging. We believe that interdisciplinary work yields substantial benefits here.

Which theory

In selecting theories to work with, a thousand flowers can blossom. In our case, for creating models with relational agents that have a simple ontology but great range, we believe that Kemper’s work and the SIA mentioned before could provide the Lego blocks. Both place individuals (called “agents” in what follows) in a rich world consisting of many groups with salience mechanisms. SIA gives agents both an individual identity and social identities. Kemper has no self, only reference groups, that is, groups existing in the mind of an individual, not necessarily in the outside world. For Kemper, the organism with its needs and urges is one such reference group. Crucially, Kemper additionally gives the agents status and power motives; we believe these to be crucial for social agents. In SIA, agents act upon motives too (such as the need for positive distinctiveness and self-esteem), while status is achieved through comparison with outgroups. Heise’s Affect Control Theory (ACT) is similar to Kemper here, and more articulate in describing verbal communication, but it works for single groups only. Efforts at broadening ACT to multiple, overlapping and interacting groups are currently underway (Hoey & Schröder, 2015).

In what follows we mainly lean on our interpretation of Kemper, as the most generic and simplest of these theories.

What, how and why

The “what” of our relational world consists of individuals and groups. A person can belong to several groups, and not everyone necessarily shares the same beliefs about who belongs to which group. Furthermore, there will be an environment with certain affordances; we will come to this later.

Basic rules for the “why” are:

  • What individuals do is determined by the groups with which they affiliate. Those groups act as reference groups.
  • People’s choices depend on what they believe their reference groups want them to do.
  • These beliefs are about status and power; they can be about individuals, or about groups.
  • Status beliefs are about the status worthiness of actions, people, and groups; and about appropriate ways of claiming and conferring status.
  • Power beliefs are about the power of people and groups; and about appropriate ways of using power.
  • For obtaining what they want, people can choose between status tactics and power tactics.
  • Status tactics involve claiming and conferring status. As long as conferrals exceed claims, they tend to be pleasant, and create trust. If claims exceed conferrals, people will feel insulted, and power tactics will be used.
  • Power tactics involve coercion and deceit, and tend to lead to resentment and repercussions, except where power is perceived as legitimate.
  • In practice, power use is often couched as status conferral; misunderstandings can also occur.

A model with these primitives would qualify as a GRASP model. The fine print of all of these rules – what is considered appropriate for whom, and in what circumstances – depends upon culture (Gert Jan Hofstede, 2013). This implies that the actual status-power game is quite complex and varied, even though there are few primitives.
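As a toy illustration, the status-power rules above could be sketched in code. The class, the numeric scales and the outcome strings below are our own invention for this sketch, not part of Kemper’s theory or of any existing GRASP implementation:

```python
class Agent:
    """Hypothetical relational agent following the status-power rules above."""

    def __init__(self, name):
        self.name = name
        self.trust = {}        # trust felt towards other agents
        self.resentment = {}   # resentment felt towards other agents

    def interact(self, other, claim, conferral, legitimate_power=False):
        """One exchange: `claim` is the status this agent claims from
        `other`; `conferral` is the status `other` actually confers."""
        if conferral >= claim:
            # Status tactics succeed: conferrals meet claims, trust grows.
            self.trust[other.name] = self.trust.get(other.name, 0) + (conferral - claim)
            return "status tactics: trust grows"
        # Claims exceed conferrals: the agent feels insulted, power follows.
        if legitimate_power:
            return "power tactics: compliance (power seen as legitimate)"
        other.resentment[self.name] = other.resentment.get(self.name, 0) + (claim - conferral)
        return "power tactics: resentment and repercussions"

a, b = Agent("a"), Agent("b")
print(a.interact(b, claim=2, conferral=3))  # a pleasant exchange
print(a.interact(b, claim=5, conferral=1))  # an insult, leading to power use
```

Even this caricature shows the asymmetry the rules describe: generous conferrals accumulate trust, while excessive claims accumulate resentment in the other party.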

The “how” would depend on the context, because the primitives need to be bound to instantiations. Here, the four “elementary forms of sociality” of anthropologist Alan Page Fiske could be useful. This may require a bit of introduction. Fiske, having carried out field studies in various civilizations, proposed four “elementary forms of sociality” (Fiske, 1992): communal sharing, authority ranking, equality matching and market pricing. With these elementary types, Fiske aims to bring unity to the myriad of psychological theories. He says people use these four structures when they “transfer things”, and, interestingly, they correspond with the four scales on which “things” can be compared: nominal, ordinal, interval and ratio. He presents a wide range of issues and situations where the four forms obtain. They are not mutually exclusive: we might use communal sharing in one setting, authority ranking in another, and market pricing in yet another. The balance will depend on the issue or group and on culture.

If we assume that the thing to exchange is social importance or, in Kemper’s sense, status, then the following obtains:

  • Under communal sharing, it is the group, not the individual, that is the unit of status accordance, claiming, and worthiness.
  • Under authority ranking, there is a clear hierarchy in social importance, and status accords, as well as power exertion, are asymmetric based on ascription. “Quod licet Iovi, non licet bovi” (“What the god Jupiter may do, a cow may not”).
  • Under equality matching, each individual or group is equally worthy, and should claim and be accorded the same amount of status.
  • Under market pricing, there is no need for a moral stance, since the market decides.

The likelihood of these four forms is obviously culture-related. In particular, two of Hofstede’s dimensions seem relevant (see figure 4). These forms could be used directly as model mechanisms, or their emergence in agents could be studied based on Hofstede’s “software of the mind”.
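The pairing of Fiske’s forms with measurement scales, and a status-allocation rule per form, might be sketched as follows. All names and numbers here are illustrative assumptions of ours, not taken from Fiske:

```python
# Fiske's four forms paired with the scale on which "things" (here: status)
# can be compared.
FORMS = {
    "communal sharing":  "nominal",   # in-group vs. out-group only
    "authority ranking": "ordinal",   # higher vs. lower in the hierarchy
    "equality matching": "interval",  # differences must balance out
    "market pricing":    "ratio",     # exchange at a market rate
}

def accord_status(form, members):
    """Toy status-allocation rule per form (illustrative numbers only).
    `members` maps a name to a dict with 'rank' and 'contribution'."""
    if form == "communal sharing":
        # Status attaches to the group as a whole: everyone shares in it.
        return {name: 1.0 for name in members}
    if form == "authority ranking":
        # Asymmetric accords based on ascribed position.
        return {name: 1.0 / m["rank"] for name, m in members.items()}
    if form == "equality matching":
        # Each individual is equally worthy.
        share = 1.0 / len(members)
        return {name: share for name in members}
    if form == "market pricing":
        # The "market" decides: status proportional to contribution.
        total = sum(m["contribution"] for m in members.values())
        return {name: m["contribution"] / total for name, m in members.items()}

crew = {"ann": {"rank": 1, "contribution": 3.0},
        "bob": {"rank": 2, "contribution": 1.0}}
print(accord_status("authority ranking", crew))  # ann outranks bob
```

A culture-dependent model could then weight these four rules differently per group or per issue, in line with the Hofstede dimensions in figure 4.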


Figure 4: Likelihood of Fiske’s elementary forms (quadrants) across Hofstede’s dimensions of culture (axes). Market pricing is indifferent to power distance.

Readers are invited to consider current events in their lives, or in the political arena, through a relational lens. Once one distinguishes the silent voice of reference groups, and the dynamics of mutual status and power use, one can also see historical continuity within the relational lives of people, groups, companies, and nations.

Proposed architecture

Figure 5 shows what we propose are the key ingredients of our relational architecture for artificial sociality. There is a correspondence between the concepts in its four columns. The left column reflects the micro level of individual operation, at the level of the organism, operationalising emotions and related individual-centred concepts. To our mind, and as put forth in this paper, the most universal Lego blocks of artificial sociality are relational. In figure 5 we use Tönnies’ term Gemeinschaft for this column, which contains Kemper’s concepts of status, power and reference groups; alternatives with similar relational content could be chosen. This relational column is always required. Depending on the application, the concepts in one or more of the other columns are needed as well. If they are included, they have to be mapped onto the relational concepts, making status, power and reference groups the basic operational concepts driving the model’s dynamics. For instance, empathic agents need to feel and communicate emotions. Social robots need proxemics, i.e. knowledge of the emotional impact of closeness, motion and posture. Models that explain phenomena such as tribalism require individual-level concepts in addition to relational ones. As for scale, simulations that model social complexity at the societal level and are concerned with the effects of policies require Gesellschaft concepts such as norms and institutions.

Examples of such models include reactions to the behavioural constraints imposed as part of Covid-19 countermeasures across nation states, with vastly varying responses based on social structure and influence (expressed in the relational column) and on individual motivations of various kinds, including perceived challenges to liberty, economic well-being, and so on. Whatever the configuration of sociality elements, we require a conceptual mapping to the physical world, such as the operationalisation of status and power in currencies relevant to the society of concern (e.g., status symbols).

Figure 5 is organised into columns. The leftmost column is organismic in an objective sense, but subjectively perceived. The middle two are intersubjective, continually construed by people in interaction, although things in the Gesellschaft column tend to be perceived by many as objective (Searle, 1995). The rightmost column is about the physical world, considered objective but often perceived from a subjective, or rather intersubjective, stance.

The implication of this position is that a direct mapping from the physical world to emotions, or from money to behaviour, will not yield versatile models. Data-based models without a strong, theoretically sound social model, using, e.g., financial actions to predict future economic behaviour, or past voting to predict future voting, might serve specific application cases, but their range of application across cases and time will be limited. More importantly, such models lack the explanatory potential that conceptual models of sociality can offer.


Figure 5: building block concepts for artificial sociality.

Conclusion

This position paper argues for a biological, relational turn in artificial human sociality. Such a turn will lay a foundation that can reconcile case-specific or discipline-specific model ontologies.

Artificial sociality has the potential to greatly enhance all knowledge technologies that impinge on the social world, including e.g. social robotics and body-worn AI devices.

In this paper we mainly aim to increase the usefulness of computational models of socio-ecological, socio-economic and socio-technical systems by tackling their social aspects on a par with the other aspects, in a foundational, thorough way.

Many theories, in a great many disciplines, could possibly be used in constructing ontologies for artificial sociality. We provide some pointers and examples. We also present ideas for a “relational world” that could inspire modellers.

There is a lot of work to do.

Appendix: contributions to sociality from various disciplines

The appendix is sorted, admittedly somewhat arbitrarily, according to whether a field of research focuses more on the “What”, the “Why” or the “How” of behaviour. Within those three, the order is alphabetic.

Mainly the “what”

Anthropology

Computational simulations have been made of historic civilizations, in which simulated populations live in a simulated environment. This requires a mix of historical data and assumptions, in particular about resources and/or social drives. If the various hypotheses implemented in the models hold, the simulations could throw light on historical contingencies, or even reproduce the actual history. A famous example is the “artificial Anasazi” model by Epstein that “replays” the rise and fall of the Anasazi civilization (Epstein & Axtell, 1996). The agents in this model have no sociality, but are constrained by resources. A recent example is a model of island colonization based on the concept of gregariousness (Fajardo, Hofstede, Vries, Kramer, & Bernal, 2020).

Another contribution from anthropology is the study of typical patterns of human social organization. The work of Alan Page Fiske is interesting in this respect. Fiske’s four “elementary forms of sociality” were mentioned before, in the context of figure 4 (Fiske, 1992). To repeat: communal sharing, authority ranking, equality matching and market pricing.

Institutional Economics

A fundamental feature of humans is our ability to coordinate at scale. Humans can coordinate at group, societal and global levels, both towards shared interests (e.g., the emergence of economic and personal liberties in the French revolution; international treaties such as the Whaling Convention) and, at times, against them (e.g., climate change; see Shivakoti, Janssen, & Chhetri, 2019). In an attempt to identify the causes of the prosperity or demise of societies, New Institutional Economics (North, 1990) integrates the many strands of human behaviour, including the ones outlined above. Rooted in our biology and manifested in our psychology, we humans possess “minds as social institutions” (Castelfranchi, 2014) that continuously exercise coordination activities. Institutions, here understood as the “integrated systems of rules that structure social interactions” (Hodgson, 2015), or simply the “rules of the game” (North, 1990), are the catalysts. They include sophisticated constructs such as written contracts and courts, enabling cooperation at scale (Milgrom, North, & Weingast, 1990; North, Wallis, & Weingast, 2009), but also informal arrangements for resource governance (Ostrom, 1990), pointing to opportunities to address social dilemmas such as the Tragedy of the Commons (Hardin, 1968).

Neurobiology and endocrinology

A model of sociality is more valid to the extent that it fits the evidence about our bodies. This includes the brain of course, with e.g. its mirror neurons that are a vehicle for empathy, but also older physiological systems such as the sympathetic (fear and anger) and parasympathetic (well-being) nervous systems and the digestive system (all kinds of impulses, e.g. mediated by our gut microbiome). The recent semantic pointer theory of emotions (Kajic, Schroeder, Stewart, & Thagard, 2019) capitalizes on the mathematical apparatus of Affect Control Theory, discussed above, to embed the sociality of affective experience in neurobiological mechanisms through a neurocomputational simulation model.

Tönnies’ Gemeinschaft and Gesellschaft

A fundamental sociological theme structuring the arena of social behaviour is the dialectic between two forms of social organisation that serve as anchor points for an integrated artificial sociality: Gemeinschaft (community) and Gesellschaft (society), introduced by Tönnies (1963 [1887]) and subsequently popularised by Weber. This distinction was part of an extended debate in early sociology about the core concepts of societal structure. Gemeinschaft characterises the social ties observable in a social setting as primarily based on personal relationships, enacted roles and associated values, as present in the prototypical peasant societies prevalent at the time. Interaction in those societies was based on what Tönnies referred to as natural will (“Wesenwille”). Gesellschaft, in contrast, reflects the depersonalised counterpart, in which individuals act indirectly on the basis of assigned roles, formal rules, processes and values: the stereotypical structures associated with urban societies. Fittingly, Tönnies characterised the motivation for such interaction as rational will (“Kürwille”), encoded in the roles individuals enact.

Like Durkheim’s differentiation between mechanical and organic solidarity (Durkheim, 1984), the concepts are stereotypical of the themes and worldviews that structured debate at the time. Without drawing on the particularities of either variant of this duality[1], they bear essentials that still apply to group dynamics found in modern societies.

Where behaviour is structured and planned, leading agents to create rules, react to imposed policy or enforce it, the representation of socio-institutional dynamics is of concern. While building on concepts such as status and roles identified in the Gemeinschaft conception, concepts such as rules and governance structures extend beyond the neurobiological and psychological bases of group formation: they are the mechanisms that lead to the depersonalised coordination structures characteristic of the Gesellschaft interpretation of society. In doing so, models of artificial sociality can resemble the characteristics of real-world societies, including “growing” the complexity arising from the systemic interdependencies of actors, roles and resources, and reflect the non-linearity of the behavioural outcomes we observe at scale.

Mainly the “why”

Behavioural biology

Behavioural biology has studied the social behaviour of all kinds of animals, including those that resemble us very much. Frans de Waal stands out for his extensive studies of dominance, politics, reconciliation and pro-sociality among primates (Waal, 2009). Chimpanzees and bonobos in particular can teach us a lot about the sociality of Homo sapiens. Like chimps, we have bands of males fighting one another and dominating females. Like bonobos, we have female solidarity, social sexuality, and male reluctance to use their physical superiority.

Evolutionary biology

Our stress on the deep historical continuity of life in an unbroken chain of reproduction under variation implies that we see evolutionary biology as the mother of the social sciences. Our perspective owes much to the work of authors such as De Waal, who concluded his discussion of morality in all kinds of animals, particularly primates, as follows: “We seem to be reaching a point at which science can wrest morality from the hands of philosophers” (Waal, 1996).

Evolutionary psychologist Turner argued that emotions have become much more important in humans than in other species, because we do not limit our contacts to either one predictable set of others or an anonymous mass (J. E. Turner, 2007). We needed a relational compass. Our expressive, open faces and our gestures developed for that purpose.

Clinical psychology

Clinical psychologist Abraham Maslow gave us the famous model of human needs, derived from observing his patients and seeing an overarching pattern (Maslow, 1970). This model is antithetical to Homo economicus. The problem with it is that it is hard to operationalise. A more proximate concept for human drives is emotion (Frijda, 1986). Emotions have been used quite a bit in computational social simulation, e.g. the cognitive synthesis of emotions in the OCC model (Ortony, Clore, & Collins, 1988). This has been used to underpin empathic computational agents (Dias, Mascarenhas, & Paiva, 2016).

Leadership psychology

The psychology of leadership naturally touches upon sociality. For instance, Van Vugt and von Rueden assert that “leadership has been a powerful force in the biological and cultural evolution of human sociality” (Van Vugt & von Rueden, 2020). Human groups faced with problems of coordination and collective action turn to leadership to achieve collective agency. Different contexts have led to different leadership styles. Leaders can base their role on dominance (coercion) or on prestige (voluntary deference), and people still turn to more dominant leaders in times of stress.

Cultural psychology

Cultural psychology adds a comparative perspective to leadership psychology, showing that leadership styles and follower styles are co-dependent and have historical continuity across generations (G. Hofstede et al., 2010). It is also a discipline in its own right, and it shows how all of social psychology is in fact culture-dependent (Smith, Bond, & Kagitcibasi, 2006).

Social Psychology: Social Identity approach

A set of theories useful for modelling group behaviour and intergroup relations is presented in the Social Identity Approach (SIA). SIA refers to the combination of Social Identity Theory (H. Tajfel, 1982; H. Tajfel & Turner, 1986) and Self-Categorization Theory (Reicher, Spears, & Haslam, 2010; J. C. Turner, Hogg, Oakes, Reicher, & Wetherell, 1987).

SIA proposes that social identification is a fundamental basis for collective behaviour, as people derive a significant part of their concept of self from the social groups they belong to (H. Tajfel, 1978; J. C. Turner et al., 1987). When a person’s identity as a group member becomes salient in a particular context, this affects who is seen as being an ingroup member versus someone outside of the group. When a social identity is salient, group membership becomes an important factor for individual beliefs and behaviour – what is important for the group becomes important for the individual. Moreover, groups have their own social norms and expected behaviours. For instance, thinking as members of collectives changes helping behaviour, as we are more likely to provide help to ingroup members (Levine, Prosser, Evans, & Reicher, 2005).

We deem SIA particularly well suited to modelling sociality, as it spans from the why (motives) to the how (e.g., the salience of social identities that impacts behaviour, dynamics between groups), and connects the micro level of individuals with the macro level of groups, groups within groups, all the way up to societies. SIA has been used in social simulation to address diverse research questions from sociology, opinion dynamics, environmental sciences and more (for two qualitative reviews see Kopecky, Bos, & Greenberg, 2010; Scholz, Eberhard, Ostrowski, & Wijermans, in press). However, up to now there is no standard formalization, and the formalizations found vary widely.

Mainly the “how”

Computational biology

Simulations include work on the emergent patterns occurring in swarms and fish schools, based on the simple positioning rules that fish and birds use while moving. A seminal contribution in the field of behavioural biology was made by the DomWorld model, which showed, among other things, how spatial configurations in primate groups could emerge from dominance interactions (Charlotte K. Hemelrijk, 2000; Charlotte K. Hemelrijk, 2011). Here, the contribution of a behavioural theory involving dominance and fear was crucial. The swarm and DomWorld models are also instances of agent-based models, i.e. computational simulation models in which individuals live in a spatial world. These models have heterogeneity and path dependence, just like real historical developments.
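The dominance interaction at the heart of DomWorld-style models can be sketched in simplified form: the chance of winning depends on relative dominance, and unexpected outcomes shift dominance values the most (the winner-loser effect). The step size and agent names below are illustrative assumptions, and the spatial behaviour of the full model is omitted:

```python
import random

STEP = 0.1  # scaling of dominance updates (illustrative value)

def fight(dom, i, j, rng=random):
    """One simplified dominance interaction between agents i and j.
    `dom` maps agent ids to dominance values."""
    w = dom[i] / (dom[i] + dom[j])      # chance that i wins
    outcome = 1.0 if rng.random() < w else 0.0
    change = STEP * (outcome - w)       # small when the favourite wins
    dom[i] += change                    # winner-loser effect
    dom[j] -= change
    return outcome

rng = random.Random(42)
dom = {"alpha": 0.8, "beta": 0.2}
for _ in range(100):
    fight(dom, "alpha", "beta", rng)
print(dom)  # dominance values after 100 interactions
```

Because an expected win changes little while an upset changes much, repeated interactions tend to differentiate and then stabilise a hierarchy, which is the path-dependent feedback the paragraph above refers to.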

Computational sociology

Sociologists have been at the origin of artificial sociality avant la lettre. In 1971, mathematical sociologists Sakoda and Schelling published models showing self-organization in societies resulting in unintended but robust collective patterns. The history of these models was recently traced by Hegselmann (2017). Computational sociologists have followed in their tracks, helped by the advent of simulation software (Hegselmann & Flache, 1998; Deffuant, Carletti, & Huet, 2013). Recent computational models of this kind include emotions and their spread (Schweitzer & Garcia, 2010).
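Schelling's checkerboard model remains a compact illustration of such self-organization. The following minimal sketch, in which the threshold, grid size and move cap are our own arbitrary choices rather than Schelling's original settings, moves unhappy agents until a stable, typically clustered pattern settles:

```python
import random

def neighbours(grid, x, y, n):
    """The eight neighbouring cells on a torus (wrap-around grid)."""
    return [grid[(x + dx) % n][(y + dy) % n]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def step(grid, n, threshold=0.3, rng=random):
    """Move one unhappy agent to a random empty cell.
    Returns True if someone moved, False if everyone is content."""
    empties = [(x, y) for x in range(n) for y in range(n) if grid[x][y] is None]
    if not empties:
        return False
    for x in range(n):
        for y in range(n):
            kind = grid[x][y]
            if kind is None:
                continue
            near = [k for k in neighbours(grid, x, y, n) if k is not None]
            # Unhappy if fewer than `threshold` of the neighbours are alike.
            if near and sum(k == kind for k in near) / len(near) < threshold:
                ex, ey = rng.choice(empties)
                grid[ex][ey], grid[x][y] = kind, None
                return True
    return False

# A small world with two kinds of agents ("A"/"B") and empty cells (None).
rng = random.Random(1)
n = 10
grid = [[rng.choice(["A", "B", None]) for _ in range(n)] for _ in range(n)]
moves = 0
while step(grid, n, rng=rng) and moves < 2000:
    moves += 1
```

The unintended-but-robust point is that even this mild preference (30% like neighbours) typically produces far more segregation than any individual agent wants, which is why the model became emblematic for emergence in social simulation.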

Developmental psychology

Developmental psychologists show how, during infancy, childhood and puberty, people acquire a more varied concept of the social world. For instance, rough-and-tumble play peaks in boys at the onset of adolescence (G.J. Hofstede, Dignum, Prada, Student, & Vanhée, 2015); among Dutch adolescents a nested set of reference groups develops, and girls are more prosocial overall than boys in a dictator game (Groep, Zanolie, & Crone, 2019; Güroglu, Bos, & Crone, 2014).

Economics

Economics gave us the concept of the profit-maximizing Homo economicus, useful as a standard against which to compare actual human behaviour in contexts where “profit” can be defined. Not all contexts are like that, which is why behavioural economist Richard Thaler predicted that “Homo economicus will become more emotional” (Thaler, 2000). Experiments in behavioural economics and game theory have now shown that people have relational motives that moderate their actions and often lead to “non-rational” behaviour that may be heavily culturally biased (Henrich et al., 2005). This is an important finding, because if the pleasantly simple Homo economicus model does not hold in reality, then what is the alternative?

Human motivation: Heise’s Affect Control Theory

Sociologists have also studied universals of human social motivations, either in small groups (Heise, 2013) or more generically (Theodore D. Kemper, 2017).

Heise posited Affect Control Theory (ACT), a relational theory of how people in small groups maintain relations. According to ACT, every concept has not only a denotative meaning but also an affective meaning, or connotation, that varies along three dimensions: evaluation (goodness versus badness), potency (powerfulness versus powerlessness), and activity (liveliness versus torpidity). Heise’s work has recently been elaborated upon in social simulation (Heise, 2013) and combined with decision-theoretic (rational) reasoning models (Hoey et al., 2018).
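In ACT, the mismatch between fundamental affective meanings and the transient impressions an event creates is quantified as “deflection”, which actors are motivated to minimize. A minimal sketch, with EPA profiles invented for illustration rather than taken from ACT's empirical dictionaries:

```python
def deflection(fundamental, transient):
    """ACT quantifies how much an event disturbs affective meanings as the
    squared distance between fundamental and transient EPA profiles
    (evaluation, potency, activity)."""
    return sum((f - t) ** 2 for f, t in zip(fundamental, transient))

# Made-up EPA profiles for the identity "teacher".
teacher_fundamental = (1.5, 1.4, 0.3)        # good, fairly potent, calm
teacher_after_insulting = (-0.8, 1.0, 1.2)   # how an insulting act leaves them seeming

d = deflection(teacher_fundamental, teacher_after_insulting)
# High deflection: the event clashes with the identity "teacher", so
# ACT predicts restorative behaviour that reduces the deflection.
```

This is the apparatus the semantic pointer theory of emotions mentioned above builds on, which is one reason ACT has proven comparatively easy to formalize.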

Human motivation: Kemper’s relational world

Kemper, who sometimes worked with Heise, developed a model of human drives that is similar but less operationalized, and wider in scope. It distinguishes two major, empirically derived dimensions, having to do with coerced versus voluntary compliance: power and status. Kemper’s word “status” is thus not a measure of power but, in a sense, its opposite: it is a measure of not needing power. It has been dubbed “social importance”, which captures the meaning but is lengthy (Mascarenhas et al., 2013). Readers will recognize these dimensions as the leadership styles named dominance and prestige above, and the connotations of goodness and powerfulness in Heise’s theory. Kemper used these two concepts to underpin a generic theory of emotions, discussed further down. He extended the idea into a “status-power theory of relations” that also involves group life (Theodore D. Kemper, 2011). Recently, he wrote a concise version of his theories that is amenable to computational modelling (Theodore D. Kemper, 2017). In a nutshell, his theory posits that all people live in a status-power relational world. Status comes in many currencies: love, respect, attention, applause, financial rewards, sexual favours, or a thousand other things large and small. People strive to attain these things by “claiming status”, through actions, nonverbal behaviours, clothes, appearance, hobbies, exploits, or formal roles. This position paper, for instance, constitutes a status claim by its authors, in the currency of scientific credibility.

People thus strive for status. Yet they are not just selfish, but also motivated by love and affection to “confer status” upon others they deem worthy, or even upon heroes, symbols, deities, or groups. One person’s status worthiness is another one’s motive for conferring status. Status is thought to be a key driving factor in sustainable/durable inequality (Ridgeway, 2019).

When status claims fail, or when love is unrequited, people may respond with sadness or with anger. In the latter case they might try to obtain the denied items by coercion: “power”. How to play the status-power game in life is something people learn in childhood, in a conjunction of nature and nurture. The fine print of the status-power game is cultural. For instance, some societies put a lot of value on power as a source of status, others do not; some societies divide status worthiness equally across people, others do not.

Two scientists who linked their work to other disciplines could form an important source of inspiration for advances in sociality: Theodore Kemper and Antonio Damasio.

Socio-psycho-neurology: Kemper

Sociologist Theodore D. Kemper was mentioned above. He proposed a “Social interactional theory of emotions” that explicitly integrates socio-physiology of emotions, including work on the fit between neurophysiology and his own status-power model of relations (Theodore D Kemper, 1978). This is known in the literature as the “autonomic specificity hypothesis”, and Kemper’s theory supported it strongly, by linking neurotransmitters of the sympathetic nervous system with unpleasant events involving status loss (noradrenaline) and subjection to power (epinephrine). Acetylcholine, released by the parasympathetic nervous system, was associated with fulfilled status and power needs.

Kemper’s work was reviewed by sociologists with awe and admiration, but also with disbelief (Fine, 1981). It was then largely forgotten. Recent work lends support to the specificity hypothesis once more, but without using Kemper’s theory or integrating the findings across disciplines (McGinley & Friedman, 2017). Evidently, Kemper was ahead of his time. We believe his work is still innovative and important for the way in which it links neurobiology and sociology. According to Kemper, emotions tell their bearer whether survival is being facilitated (well-being signifies adequate status and power) or threatened (depression and fear signify reduced status or the threat of others’ power) by events. Emotions are felt by individuals and carried by hormones, but induced by social situations involving relations between people. This is not to say that artificial sociality should include neurobiology. The importance of Kemper’s work is that it links disciplines operating at different levels of analysis, and shows the neurological roots of status and power motives.

Neuroscience: Damasio

Neuroscientist Damasio (2018) covers similar ground to Kemper, but approaches it from the opposite direction. Having noticed during his career that people are driven by more than their brains, he investigates the role of “feelings” in human cultural activity. Feelings, for Damasio, include the avoidance of pain and suffering, and the pursuit of well-being and pleasure. They are more bodily, and less articulate, than emotions. For instance, “ache” is a feeling, “shame” is an emotion; feelings and emotions often co-occur. Damasio finds that feelings are not a recent invention of evolutionary history, but are manifest in any single-celled organism. He argues that any organism must maintain homeostasis of its inner environment in order to stay alive: “Feelings are the mental expressions of homeostasis” (ibid., p. 6). Since all of our ancestors across a billion years of evolutionary history have had to maintain homeostasis in order to reproduce, “homeostasis, acting under the cover of feeling, is the functional thread that links early life-forms to the extraordinary partnership of bodies and nervous systems [of ourselves]”. Feelings are a primitive, powerful mechanism: we feel with our skins and our guts. Brains are just the latest addition to the organismic arsenal for maintaining homeostasis.

Damasio then turns to the social world: “In their need to cope with the human heart in conflict, in their desire to reconcile the contradictions posed by suffering, fear, anger, and the pursuit of well-being, humans turned to wonder and awe and discovered music making, painting, dancing and literature. They continued their efforts by creating the often beautiful and sometimes frayed epics that go by such names as religious belief, philosophical enquiry, and political governance.” (ibid., p. 8).

The impact of Damasio’s work is to downplay the role of intellect and mind in the shaping of collective behaviours, in favour of feelings. Damasio legitimizes gut feelings as motivators. It does not take much imagination to summarize his picture of feelings as a status-power world in the sense described by Kemper: having adequate status causes well-being; being confronted with power causes fear. Since the world of feelings and emotions is less complex than the world of ideas, the primacy of the former reduces the number of primitives required to model sociality, compared with a “brainy” world.

Damasio and Kemper together lay a strong foundation of consilience for the work of artificial sociality. Both give a central role to the organism, but not to the “self”. Kemper considers the “self” a superfluous notion; he treats the organism, with its feelings, as only one of the many reference groups that influence a person’s actions. Damasio shows that our organism has a life of its own, only some of which reaches our consciousness.

Acknowledgements

We thank the 150 attendees of the Artificial Sociality track at SocSimFest 2021, many of whom made valuable remarks that helped us.

[1] Durkheim puts a stronger emphasis on the stereotypical micro-level mechanisms in both forms of solidarity, such as enforcement mechanisms.

References

Axelrod, R. (1997). Advancing the Art of Simulation in the Social Sciences. In R. Conte, R. Hegselmann, & P. Terna (Eds.), Simulating Social Phenomena (pp. 21-40): Springer.

Axelrod, R. (1997). The Dissemination of Culture: A Model with Local Convergence and Global Polarization. Journal of Conflict Resolution, 41(2), 203-226. doi:10.1177/0022002797041002001

Balke, T., & Gilbert, N. (2014). How Do Agents Make Decisions? A Survey. Journal of Artificial Societies and Social Simulation, 17(4:13). doi:10.18564/jasss.2687

Beugelsdijk, S., & Welzel, C. (2018). Dimensions and Dynamics of National culture: Synthesizing Hofstede with Inglehart. Journal of Cross-Cultural Psychology, 49(10), 1469-1505. doi:10.1177/0022022118798505

Carley, K., & Newell, A. (1994). The nature of the social agent. Journal of Mathematical Sociology, 19(4), 221-262.

Castelfranchi, C. (2014). Minds as social institutions. Phenomenology and the Cognitive Sciences. doi:10.1007/s11097-013-9324-0

Damasio, A. R. (2018). The Strange Order of Things: Life, Feeling, and the Making of Cultures: Pantheon.

Deffuant, G., Carletti, T., & Huet, S. (2013). The Leviathan Model: Absolute Dominance, Generalised Distrust, Small Worlds and Other Patterns Emerging from Combining Vanity with Opinion Propagation. JASSS, 16(1), 5. http://jasss.soc.surrey.ac.uk/16/1/5.html

Diallo, S. Y., Shults, F. L. R., & Wildman, W. J. (2020). Minding morality: ethical artificial societies for public policy modeling. AI and Society. doi:10.1007/s00146-020-01028-5

Diamond, J. (1999). Guns, Germs and Steel: The fates of human societies: Norton & co.

Dias, J., Mascarenhas, S., & Paiva, A. (2016). FAtiMA Modular: Towards an Agent Architecture with a Generic Appraisal Framework. In Emotion Modeling (Vol. LNCS 8750, pp. 44-56): Springer.

Dignum, F., Hofstede, G. J., & Prada, R. (2014). From autistic to social agents. Paper presented at the 13th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2014.

Durkheim, E. (1984). The Division of Labour in Society.

Edmonds, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., . . . Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3), 6. doi:10.18564/jasss.3993

Edmonds, B., & Moss, S. (2005). From KISS to KIDS — An `Anti-simplistic’ Modelling Approach. Lecture Notes in Computer Science, 3415, 130-144. doi:10.1007/978-3-540-32243-6_11

Eliot, L. (2009). Pink Brain, Blue Brain: How small differences grow into troublesome gaps – and what we can do about it. Boston: Mariner Books.

Elsenbroich, C., & Verhagen, H. (2016). The simplicity of complex agents: a Contextual Action Framework for Computational Agents. Mind & Society, 15(1), 131-143.

Epstein, J. M. (2006). Generative Social Science: Studies in Agent-Based Computational Modeling: Princeton University Press.

Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: social science from the bottom up. Washington D.C.: The Brookings Institution.

Fajardo, S., Hofstede, G. J., Vries, M. d., Kramer, M. R., & Bernal, A. (2020). Gregarious Behavior, Human Colonization and Social Differentiation: An Agent-based Model. JASSS, 23(4), 11. http://jasss.soc.surrey.ac.uk/23/4/11.html

Fine, G. A. (1981). Book review: A Social Interactional Theory of Emotions. Social Forces, 59(4), 1332-1333.

Fishbein, M., & Ajzen, I. (2010). Predicting and Changing Behavior: the Reasoned Action Approach: Psychology Press, Taylor & Francis.

Fiske, A. P. (1992). The four elementary forms of sociality: Framework for a unified theory of social relations. Psychological Review, 99(4), 689-723. doi:10.1037%2F0033-295X.99.4.689

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of Social Influence: Towards the Next Frontiers. Journal of Artificial Societies and Social Simulation, 20(4), 2. http://jasss.soc.surrey.ac.uk/20/4/2.html

Frijda, N. H. (1986). The Emotions. Cambridge: Cambridge University Press.

Gilbert, N., & Conte, R. (1995). Artificial Societies: The Computer Simulation of Social Life: University College London Press.

Glöckner, A., & Witteman, C. (2010). Beyond dual-process models: A categorisation of processes underlying intuitive judgement and decision making. Thinking & Reasoning, 16(1), 1-25.

Groep, S. v. d., Zanolie, K., & Crone, E. A. (2019). Giving to Friends, Classmates, and Strangers in Adolescence. Journal of Research on Adolescence (online first), 1-8. doi:10.1111/jora.12491

Güroglu, B., Bos, W. v. d., & Crone, E. A. (2014). Sharing and giving across adolescence: an experimental study examining the development of prosocial behavior. Frontiers in Psychology, 5(291), 1-13. doi:10.3389/fpsyg.2014.00291

Haidt, J. (2012). The Righteous Mind: Why Good People are Divided by Politics and Religion: Penguin.

Hardin, G. (1968). The Tragedy of the Commons. Science, 162(3859), 1243-1248.

Hegselmann, R. (2017). Thomas C. Schelling and James M. Sakoda: The Intellectual, Technical, and Social History of a Model. JASSS, 20(3), 15. http://jasss.soc.surrey.ac.uk/20/3/15.html

Hegselmann, R., & Flache, A. (1998). Understanding Complex Social Dynamics: A Plea For Cellular Automata Based Modelling. JASSS, 1(3), 1. http://jasss.soc.surrey.ac.uk/1/3/1.html

Heise, D. R. (2013). Modeling Interactions in Small Groups. Social Psychology Quarterly, 76(1), 52-72. doi:10.1177/0190272512467654

Hemelrijk, C. K. (2000). Towards the integration of social dominance and spatial structure. Animal Behaviour, 59(5), 1035-1048. doi:http://dx.doi.org/10.1006/anbe.2000.1400

Hemelrijk, C. K. (2011). Simple Reactions to Nearby Neighbors and Complex social Behavior in Primates. In R. F. J. Menzel (Ed.), Animal Thinking: Comparative Issues in Comparative Cognition (pp. 223-238): MIT Press.

Henrich, J. (2016). The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter: Princeton University Press.

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., . . . Tracer, D. (2005). “Economic man” in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behavioral and brain sciences, 28, 795-855.

Hodgson, G. M. (2015). On defining institutions: Rules versus equilibria. Journal of Institutional Economics. doi:10.1017/S1744137415000028

Hoey, J., & Schröder, T. (2015). Bayesian affect control theory of self. Paper presented at the Proceedings of the AAAI Conference on Artificial Intelligence.

Hoey, J., Schröder, T., & Alhothali, A. (2016). Affect control processes: Intelligent affective interaction using a partially observable Markov decision process. Artificial Intelligence, 230, 134-172.

Hoey, J., Schröder, T., Morgan, J., Rogers, K. B., Rishi, D., & Nagappan, M. (2018). Artificial intelligence and social simulation: Studying group dynamics on a massive scale. Small Group Research, 49(6), 647-683.

Hofstede, G., Hofstede, G. J., & Minkov, M. (2010). Cultures and Organizations, software of the mind: McGraw-Hill.

Hofstede, G. J. (2013). Theory in social simulation: Status-Power theory, national culture and emergence of the glass ceiling. Paper presented at the Social Coordination: Principles, Artefacts and Theories, Exeter. http://www.scopus.com/inward/record.url?eid=2-s2.0-84894173186&partnerID=MN8TOARS

Hofstede, G. J. (2018). Social Simulation as a meeting place: report of SSC 2018 Stockholm. Review of Artificial Societies and Social Simulation, 2018. https://rofasss.org/2018/09/19/gh/

Hofstede, G. J. (2019). GRASP agents: social first, intelligent later. Ai & Society, 34(3), 535-543. doi:https://doi.org/10.1007/s00146-017-0783-7

Hofstede, G. J., Dignum, F., Prada, R., Student, J., & Vanhée, L. (2015). Gender differences: The role of nature, nurture, social identity and self-organization (Vol. 9002).

Hofstede, G. J., Jonker, C. M., Verwaart, T., & Yorke-Smith, N. (2019). The Lemon Car Game Across Cultures: Evidence of Relational Rationality. Group Decision and Negotiation, 28(5), 849-877. doi:10.1007/s10726-019-09630-9

Hofstede, G. J., & Liu, C. (2019). GRASP world. https://www.comses.net OpenABM Computational library. Retrieved from https://www.comses.net/codebases/f1089671-38da-4cca-88c5-711e936b2ada/releases/1.0.0/.

Hofstede, G. J., & Liu, C. (2020). To Stay or Not to Stay? Artificial sociality in GRASP world. In H. Verhagen, M. Borit, G. Bravo, & N. Wijermans (Eds.), Advances in Social Simulation (pp. 217-231). Cham: Springer.

Jager, W. (2017). Enhancing the Realism of Simulation (EROS): On Implementing and Developing Psychological theory in Social Simulation. JASSS, 20(3). doi:10.18564/jasss.3522

Jager, W., & Janssen, M. A. (2012). An updated conceptual framework for integrated modelling of human decision making: The consumat II. Paper presented at the ECCS, Brussel.

Jager, W., Janssen, M. A., & Vlek, C. A. J. (1999). Consumats in a commons dilemma: Testing the behavioural rules of simulated consumers [Press release]. Retrieved from http://www.ppsw.rug.nl/cov/staff/jager/simpaper.pdf

Kahneman, D. (2011). Thinking, fast and slow: Macmillan.

Kajic, I., Schroeder, T., Stewart, T. C., & Thagard, P. (2019). The semantic pointer theory of emotion: Integrating physiology, appraisal, and construction. Cognitive Systems Research, 58(December), 35-53. doi:https://doi.org/10.1016/j.cogsys.2019.04.007

Kemper, T. D. (1978). A social interactional theory of emotions: John Wiley & Sons.

Kemper, T. D. (2011). Status, Power and Ritual Interaction: a Relational Reading of Durkheim, Goffman and Collins: Ashgate.

Kemper, T. D. (2017). Elementary Forms of Social Relations: Status, Power and Reference Groups: Routledge.

Kopecky, J., Bos, N., & Greenberg, A. (2010). Social identity modeling: Past work and relevant issues for socio-cultural modeling. Paper presented at the 19th Conference on Behavior Representation in Modeling and Simulation, Charleston, SC.

Levine, M., Prosser, A., Evans, D., & Reicher, S. (2005). Identity and emergency intervention: How social group membership and inclusiveness of group boundaries shape helping behaviour. Personality and Social Psychology Bulletin, 34(4), 443-453. doi:10.1177/0146167204271651

Mascarenhas, S., Guimarães, M., Santos, P. A., Dias, J., Prada, R., & Paiva, A. (2021). FAtiMA Toolkit–Toward an effective and accessible tool for the development of intelligent virtual agents and social robots. arXiv preprint arXiv:2103.03020. https://arxiv.org/abs/2103.03020

Mascarenhas, S., Prada, R., Paiva, A., & Hofstede, G. J. (2013). Social Importance Dynamics: A Model for Culturally-Adaptive Agents. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Intelligent Virtual Agents (Vol. 8108, pp. 325-338): Springer.

Maslow, A. H. (1970). Motivation and Personality (2nd ed.). New York: Harper & Row.

McGinley, J. J., & Friedman, B. H. (2017). Autonomic specificity in emotion: The induction method matters. International Journal of Psychophysiology, 118, 48-57. doi:https://doi.org/10.1016/j.ijpsycho.2017.06.002

Mercier, H., & Sperber, D. (2017). The enigma of reason: Harvard University Press.

Milgrom, P. R., North, D. C., & Weingast, B. R. (1990). The Role of Institutions in the Revival of the Trade: The Law Merchant, Private Judges, and the Champagne Fairs. Economics and Politics, 2(1), 1–23.

Montesquieu, C. L. d. (1979 [1742]). De l’esprit des lois (Vol. 1). Paris: GF-Flammarion.

Moutoussis, M., Trujillo-Barreto, N. J., El-Deredy, W., Dolan, R., & Friston, K. (2014). A formal model of interpersonal inference. Frontiers in human neuroscience, 8, 160.

North, D. C. (1990). Institutions, Institutional Change, and Economic Performance: Cambridge University Press.

North, D. C., Wallis, J. J., & Weingast, B. R. (2009). Violence and Social Orders: A Conceptual Framework for Interpreting Recorded Human History: Cambridge University Press.

Ortony, A., Clore, G., & Collins, A. (1998). The Cognitive Structure of Emotions. Cambridge, UK: Cambridge University Press.

Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action: Cambridge University Press.

Reicher, S. D., Spears, R., & Haslam, S. A. (2010). The Social Identity Approach in Social Psychology. In M. S. Wetherlell & C. T. Mohanty (Eds.), The SAGE handbook of identities (pp. 45-62). London: Sage.

Ridgeway, C. L. (2019). Status: Why Is It Everywhere? Why Does It Matter? : Russell Sage Foundation.

Schlüter, M., Baeza, A., Dressler, G., Frank, K., Groeneveld, J., Jager, W., . . . Wijermans, N. (2017). A framework for mapping and comparing behavioural theories in models of social-ecological systems. Ecological Economics, 131, 21-35. doi:10.1016/j.ecolecon.2016.08.008

Scholz, G., Eberhard, T., Ostrowski, R., & Wijermans, N. (2021, in press). Social Identity in Agent-Based Models—Exploring the State of the Art.

Schröder, T., Hoey, J., & Rogers, K. B. (2016). Modeling dynamic identities and uncertainty in social interactions: Bayesian affect control theory. American Sociological Review, 81(4), 828-855.

Schweitzer, F., & Garcia, D. (2010). An agent-based model of collective emotions in online communities. The European Physical Journal B, 77(4), 533-545. doi:10.1140/epjb/e2010-00292-1

Searle, J. R. (1995). The Construction of Social Reality. London: Penguin.

Shivakoti, G. P., Janssen, M. A., & Chhetri, N. B. (2019). Agricultural and natural resources adaptations to climate change: Governance challenges in Asia. International Journal of the Commons. doi:10.5334/ijc.999

Shults, F. L., Gore, R., Wildman, W. J., Lynch, C. J., Lane, J. E., & Toft, M. D. (2018). A generative model of the mutual escalation of anxiety between religious groups. Journal of Artificial Societies and Social Simulation, 21(4), 7. http://jasss.soc.surrey.ac.uk/21/4/7.html

Shults, F. L., Lane, J. E., Wildman, W. J., Diallo, S., Lynch, C. J., & Gore, R. (2018). Modelling terror management theory: computer simulations of the impact of mortality salience on religiosity. Religion, Brain & Behavior, 8(1), 77-100.

Smith, P. B., Bond, M. H., & Kagitcibasi, C. (2006). Understanding Social Psychology Across Cultures: Sage.

Sun, Z., Lorscheid, I., Millington, J. D., Lauf, S., Magliocca, N. R., Groeneveld, J., . . . Schulze, J. (2016). Simple or complicated agent-based models? A complicated issue. Environmental Modelling & Software, 86, 56-67.

Swaab, D. F., & Hedley-Prole, J. (2014). We Are Our Brains: A Neurobiography of the Brain, from the Womb to Alzheimer’s: Random House.

Tajfel, H. (1978). Intergroup behaviour. In Introducing social psychology (pp. 401-422). Harmondsworth: Penguin.

Tajfel, H. (1982). Social identity and intergroup relations: Cambridge University Press.

Tajfel, H., & Turner, J. C. (1986). The social identity theory of intergroup behaviour. In S. Worchel & W. G. Austin (Eds.), Psychology of intergroup relations (2nd ed., pp. 7-24). Chicago: Nelson-Hall.

Thaler, R. H. (2000). From homo economicus to homo sapiens. Journal of economic perspectives, 14(1), 133-141.

Tolk, A. (2015). Learning Something Right from Models That Are Wrong: Epistemology of Simulation. In L. Yilmaz (Ed.), Concepts and Methodologies for Modeling and Simulation: A Tribute to Tuncer Ören (pp. 87-106). Cham: Springer International Publishing.

Tönnies, F. (1963 [1887]). Community and Society (Gemeinschaft und Gesellschaft).

Turner, J. C., Hogg, M. A., Oakes, P. J., Reicher, S. D., & Wetherell, M. (1987). Rediscovering the social group: A self-categorization theory. Oxford, UK: Basil Blackwell.

Turner, J. E. (2007). Human Emotions: A Sociological Theory: Routledge.

Van Vugt, M., & von Rueden, C. R. (2020). From genes to minds to cultures: Evolutionary approaches to leadership. The Leadership Quarterly, 101404.

Veissière, S. P., Constant, A., Ramstead, M. J., Friston, K. J., & Kirmayer, L. J. (2020). Thinking through other minds: A variational approach to cognition and culture. Behavioral and brain sciences, 43.

Waal, F. d. (1996). Good natured: the origins of right and wrong in humans and other animals: Harvard University Press.

Waal, F. d. (2009). The Age of Empathy. New York: Random House.

Wilson, E. O. (1999). Consilience: The unity of knowledge (Vol. 31): Vintage.

Wilson, E. O., & Hölldobler, B. (2005). Eusociality: Origin and consequences. Proceedings of the National Academy of Sciences of the United States of America, 102(38), 13367-13371. doi:10.1073/pnas.0505858102

Zhu, J., & Thagard, P. (2002). Emotion and action. Philosophical psychology, 15(1), 19-36.


Hofstede, G.J, Frantz, C., Hoey, J., Scholz, G. and Schröder, T. (2021) Artificial Sociality Manifesto. Review of Artificial Societies and Social Simulation, 8th Apr 2021. https://rofasss.org/2021/04/08/artsocmanif/


 

A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation

By Edmund Chattoe-Brown

The Motivation

Research that confronts models with data is still sufficiently rare that it is hard to get a representative sense of how it is done, and how convincing the results are, simply by “background reading”. One way to advance good-quality empirical modelling is therefore simply to make it more visible in quantity. With this in mind I have constructed (building on the work of Angus and Hassani-Mahmooei 2015) the first version of a bibliography listing all ABMs attempting empirical validation in JASSS between 1998 and 2019 (along with a few other examples) – 68 items in all. Each entry gives a full reference and describes what comparisons are made and where in the article they occur. In addition, the document contains a provisional bibliography of articles giving advice or technical support on validation, and lists three survey articles that categorise large samples of simulations by their relationship to data (which served as actual or potential sources for the bibliography).

With thanks to Bruce Edmonds, this first version of the bibliography has been made available as Centre for Policy Modelling Discussion Paper CPM-20-216, which can be downloaded from http://cfpm.org/discussionpapers/256.

The Argument

It may seem quite surprising to focus only on validation initially, but there is an argument (Chattoe-Brown 2019) that this is a more fundamental challenge to the quality of a model than calibration. A model that cannot track real data well, even when its parameters are tuned to do so, is clearly a fundamentally inadequate model. Only once some measure of validation has been achieved can we decide how “convincing” it is (comparing independent empirical calibration with parameter tuning, for example). Arguably, without validation, we cannot really be sure whether a model tells us anything about the real world at all (no matter how plausible any narrative about its assumptions may appear). This can be seen as a consequence of the arguments about complexity routinely made by ABM practitioners: the plausibility of the assumptions does not map intuitively onto the plausibility of the outputs.
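To make the distinction concrete, here is a minimal sketch (with invented data and a toy model, not drawn from any entry in the bibliography) of what confronting a model with data involves: running the model, comparing its output against an independent observed series with an explicit error measure, and asking how well it tracks the data even at its best-fitting parameter value.

```python
# A toy diffusion-style model is run and compared with a hypothetical
# observed series using root-mean-square error (RMSE). All numbers are
# invented for illustration only.

def simulate(adoption_rate, steps):
    """Toy logistic diffusion: cumulative adopters over time."""
    adopters = 1.0
    series = []
    for _ in range(steps):
        adopters += adoption_rate * adopters * (1 - adopters / 100.0)
        series.append(adopters)
    return series

def rmse(simulated, observed):
    """Root-mean-square error between two equal-length series."""
    n = len(observed)
    return (sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n) ** 0.5

# Hypothetical empirical data the model is confronted with.
observed = [2.1, 3.0, 4.4, 6.2, 8.9, 12.4, 17.0, 22.8]

# The validation question: even at its best-tuned parameter value,
# how closely does the model track the observed series?
best_error, best_rate = min(
    (rmse(simulate(r / 100.0, len(observed)), observed), r / 100.0)
    for r in range(10, 60)
)
print(best_error, best_rate)
```

The point of the argument above is that if even this tuned fit were poor, no plausibility of the model's assumptions would rescue it; conversely, a good tuned fit is only the start, since independently calibrated parameters are more convincing than tuned ones.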

The Uses

Although these are covered in greater detail in the preface to the bibliography, such a sample has a number of scientific uses which I hope will form the basis for further research.

  • To identify (and justify) good and bad practice, thus promoting good practice.
  • To identify (and then perhaps fill) gaps in the set of technical tools needed to support validation (for example involving particular sorts of data).
  • To test the feasibility and value of general advice offered on validation to date and refine it in the face of practical challenges faced by analysis of real cases.
  • To allow new models to demonstrably outperform the levels of validation achieved by existing models (thus creating the possibility for progressive empirical research in ABM).
  • To support agreement about the effective use of the term validation and to distinguish it from related concepts (like verification) and potentially unhelpful (for example ambiguous or rhetorically loaded) uses.

The Plan

Because of the labour involved and the diversity of fields in which ABMs have now been used over several decades, an effective bibliography of this kind cannot be the work of a single author (or even a team of authors). My plan is thus to solicit (fully credited) contributions and to release new versions of the bibliography regularly – with new co-authors as appropriate. (This publishing model is intended to maintain the quality and suitability for citation of the resulting document, relative to the anarchy that sometimes arises in genuine communal authorship!) All of the following contributions will be gratefully accepted for the next revision (on which I am already working myself in any event):

  • References to new surveys or literature reviews that categorise significant samples of ABM research by their relationship to data.
  • References for proposed new entries to the bibliography in as much detail as possible.
  • Proposals to delete incorrectly categorised entries. (There are a small number of cases where I have found it very difficult to establish exactly what the authors did in the name of validation, partly as a result of confusing or ambiguous terminology.)
  • Proposed revisions to incorrect or “unfair” descriptions of existing entries (ideally by the authors of those pieces).
  • Offers of collaboration on a proposed companion bibliography on calibration. Ultimately this will lead to a (likely very small) sample of calibrated and validated ABMs (which are often surprisingly little cited given their importance to the credibility of the ABM “project” – see, for example, Chattoe-Brown, 2018a, 2018b).

Acknowledgements

This article is part of “Towards Realistic Computational Models of Social Influence Dynamics”, a project funded through the ESRC (ES/S015159/1) by ORA Round 5.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16. <http://jasss.soc.surrey.ac.uk/18/4/16.html> doi:10.18564/jasss.2952

Chattoe-Brown, Edmund (2018a) ‘Query: What is the Earliest Example of a Social Science Simulation (that is Nonetheless Arguably an ABM) and Shows Real and Simulated Data in the Same Figure or Table?’ Review of Artificial Societies and Social Simulation, 11 June. https://rofasss.org/2018/06/11/ecb/

Chattoe-Brown, Edmund (2018b) ‘A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974)’, Review of Artificial Societies and Social Simulation, 1 June. https://rofasss.org/2018/06/01/ecb/

Chattoe-Brown, Edmund (2019) ‘Agent Based Models’, in Atkinson, Paul, Delamont, Sara, Cernat, Alexandru, Sakshaug, Joseph W. and Williams, Richard A. (eds.) SAGE Research Methods Foundations. doi:10.4135/9781526421036836969


Chattoe-Brown, E. (2020) A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation. Review of Artificial Societies and Social Simulation, 12th June 2020. https://rofasss.org/2020/06/12/abm-validation-bib/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)