
Why we are failing at connecting opinion dynamics to the empirical world

By Dino Carpentras

ETH Zürich – Department of Humanities, Social and Political Sciences (GESS)

The big mystery

Opinion dynamics (OD) is a field dedicated to studying the dynamic evolution of opinions, and it is currently facing some extremely cryptic mysteries. Since 2009 there have been multiple calls for OD models to be strongly grounded in empirical data (Castellano et al., 2009; Valori et al., 2012; Flache et al., 2017; Dong et al., 2018); however, the number of articles moving in this direction is still extremely limited. This is especially puzzling when compared with the increase in the number of publications in the field (see Fig. 1). Another surprising issue, which also extends beyond OD, is that validated models are not cited as often as we would expect them to be (Chattoe-Brown, 2022; Keijzer, 2022).

Some may argue that this could be explained by a general lack of interest in the empirical side of opinion dynamics. However, the world seems in desperate need of empirically grounded OD models that could help us shape policies on topics such as vaccination and climate change. Thus, it is very surprising that almost nobody is interested in meeting such a big and pressing demand.

In this short piece, I will share my experience both as a writer and as a reviewer of empirical OD papers, as well as the information I gathered from discussions with other researchers in similar roles. This will help us understand much better what is going on in the world of empirical OD and, more generally, in the empirical parts of agent-based modelling (ABM) related to psychological phenomena.


Figure 1. Publications containing the term “opinion dynamics” in abstract or title (total: 2,527). Obtained from dimensions.ai

Theoretical versus empirical OD

The main issue I have noticed with works in empirical OD is that these papers do not conform to the standard framework of ABM papers. Indeed, in “classical” ABM we usually try to address research questions like:

  1. Can we develop a toy model to show how variables X and Y are linked?
  2. Can we explain some macroscopic phenomenon as the result of agents’ interaction?
  3. What happens to the outputs of a popular model if we add a new variable?

However, empirical papers do not fit into this framework. Indeed, empirical ABM papers ask questions such as:

  1. How accurate are the predictions made by a certain model when compared with data?
  2. How close is the micro-dynamic to the experimental data?
  3. How can we refine previous models to improve their predicting ability?

Unfortunately, many reviewers do not view the latter questions as genuine research inquiries, and end up pushing authors to modify their papers to fit the first set of questions.

For instance, my empirical works often receive the critique that “the research question is not clear”, even though the question was explicitly stated in the main text, the abstract, and even the title (see, for example, “Deriving An Opinion Dynamics Model From Experimental Data”, Carpentras et al., 2022). Similarly, a reviewer once acknowledged that the experiment presented in the paper was an interesting addition, but asked me to demonstrate why it was useful. Notice that, in this case too, the paper was about developing a model from the dynamical behavior observed in an experiment; the experiment was therefore not just “an add-on”, but the core of the paper. I have also reviewed empirical OD papers whose authors were asked, by other reviewers, to showcase how their model informs us about the world in a novel way.

As we will see in a moment, this approach does not just make authors’ lives harder; it also generates a cascade of consequences for the entire field of opinion dynamics. But to better understand our world, let us first move to a fictitious scenario.

A quick tale of natural selection among researchers

Let us now imagine a hypothetical world where people have almost no knowledge of the principles of physics. However, to keep the thought experiment simple, let us also suppose they have already developed the peer-review process. Of course, this fictitious scenario is far from realistic, but it should still help us understand what is going on with empirical OD.

In this world, a scientist named Alice writes a paper suggesting that there is an upward force when objects enter water. She also shows that many objects can float on water, therefore “validating” her model. The community is excited about this new paper which took Alice 6 months to write.

Now, consider another scientist named Bob. Inspired by Alice’s paper, Bob spends 6 months conducting a series of experiments demonstrating that when an object is submerged in water, it experiences an upward force proportional to its submerged volume. This pushes knowledge forward: Bob does not just claim that this force exists, he shows that it has a clear quantitative relationship to the volume of the object.

However, when reviewers read Bob’s work, they are unimpressed. They question the novelty of his research and fail to see the specific research question he is attempting to address. After all, Alice already showed that this force exists, so what is new in this paper? One of the reviewers suggests that Bob should show how his study may impact their understanding of the world.

As a result, Bob spends an additional six months to demonstrate that he could technically design a floating object made out of metal (i.e. a ship). He also describes the advantages for society if such an object was invented. Unfortunately, one of the reviewers is extremely skeptical as metal is known to be extremely heavy and should not float in water, and requests additional proof.

After multiple revisions, Bob’s work is eventually published. However, the publication process takes significantly longer than Alice’s work, and the final version of the paper addresses a variety of points, including empirical validation, the feasibility of constructing a metal boat, and evidence to support this claim. Consequently, the paper becomes densely technical, making it challenging for most people to read and understand.

In the end, Bob is left with a single, hardly readable (and therefore hardly citable) paper, while Alice, meanwhile, has published many other easier-to-read papers with a much bigger impact.

Solving the mystery of empirical opinion dynamics

The previous sections helped us understand the following points: (1) validation and empirical grounding are often not seen as legitimate research goals by many members of the ABM community; (2) this leads to a bigger struggle when trying to publish this kind of research; (3) reviewers often try to push the paper towards the more classic research questions, possibly resulting in a monster paper that tries to address multiple points all at once; and (4) this also lowers readability and thus impact.

To sum up: empirical OD gives you the privilege of working much harder to obtain much less. This, combined with the “natural selection” of “publish or perish”, explains the scarcity of publications in this field, as authors need either to adapt to more standard ABM formulas or to “perish.” I also personally know an ex-researcher who tried to publish empirical OD until he got fed up and left the field.

Some clarifications

Let me make clear that this is a bit of a simplification and that, of course, it is definitely possible to publish empirical work in opinion dynamics without “perishing.” However, choosing this path instead of the traditional ABM approach makes things considerably harder. It is a little like running while carrying extra weight: you may still win the race, but the weight strongly decreases the probability of that happening.

I also want to say that while here I am offering an explanation of the puzzles I presented, I do not claim that this is the only possible explanation. Indeed, I am sure that what I am offering here is only part of the full story.

Finally, I want to clarify that I do not believe anyone in the system has bad intentions. Indeed, I think reviewers are in good faith when suggesting empirically-oriented papers to take a more classical approach. However, even with good intentions, we are creating a lot of useless obstacles for an entire research field.

Trying to solve the problem

To address this issue, I have previously suggested dividing ABM researchers into theoretically and empirically oriented streams (Carpentras, 2021). Dividing research into two streams could help us develop better standards both for toy models and for empirical ABMs.

To give you a practical example, my empirical ABM works usually receive long and detailed comments about the model’s properties and almost no comments on the nature of the experiment or the data analysis. Am I really that good at these last two steps? Or do reviewers in ABM focus very little on the empirical side of empirical ABMs? While the first explanation would be flattering, I am afraid the reality is better depicted by the second option.

With this in mind, together with other members of the community, we have created a special interest group for Experimental ABM (see http://www.essa.eu.org/sig/sig-experimental-abm/). However, for this to be successful, we really need people to recognize the distinction between these two fields. We need to acknowledge that empirically-related research questions are still valid and not push papers towards the more classical approach.

I really believe empirical OD will rise, but how this will happen is still to be decided. Will it come at the cost of many researchers facing bigger struggles, or will we develop a more fertile environment? Or will some researchers create an entirely new niche outside of the ABM community? The choice is up to us!

References

Carpentras, D., Maher, P. J., O’Reilly, C., & Quayle, M. (2022). Deriving An Opinion Dynamics Model From Experimental Data. Journal of Artificial Societies & Social Simulation, 25(4). https://www.jasss.org/25/4/4.html

Carpentras, D. (2021). Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/

Castellano, C., Fortunato, S., & Loreto, V. (2009). Statistical physics of social dynamics. Reviews of modern physics, 81(2), 591. DOI: 10.1103/RevModPhys.81.591

Chattoe-Brown, E. (2022). If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1 Feb 2022. https://rofasss.org/2022/02/01/citing-od-models/

Dong, Y., Zhan, M., Kou, G., Ding, Z., & Liang, H. (2018). A survey on the fusion process in opinion dynamics. Information Fusion, 43, 57-65. DOI: 10.1016/j.inffus.2017.11.009

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4). https://www.jasss.org/20/4/2.html

Keijzer, M. (2022). If you want to be cited, calibrate your agent-based model: a reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation, 9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown

Valori, L., Picciolo, F., Allansdottir, A., & Garlaschelli, D. (2012). Reconciling long-term cultural diversity and short-term collective social behavior. Proceedings of the National Academy of Sciences, 109(4), 1068-1073. DOI: 10.1073/pnas.1109514109


Carpentras, D. (2023) Why we are failing at connecting opinion dynamics to the empirical world. Review of Artificial Societies and Social Simulation, 8 Mar 2023. https://rofasss.org/2023/03/08/od-emprics


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

ESSA@work: Reflections and looking ahead

By Kavin Narasimhan, Silvia Leoni, Katharina Luckner, Dino Carpentras, and Natalie Davis*

essaatworkgroup@gmail.com

*All authors contributed equally – author order determined by a pseudo-random number generator and does not reflect their respective contributions.

Introduction

Since its inception in 2010, ESSA@work has been a mainstay at the annual Social Simulation Conference (SSC). It continues as a forum where beginners in individual- and agent-based modelling (hereon, ABM) present a work-in-progress model, along with specific problems and questions, to a community of practitioners to get feedback, suggestions, and tips for specific aspects of their modelling projects. During the session, participants present their model to an audience and two experts, the latter of whom are chosen for their constructive style of feedback and broad expertise. Participants are not required to answer questions or defend their work, as might be the case in a more traditional setting. Instead, experts enter into a dialogue with each other with the explicit goal of providing constructive feedback towards the progress of the project. After the expert discussion, the audience can also add constructive ideas and questions.

Each ESSA@work session is organised by a team of volunteers, who were often introduced to the format by being participants themselves. In the weeks prior to the SSC, this group drafts all necessary documents to elicit participation, selects participants, contacts experts, and distributes information via mailing lists and social media channels. During the sessions, they serve as chairs and provide outreach via social media. In between conferences and other events with ESSA@work sessions, organisers serve as points of contact for anyone who might want to organise a local ESSA@work session, engage with the management of the broader European Social Simulation Association (ESSA), maintain information on the ESSA@work website (http://www.essa.eu.org/essawork/), and recruit the next generation of volunteers. Organisers typically stay on for several years, so that continuity of knowledge about the processes is secured.

Over the years, a few themes that characterise ESSA@work have crystallised and indicate the importance of the track. In this contribution, we outline these themes: how ESSA@work provides a learning experience to participants and the audience, as well as the organisers; how it fosters interdisciplinarity; and how it builds upon a community of practice. We conclude with our wishes for its future.

Themes

Learning experience

The participants in ESSA@work tend to be early-career researchers, such as master’s students, doctoral candidates, or post-doctoral researchers, but we have also had experienced academics who are new to ABM. For early-career researchers, participating holds additional benefits, as the SSC where they take part in ESSA@work may be their first (on-site) conference. For instance, this was the case for the SSC2022, which was held in a hybrid format after a long period of restrictions and uncertainty due to COVID-19. This deeply affected the careers of young researchers: for some, most of their PhD was spent online with little or no opportunity to participate in events such as annual conferences.

While the learning experience is focused on the participants and their contributions, it extends beyond them to include audience members and organisers as well, so that ESSA@work sessions offer several learning channels. The first learning channel is focused on presentation and social skills. These general skills apply to any career path and are facilitated and supported by the friendly environment and specific format that ESSA@work implements. The practice of presenting unfinished work fosters collaboration, open conversations, and reflection, among peers and more senior academics alike, rather than an environment where participants must ‘defend’ their work from reviewers. Participants must adapt their presentation to a specific format, where they clearly address their doubts and issues. This requires them to put together a clear, concise presentation aligned with the non-standard focus of the track. We have one-page guidance, detailed guidance, and Frequently Asked Questions (FAQs) covering these aspects online (http://www.essa.eu.org/essawork/how-to-participate/ and http://www.essa.eu.org/essawork/faq/). The track also facilitates and encourages the development of social skills by bringing together members of the ESSA community of all experience levels, allowing participants to develop their network of contacts and collaborations based on shared experiences and mentorship.

Secondly, there is the specific feedback from experts, including literature and data recommendations, and references to existing models or other contacts. This adds to or complements the feedback that participants (especially PhD students and postdocs) receive from their supervisors. Participants can find diverse, enriching suggestions with respect to the line of work that they were following, and new perspectives. For cases in which relationships with supervisors and mentors are proving to be difficult, or where supervisors are less familiar with ABM, this can be a crucial source of motivation and support for researchers who find themselves stuck in the process. This can also be a useful source of ideas for audience members with similar questions or challenges.

There is also the organiser’s experience. This usually starts with being involved as a participant in the track. As speakers, participants begin to familiarise themselves with the specific ESSA@work format, as well as with the steps, timing, and process that lead to the conference events. This is also a way to get in touch with former and current team members before officially joining the organisers’ team. After being introduced to ESSA@work as a participant or audience member, new members of the organising team receive training by current and/or former members in a process of knowledge transfer guided by prior experiences. This is put in place with the goal of sharing, improving from the past, and creating a community.

Once researchers have fully joined the team and start helping to prepare the next edition of ESSA@work, the learning opportunities are numerous: building and strengthening a network of contacts across ESSA; practising organisation and chairing (which would otherwise often come only at a later career stage); reviewing submitted manuscripts; improving communication and coordination skills; and practising project and time management. Last but not least, organisers work in a team. This exercise in coordination is a fantastic occasion for learning by doing: adapting and organising heterogeneous skills, schedules, and expertise towards continuous improvement. On the one hand, this mimics co-authorship, and thus offers an opportunity to familiarise oneself with a frequent pattern in academic work; on the other hand, it emphasises and strengthens the feeling of community that characterises ESSA, and that is even stronger in the ESSA@work family.

Interdisciplinarity

Through our time as organisers, we have seen first-hand how diverse the ABM community is. The backgrounds of participants include physics, ecology, computer science, economics, and psychology, just to name a few. This is a double-edged sword: it allows researchers to produce work connecting multiple disciplines, but it can also result in work that is not accessible to the different audiences who might otherwise be interested in it.

For example, people from statistical physics may be very interested in solving the mean-field approximation of a model, while psychologists may be more interested in the qualitative interpretation of that model. Similar problems arise with the use of technical terms. For example, the term “experiment” is used by some to mean “computer simulation” and by others to mean “empirical experiment with real people.” Similarly, Edmund Chattoe-Brown found 5 different uses of the term “validation” (Chattoe-Brown, 2022). Therefore, while ABM can connect multiple different fields, research content can still be very hard for scientists from other fields to understand. This can paradoxically make it more difficult to reach out or communicate results to some communities or fields (Carpentras, 2021).

ESSA@work can have a unique role in tackling this problem, as it allows people who have recently begun working with ABM to get an “inside view” of the ABM community. By presenting their work and research questions to experts in ABM, and receiving feedback from them, participants can have a smoother path to publishing their models, for example by avoiding common mistakes and pitfalls, and can gain more insight into the typical research questions, problems, and jargon of ABM. This allows participants to become more acquainted with the ABM community and its mindsets (as discussed in the next section), allowing for better integration and a long-term connection with the field.

Community building

When speaking of communities of practice, we ask how practitioners of a certain profession or discipline both shape and are shaped by their profession. ESSA@work has a role to play in both, but perhaps more heavily in the latter.

As a whole, ESSA seems to be actively shaping a community of practice in social simulation, more specifically ABM, through shared standards and protocols, regular exchanges, and collaborations across disciplines, all rallying around a specific method. For many members, this is a community separate from the one that they belong to on a day-to-day basis in their departments or organisations. There is active communal support in jointly shaping the rules that should govern the community and the method, as well as a continuous (and friendly) negotiation of who or what is included in and excluded from the community, and where overlaps with other communities might lie (see recent discussion in the SIMSOC mailing list; https://www.jiscmail.ac.uk/cgi-bin/wa-jisc.exe?A2=ind2211&L=SIMSOC&O=D&P=19269). This is how the community shapes the practice – both actively and passively.

The other side of the coin is how the practice shapes the community. As discussed, ESSA@work sessions are often the place and time where new members are introduced to the community, and where their future outlook on the community and the method is significantly shaped. Through useful, tactful, and constructive feedback, new members are introduced to the core texts that at least partially constitute the collective imaginary of the community of practice, to the protocols that govern what constitutes good practice, and – perhaps most importantly – to the tone that the community uses in interacting with one another. ESSA@work therefore not only provides a forum for constructive feedback on work-in-progress, but also an experience which is useful to decide whether someone wants to be part of this community. With ‘alumni’ often coming back as organisers or panellists, and recommending the track to their peers and students, there is a sense that ESSA@work – and the attitude it embodies – is passed on through academic generations. It therefore becomes very much part of what we do, and how we do things, in the agent-based modelling community.

Future themes

As we look to the future for ESSA@work, we have considered both its continuing role in providing a multi-faceted learning experience and central point for the ESSA community, as well as how it can continue to contribute to the future of both ESSA and the field of agent-based modelling more broadly. Specifically, as agent-based modelling has become more accepted as a method for simulating and analysing complex systems, and therefore taken a more empirical turn, ESSA@work can have a unique role in fostering and maintaining the diversity of modelling purposes, which may otherwise become less valued in the rest of the scientific community.

Most participants have questions related to specific stages of their modelling journey. If we think of an ABM journey as roughly divided into the following stages: (1) conceptualisation and design, (2) development, (3) verification and calibration, (4) validation, and (5) simulations, uncertainty analysis and results, then most ESSA@work participants are somewhere between stages 2 and 4. As stage 1 presents several possibilities and needs longer background work (such as literature review, brainstorming, and stakeholder consultation), we intentionally encourage participation in the forum from stage 2 onwards, when the purpose, scope, and objectives of models become clearer. This in turn enables specific modelling questions to be put forth that can be usefully addressed within the time and space of an ESSA@work session. Over the years, we have received submissions from across disciplines, mostly focusing on issues in stages 2 to 4 of the modelling journey. More recently, we have also started receiving submissions with questions about running simulation experiments, calibration and validation with empirical data, interpreting results, and conducting uncertainty analysis. We believe this speaks to ABM becoming more mainstream as a microsimulation approach during this period, enabled also by the availability of, and access to, powerful computing resources.

We find that when questions fall under modelling stages 2, 3 and 5, participants receive more direct answers, as the questions tend to be specific and our practitioner community can address them based on their own work or on wider references. On the other hand, questions about model validation (stage 4) are often too broad and open-ended to attract a useful response in the time available. ‘How can I validate my model?’ – or the essence of this question worded differently – is a popular question in this category. A practical and straightforward answer is to collect or use data on the modelled phenomenon, and use them as test data to check whether the model replicates the patterns in those data. Often, though, participants indicate that such test data do not exist or are difficult to obtain. This then raises questions about the purpose of the model: specifically, whether it is intended as a toy model to generate plausible explanations of an observed phenomenon (historically the realm of ABM), or as a specialised model to allow meaningful forecasts. The latter objective would mean that the model needs good-quality data at every stage of model development, and lacking those data would raise concerns about the suitability of ABM for the proposed research questions in the first place. Without validation, however robust a model may be, it cannot be trusted to generate valid predictions or forecasts.
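That “use test data to check whether the model replicates their patterns” advice can be sketched concretely. The sketch below is purely illustrative: the bounded-confidence update rule, all parameter values, and the synthetic “observed” opinions (standing in for real survey data) are assumptions of the example, not part of any ESSA@work guidance or of a specific published model.

```python
import random

random.seed(0)

def simulate_opinions(n_agents=200, n_steps=5000, mu=0.3, epsilon=0.2):
    """Toy bounded-confidence model: random pairs of agents move their
    opinions closer whenever they differ by less than epsilon."""
    opinions = [random.uniform(0, 1) for _ in range(n_agents)]
    for _ in range(n_steps):
        i, j = random.randrange(n_agents), random.randrange(n_agents)
        if abs(opinions[i] - opinions[j]) < epsilon:
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

def histogram(values, bins=10):
    """Normalised histogram of opinions over [0, 1]."""
    counts = [0] * bins
    for v in values:
        counts[min(int(v * bins), bins - 1)] += 1
    return [c / len(values) for c in counts]

def pattern_distance(simulated, observed, bins=10):
    """Mean absolute difference between the two opinion distributions:
    a crude 'does the model replicate the pattern in the data?' score."""
    h_sim, h_obs = histogram(simulated, bins), histogram(observed, bins)
    return sum(abs(a - b) for a, b in zip(h_sim, h_obs)) / bins

# Hypothetical "empirical" opinions standing in for real survey responses.
observed = [min(max(random.gauss(0.5, 0.1), 0), 1) for _ in range(200)]

simulated = simulate_opinions()
print(f"distance between simulated and observed patterns: "
      f"{pattern_distance(simulated, observed):.3f}")
```

In a real study, `observed` would be empirical opinion measurements held out as test data, and the distance would be judged against a criterion fixed in advance rather than inspected informally.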

On the other hand, where models are intended as ‘toy models’, lack of validation is less of a problem. These models are meant to inspire more informed research questions about observed phenomena, which can subsequently be explored through further targeted real-world experiments, data collection, modelling, or a combination thereof. These models also provide clear entry points to the discipline for someone just beginning to explore complex systems, ABM, or both – many of us can point to reading texts such as Growing Artificial Societies (Epstein and Axtell, 1996) as the first time we truly understood and connected with ABM. But somehow there appear to be fewer takers for developing toy models in recent years. This could be due to perceptions that toy models risk being dismissed as vague (or are at least harder to publish), because practitioners are on tight timelines and thus lack the time or room to experiment with toy models, or because of a need to deliver model-based predictions (forecasts or projections) to satisfy specific project requirements.

We fear that any such bias against toy models might incur a cost in the form of compromised model quality, or discourage new entrants and sponsors for ABM. The former is likely to occur when modellers try to build an overly complicated or specific model based on minimal, poor, or fragmented data, thus possibly relying on too many assumptions that lack sound evidence. The latter could happen when ABM is intended solely as a means to an end rather than as a means to experiment. Reflecting on our journey and thinking ahead, we believe ESSA@work could avoid these outcomes by providing an unbiased, supportive, and well-connected incubatory forum to encourage the development and housing of toy models that have sound methodological and modelling rigour, despite being unsuitable for prediction due to the lack of validation against empirical data. We could then expect a growing bank of model examples and modellers to pave the way for ABM practice to flourish, alongside guidance on data confidentiality, collection, sharing, and management practices that allow toy models to be turned into specialised models in methodical, reusable, and reproducible ways. The prominence of ESSA@work in the ESSA network could allow us to take on such a role in the future if more ABM practitioners (at all stages of their modelling career) volunteer to help run the forum and its activities.

Conclusion

ESSA@work offers a valuable learning experience for participants, audience members, and organisers alike. It has become an integral part of the annual SSC and especially of the ABM community. While this is the result of past efforts and activities, our current work looks to the future and aims for continuity with the past as well as renewal and further development.

We strive to improve and to grow our team and community. For this reason, we always welcome new organisers to contribute to this joint effort to expand both the spectrum and the reach of our activities. To guarantee the continuity of this track and continue to improve it, we believe that diversity in participation could play a major role in innovation and in better identifying the needs of early-career researchers and other participants in the coming years.

The COVID era confronted us, among other things, with new professional and academic challenges. We all moved our work from on-site to remote or hybrid formats, and likewise adapted to new formats to ensure that the ESSA community could continue to meet. While originally born of necessity and adaptation, online and hybrid formats have proved effective in ensuring wide reach and increased accessibility. The SSC2022 and past SocSimFesT editions showed that a plurality of formats and ways to meet, discuss, and progress our research is both possible and successful. These formats are now integrated into our working lives, and they represent an opportunity for ESSA@work to reach new cohorts of international modellers.

ESSA@work is a friendly space for in-depth discussion and learning, and as such, it extends beyond the boundaries of the annual conference or on-site events. We aim to continue offering online or hybrid events in the hope that they will make participation more accessible and provide additional feedback to anyone who needs it. In addition, we encourage the organisation of local ESSA@work sessions. In doing so, the ambition and priority of ESSA@work is to preserve its function as a community-builder and to ensure that participants are supported and able to self-organise according to the challenges and needs arising from their research.

References

Carpentras, D. (2021). Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/

Chattoe-Brown, E. (2022) Today We Have Naming Of Parts: A Possible Way Out Of Some Terminological Problems With ABM. Review of Artificial Societies and Social Simulation, 11th January 2022. https://rofasss.org/2022/01/11/naming-of-parts/

Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: social science from the bottom up. Brookings Institution Press.


Narasimhan, K., Leoni, S., Luckner, K., Carpentras, D. and Davis, N. (2022) ESSA@work: Reflections and looking ahead. Review of Artificial Societies and Social Simulation, 20th February 2022. https://rofasss.org/2022/02/20/essawork


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Socio-Cognitive Systems – a position statement

By Frank Dignum1, Bruce Edmonds2 and Dino Carpentras3

1Department of Computing Science, Faculty of Science and Technology, Umeå University, frank.dignum@umu.se
2Centre for Policy Modelling, Manchester Metropolitan University, bruce@edmonds.name
3Department of Psychology, University of Limerick, dino.carpentras@gmail.com

In this position paper we argue for the creation of a new ‘field’: Socio-Cognitive Systems. The point of doing this is to highlight the importance of a multi-levelled approach to understanding those phenomena where the cognitive and the social are inextricably intertwined – understanding them together.

What goes on ‘in the head’ and what goes on ‘in society’ are complex questions. Each of these deserves serious study on its own – motivating whole fields to answer them. However, it is becoming increasingly clear that these two questions are deeply related. Humans are fundamentally social beings, and it is likely that many features of their cognition have evolved because they enable them to live within groups (Herrmann et al. 2007). Whilst some of these social features can be studied separately (e.g. in a laboratory), others only become fully manifest within society at large. On the other hand, it is also clear that how society ‘happens’ is complicated and subtle, and that these processes are shaped by the nature of our cognition. In other words, what people ‘think’ matters for understanding how society ‘is’ and vice versa. For many reasons, both of these questions are difficult to answer. As a result of these difficulties, many compromises are necessary in order to make progress on them, but each compromise also implies some limitations. The two main types of compromise consist of limiting the analysis to only one of the two (i.e. either cognition or society)[1]. To take but a few examples of this:

  1. Neuro-scientists study what happens between systems of neurones to understand how the brain does things and this is so complex that even relatively small ensembles of neurones are at the limits of scientific understanding.
  2. Psychologists see what can be understood of cognition from the outside, usually in the laboratory so that some of the many dimensions can be controlled and isolated. However, what can be reproduced in a laboratory is a limited part of behaviour that might be displayed in a natural social context.
  3. Economists limit themselves to the study of the (largely monetary) exchange of services/things that could occur under assumptions of individual rationality, which is a model of thinking not based upon empirical data at the individual level. Indeed it is known to contradict a lot of the data and may only be a good approximation for average behaviour under very special circumstances.
  4. Ethnomethodologists will enter a social context and describe in detail the social and individual experience there, but not generalise beyond that and not delve into the cognition of those they observe.
  5. Other social scientists will take a broader view, look at a variety of social evidence, and theorise about aspects of that part of society. They (almost always) do not take individual cognition into account in these theories and do not seek to integrate the social and the cognitive levels.

Each of these, in different ways, either separates the internal mechanisms of thought from the wider mechanisms of society or limits its focus to a very specific topic. This is understandable; what each is studying is enough to keep them occupied for many lifetimes. However, it means that each field has developed its own terms, issues, approaches and techniques, which makes relating results between fields difficult (as Kuhn, 1962, pointed out).


Figure 1: Schematic representation of the relationship between the individual and society. Individuals’ cognition is shaped by society; at the same time, society is shaped by individuals’ beliefs and behaviour.

This separation of the cognitive and the social may get in the way of understanding many things that we observe. Some phenomena seem to involve a combination of these aspects in a fundamental way – the individual (and its cognition) being part of society as well as society being part of the individual. Some examples of this are as follows (but please note that this is far from an exhaustive list).

  • Norms. A social norm is a constraint or obligation upon action imposed by society (or perceived as such). One may well be mistaken about a norm (e.g. whether it is OK to casually talk to others at a bus stop); thus it is also a belief – often not told to one explicitly but something one needs to infer from observation. However, for a social norm to hold it also needs to be an observable convention. Decisions to violate social norms require that the norm is an explicit (referable) object in the cognitive model. But the violation also has social consequences. If people react negatively to violations, the norm can be reinforced; but if violations are ignored, the norm might disappear. How new norms come about, or how old ones fade away, is a complex set of interlocking cognitive and social processes. Thus social norms are a phenomenon that essentially involves both the social and the cognitive (Conte et al. 2013).
  • Joint construction of social reality. Many of the constraints on our behaviour come from our perception of social reality. However, we also create this social reality and constantly update it. For example, we can invent a new procedure to select a person as head of department, or exit a treaty, and thus have different ways of behaving after this change. However, these changes are not unconstrained in themselves. Sometimes the time is “ripe for change”, while at other times resistance is too great for any change to take place (even though a majority of the people involved would like it to). Thus what is socially real for us depends on what people individually believe is real, but this in turn depends in complex ways on what other people believe and on their status. Probably even more important: the “strength” of a social structure depends on the use people make of it. E.g. a head of department becomes important if all decisions in the department are deferred to the head, even though this might not be required by university regulations or law.
  • Identity. Our (social) identity determines the way other people perceive us (e.g. as a sports person, a nerd, a family man) and therefore creates expectations about our behaviour. We can create and cultivate our identities ourselves, but at the same time, once we have a social identity, we try to live up to it. Thus it will partially determine our goals and reactions, and even our feeling of self-esteem when we live up to our identity or fail to do so. As individuals we (at least sometimes) have a choice as to our desired identity, but in practice this can only be realised with the consent of society. As a runner, I might feel the need to run at least three times a week in order for other people to recognise me as a runner. At the same time, a person known as a runner might be excused from a meeting if training for an important event, thus reinforcing the importance of the “runner” identity.
  • Social practices. The concept already indicates that social practices are about the way people habitually interact and, through this interaction, shape social structures. Practices like shaking hands when greeting do not have to be efficient, but they are extremely important socially. For example, different groups, countries and cultures have different greeting practices, and performing according to the practice shows whether you are part of the in-group or the out-group. However, practices can also change with circumstances and people, as happened, for example, to the practice of shaking hands during the COVID-19 pandemic. Thus they are flexible and adapt to the context, serving as mechanisms that efficiently fit interactions into groups, connecting individual and group behaviour.

As a result, this division between the cognitive and the social gets in the way not only of theoretical studies, but also of practical applications such as policy making. For example, interventions aimed at encouraging vaccination (such as compulsory vaccination) may reinforce the (social) identity of the vaccine hesitant. However, this risk and its possible consequences for society cannot be properly understood without a clear grasp of the dynamic evolution of social identity.

Computational models and systems provide a way of trying to understand the cognitive and the social together. For computational modellers, there is no particular reason to confine themselves to only the cognitive or only the social, because agent-based systems can include both within a single framework. In addition, a computational system is a dynamic model that can represent the interactions of the individuals connecting the cognitive models and the social models. The fact that computational models have a natural way to represent actions as an integral and defining part of the socio-cognitive system is of prime importance: since actions are an integral part of the model, it is well suited to modelling the dynamics of socio-cognitive systems and tracking changes at both the social and the cognitive level. Therefore, within such systems we can study how cognitive processes may act to produce social phenomena whilst, at the same time, social realities are shaping the cognitive processes. Carley and Newell (1994) discuss what is necessary at the agent level for sociality; Hofstede et al. (2021) discuss how to understand sociality using computational models (including theories of individual action) – we want to understand both together. Thus, we can model the social embeddedness that Granovetter (1985) talked about – going beyond over- or under-socialised representations of human behaviour. It is not that computational models are innately suitable for modelling either the cognitive or the social, but that they can be appropriately structured (e.g. as sets of interacting parts bridging micro-, meso- and macro-levels) and can include arbitrary levels of complexity. Many models that represent the social have entities that stand for the cognitive but do not explicitly represent much of that detail; similarly, much cognitive modelling implies the social in terms of the stimuli and responses of an individual that would be directed at other social entities, but these other entities are not explicitly represented or are simplified away.

Socio-Cognitive Systems (SCS) are: those models and systems where both cognitive and social complexity are represented with a meaningful level of processual detail.

A good example of an application where this proved to be of the greatest importance was in simulations of the COVID-19 crisis. The spread of the coronavirus at the macro level could be described by an epidemiological model, but the actual spread depended crucially on the human behaviour that resulted from individuals’ cognitive models of the situation. Dignum (2021) showed how the socio-cognitive systems approach was fundamental to obtaining better insights into the effectiveness of a range of COVID-19 restrictions.

Formality here is important. Computational systems are formal in the sense that they can be unambiguously passed around (i.e. unlike language, they are not re-interpreted differently by each individual) and operate according to their own precisely specified and explicit rules. This means that the same system can be examined and experimented on by a wider community of researchers. Sometimes, even when researchers from different fields find it difficult to talk to one another, they can fruitfully cooperate via a computational model (e.g. Lafuerza et al. 2016). Other kinds of formal system (e.g. logic, maths) are geared towards models that describe an entire system from a bird’s-eye view. Although there are some exceptions, such as fibred logics (Gabbay 1996), these are too abstract to be of much use for modelling practical situations. The lack of modularity has been addressed in context logics (Ghidini & Giunchiglia 2001), but the contexts used in that setting are not suitable for generating a more general societal model. The result is that most typical mathematical models use a number of agents that is either one, two or infinite (Miller & Page 2007), while important social phenomena happen in “medium-sized” populations. What all these formalisms miss is a natural way of specifying the dynamics of the system being modelled while also modularly describing individuals and the society resulting from their interactions. Thus, although much of what is represented in socio-cognitive systems is not computational, the lingua franca for talking about them is.

The ‘double complexity’ of combining the cognitive and the social in the same system will bring its own methodological challenges. Such complexity will mean that many socio-cognitive systems will be, themselves, hard to understand or analyse. In the covid-19 simulations, described in (Dignum 2021), a large part of the work consisted of analysing, combining and representing the results in ways that were understandable. As an example, for one scenario 79 pages of graphs were produced showing different relations between potentially relevant variables. New tools and approaches will need to be developed to deal with this. We only have some hints of these, but it seems likely that secondary stages of analysis – understanding the models – will be necessary, resulting in a staged approach to abstraction (Lafuerza et al. 2016). In other words, we will need to model the socio-cognitive systems, maybe in terms of further (but simpler) socio-cognitive systems, but also maybe with a variety of other tools. We do not have a view on this further analysis, but this could include: machine learning, mathematics, logic, network analysis, statistics, and even qualitative approaches such as discourse analysis.

An interesting input for the methodology of designing and analysing socio-cognitive systems is anthropology, and specifically ethnographic methods. Again, for the COVID-19 simulations, the first layer of the simulation was constructed based on “normal day life” patterns. Different types of person were distinguished, each with their own pattern of living. These patterns interlock and form a fabric of social interactions that, overall, should satisfy most of the needs of the agents. Thus we calibrated the simulation based on the stories of types of people and their behaviours. Note that doing the same based only on available behavioural data would not account for the underlying needs and motives of that behaviour, and would not be a good basis for simulating changes. The stories that we used looked very similar to the type of reports ethnographers produce about certain communities, so further investigating this connection seems worthwhile.

For representing the output of complex socio-cognitive systems we can also use the analogue of stories. Basically, different stories show the underlying (assumed) causal relations between observed phenomena. E.g. an increase in people having lunch with friends can be explained by the fact that a curfew prevents people from having dinner with their friends, while they still have a need to socialise; thus the alternative of going for lunch is chosen more often. One can see that the explanatory story uses both social and cognitive elements to describe the results. Although in the COVID-19 simulations we created a number of these stories, they were all created by hand after (sometimes weeks of) careful analysis of the results. Thus, for this kind of approach to be viable, new tools are required.

Although human society is the archetypal socio-cognitive system, it is not the only one. Both social animals and some artificial systems also come under this category. These may be very different from the human, and in the case of artificial systems completely different. Thus, Socio-Cognitive Systems is not limited to the discussion of observable phenomena, but can include constructed or evolved computational systems, and artificial societies. Examination of these (either theoretically or experimentally) opens up the possibility of finding either contrasts or commonalities between such systems – beyond what happens to exist in the natural world. However, we expect that ideas and theories that were conceived with human socio-cognitive systems in mind might often be an accessible starting point for understanding these other possibilities.

In a way, Socio-Cognitive Systems bring together two different threads in the work of Herbert Simon. Firstly, as in Simon (1948) it seeks to take seriously the complexity of human social behaviour without reducing this to overly simplistic theories of individual behaviour. Secondly, it adopts the approach of explicitly modelling the cognitive in computational models (Newell & Simon 1972). Simon did not bring these together in his lifetime, perhaps due to the limitations and difficulty of deploying the computational tools to do so. Instead, he tried to develop alternative mathematical models of aspects of thought (Simon 1957). However, those models were limited by being mathematical rather than computational.

To conclude, a field of Socio-Cognitive Systems would consider the cognitive and the social in an integrated fashion – understanding them together. We suggest that computational representation or implementation might be necessary to provide concrete reference between the various disciplines needed to understand them. We want to encourage research that considers the cognitive and the social in a truly integrated fashion. If labelling a new field achieves this, it will have served its purpose. However, there is also the possibility that completely new classes of theory and complexity are out there to be discovered – phenomena that remain hidden if the cognitive and the social are not taken together – a new world of socio-cognitive systems.

Notes

[1] Some economic models claim to bridge between individual behaviour and macro outcomes; however, this is traditionally notional. Many economists admit that their primary cognitive models (varieties of economic rationality) are not valid for individuals but describe what people do on average – i.e. they are macro-level models. In other economic models, whole populations are formalised using a single representative agent. Recently, some agent-based economic models have emerged, but these are often constrained to agree with traditional models.

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, grant number ES/S015159/1.

References

Carley, K., & Newell, A. (1994). The nature of the social agent. Journal of mathematical sociology, 19(4): 221-262. DOI: 10.1080/0022250X.1994.9990145

Conte R., Andrighetto G. and Campennì M. (eds) (2013) Minding Norms – Mechanisms and dynamics of social order in agent societies. Oxford University Press, Oxford.

Dignum, F. (ed.) (2021) Social Simulation for a Crisis; Results and Lessons from Simulating the COVID-19 Crisis. Springer.

Herrmann E., Call J, Hernández-Lloreda MV, Hare B, Tomasello M (2007) Humans have evolved specialized skills of social cognition: The cultural intelligence hypothesis. Science 317(5843): 1360-1366. DOI: 10.1126/science.1146282

Hofstede, G.J, Frantz, C., Hoey, J., Scholz, G. and Schröder, T. (2021) Artificial Sociality Manifesto. Review of Artificial Societies and Social Simulation, 8th Apr 2021. https://rofasss.org/2021/04/08/artsocmanif/

Gabbay, D. M. (1996). Fibred Semantics and the Weaving of Logics Part 1: Modal and Intuitionistic Logics. The Journal of Symbolic Logic, 61(4), 1057–1120.

Ghidini, C., & Giunchiglia, F. (2001). Local models semantics, or contextual reasoning= locality+ compatibility. Artificial intelligence, 127(2), 221-259. DOI: 10.1016/S0004-3702(01)00064-9

Granovetter, M. (1985) Economic action and social structure: The problem of embeddedness. American Journal of Sociology 91(3): 481-510. DOI: 10.1086/228311

Kuhn, T,S, (1962) The structure of scientific revolutions. University of Chicago Press, Chicago

Lafuerza L.F., Dyson L., Edmonds B., McKane A.J. (2016) Staged Models for Interdisciplinary Research. PLoS ONE 11(6): e0157261, DOI: 10.1371/journal.pone.0157261

Miller, J. H., & Page, S. E. (2007). Complex Adaptive Systems. Princeton University Press.

Newell A, Simon H.A. (1972) Human problem solving. Prentice Hall, Englewood Cliffs, NJ

Simon, H.A. (1948) Administrative behaviour: A study of the decision making processes in administrative organisation. Macmillan, New York

Simon, H.A. (1957) Models of Man: Social and rational. John Wiley, New York


Dignum, F., Edmonds, B. and Carpentras, D. (2022) Socio-Cognitive Systems – A Position Statement. Review of Artificial Societies and Social Simulation, 2nd Apr 2022. https://rofasss.org/2022/04/02/scs


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Challenges and opportunities in expanding ABM to other fields: the example of psychology

By Dino Carpentras

Centre for Social Issues Research, Department of Psychology, University of Limerick

The loop of isolation

One of the problems discussed during the last public meeting of the European Social Simulation Association (ESSA), at the Social Simulation Conference 2021, was that of reaching communities outside the ABM one. This is a serious problem, as we risk getting trapped in a vicious cycle of isolation.

The cycle can be explained as follows. (a) Many fields are not familiar with ABM methods and standards. This means that (b) both reviewers and editors will struggle to understand and evaluate the quality of an ABM paper, which in general translates into a higher rejection rate and much longer times to publication. As a result, (c) fewer ABM researchers will be willing to send their work to other communities and, in general, fewer ABM works will be published in the journals of other communities. Fewer articles using ABM means that (d) fewer people will be aware of ABM, understand its methods and standards, or even consider it an established research method.

Another point to consider is that, as time passes, each field evolves and develops new standards and procedures. Unfortunately, if two fields are not sufficiently aware of each other, the new procedures will appear even more alien to members of the other community, reinforcing the previously discussed cycle. A schematic of this is offered in Figure 1.


Figure 1: Vicious cycle of isolation

The challenge

Of course, a “brute force” solution would be to keep sending articles to journals in different fields until they get published. However, this would be extremely expensive in terms of time, and most researchers would probably not be happy to follow this path.

A more elaborate solution could be framed as “progressively getting to know each other.” This would consist of modellers getting more familiar with the target community and vice versa. In this way, people from ABM would be able to better understand the jargon, the assumptions, and even what is interesting enough to be the main result of a paper in a specific discipline. This would make it easier for members of our community to communicate research results using the language and methods familiar to the other field.

At the same time, researchers in the other field could slowly integrate ABM into their work, showing the potential of ABM and making it appear less alien to their peers. All of this would reverse the previously discussed vicious cycle, producing a virtuous one that would bring the two fields closer together.

Unfortunately, such a goal cannot be achieved overnight, as it will probably require several events, collaborations and publications, and several years (or even decades!). However, as a result, our field would become familiar to and recognised by multiple other fields, enormously increasing the scientific impact of our research as well as the number of people working in ABM.

In this short communication, I would like to, firstly, highlight the importance and the challenges of reaching out to other fields and, secondly, show a practical example with the field of psychology. I have chosen this field for no particular reason other than that I am currently working in a department of psychology, which has given me the opportunity to interact with several researchers in this field.

In the next sections, I will summarize the main points of several informal discussions with these researchers. Specifically, I will try to highlight what they reported to be promising or interesting in ABM and also what felt alien or problematic to them.

Let me also stress that this is not meant to be a complete overview, nor should it be taken as a summary of “what every psychologist thinks about ABM.” Instead, it is simply a summary of the discussions I have had so far. What I hope is that it will be at least a little useful to our community for building better connections with other fields.

The elephant in the room

Before moving to the list of comments on ABM I have collected, I want to address one point which came up almost every time I discussed ABM with psychologists – indeed, almost every time I have discussed ABM with anyone outside our field. This is the problem of experiments and validation.

I know there was recently a massive discussion on the SimSoc mailing list about opinion dynamics and validation, and this discussion will probably continue. Therefore, I am not going to discuss whether all models should be tested, whether a validated model should be considered superior, etc. Indeed, I do not want to discuss at all whether validation should be considered important within our community. Instead, I want to discuss how important it is when interacting with other communities.

Indeed, many other fields give empirical data and validation a key role, having even developed different methods to test the quality of a hypothesis or a model against empirical data (e.g. p-value calculation; Krishnaiah, 1980). Also, I have repeatedly experienced disappointment, or even mockery, when I explained to non-ABM people that the model I was presenting was not empirically validated (e.g. the Deffuant model of opinion dynamics). In one case, a person even laughed at me for this.
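For readers who have never encountered it, the Deffuant bounded-confidence model mentioned above can be sketched in a few lines. This is an illustrative sketch only: the parameter values (`mu`, `epsilon`) and population size are arbitrary choices of mine, made purely for demonstration.

```python
import random

def deffuant_step(opinions, mu=0.5, epsilon=0.2):
    """One interaction of the Deffuant bounded-confidence model:
    two random agents compromise if their opinions differ by
    less than the confidence threshold epsilon."""
    i, j = random.sample(range(len(opinions)), 2)
    diff = opinions[j] - opinions[i]
    if abs(diff) < epsilon:
        opinions[i] += mu * diff  # both agents move towards each other
        opinions[j] -= mu * diff

random.seed(1)
opinions = [random.random() for _ in range(100)]  # opinions in [0, 1]
for _ in range(20000):
    deffuant_step(opinions)

# With a small epsilon the population typically fragments into a few
# internally homogeneous opinion clusters.
print(sorted({round(o, 2) for o in opinions}))
```

Note that nothing in this sketch refers to any empirical quantity: the opinion scale, the threshold and the convergence rate are all unmeasured abstractions, which is precisely what prompts the reactions described above.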

Unfortunately, many people who are not familiar with ABM end up considering it almost a “nice exercise,” or even “not a real science.” This could be extremely dangerous for our field. Indeed, if many researchers start thinking of ABM as a lesser science, communication with other fields – as well as obtaining funding for research – will become much harder for our community.

Also, please, let me stress again: do not “confuse the message with the messenger.” Here, I am not claiming that an unvalidated model should be considered inferior, or anything like that. What I am saying is that many people outside our field think in this fashion, and this may eventually turn into a much bigger problem for us.

I will return to this point in the conclusion; however, I will not claim that we should get rid of “pure models,” or that every model should be validated. What I will claim is that we should promote more empirical work, as it will allow us to interact more easily with other fields.

Further points

In this section, I have collected (in no particular order) different comments and suggestions I have received from psychologists on the topic of ABM. All of them had at least some experience of working side by side with a researcher developing ABMs.

Also in this case, please remember that these are not my claims, but feedback I received. Furthermore, they should not be read as “what ABM is,” but rather as “how ABM may look to people in another field.”

  1. Some psychologists showed interest in the possibility of having feedback loops in ABMs, which allow for relationships that go beyond simple cause and effect. Indeed, several models in psychology are structured in the form “parameter X influences parameter Y” (and Y cannot influence X, which would form a loop). While this approach is very common in psychology, many researchers are not satisfied with it, making ABMs a very good opportunity for developing more realistic models.
  2. Some psychologists said that at first glance ABM looks very interesting. However, the extensive use of equations can confuse or even scare people who are not used to them.
  3. Some praised Schelling’s model (Schelling 1971), especially the approach of developing a hypothesis and then using an ABM to falsify it.
  4. Some criticised that it is often not clear what an ABM should be used for, or what such a model “is telling us.”
  5. Similarly, the use of models with a large number of parameters was criticised, as “[these models] can eventually produce any result.”
  6. Another source of confusion that appeared multiple times was that it is often not clear whether a model should be analysed and interpreted at the individual level (e.g. agents which start from state A often end up in state B) or at the more global level (e.g. distribution A results in distribution B).
  7. Another major complaint was that psychological measures are nominal or ordinal, while many models assume interval-like variables.
  8. Another criticism was that agents often all behave in the same way, without individual differences being included.
  9. In psychology there is a lot of attention on the sample size and whether it is large enough to produce significant results. Some stressed that in many ABM works it is often not clear whether the sample size (i.e. the number of agents) is sufficient to support the analysis.
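Several of these points (3, 6 and 9 in particular) can be made concrete with a small simulation. Below is a minimal, illustrative sketch of a Schelling-style segregation model – not Schelling’s original 1971 specification; all function names, parameter names and values are my own choices for demonstration.

```python
import random

def schelling(width=20, height=20, empty_ratio=0.2, threshold=0.3,
              steps=10000, seed=0):
    """Minimal Schelling-style segregation model. Agents of two types
    move to a random empty cell when the fraction of like-typed
    neighbours falls below `threshold`. Returns the average fraction
    of like-typed neighbours (a simple segregation index)."""
    rng = random.Random(seed)
    cells, empties = {}, []
    for x in range(width):
        for y in range(height):
            if rng.random() < empty_ratio:
                empties.append((x, y))
            else:
                cells[(x, y)] = rng.randint(0, 1)  # one of two types

    def like_fraction(pos):
        x, y = pos
        nb = [cells[(x + dx, y + dy)]
              for dx in (-1, 0, 1) for dy in (-1, 0, 1)
              if (dx, dy) != (0, 0) and (x + dx, y + dy) in cells]
        if not nb:
            return 1.0  # no neighbours: trivially content
        return sum(1 for t in nb if t == cells[pos]) / len(nb)

    for _ in range(steps):
        pos = rng.choice(list(cells))
        if like_fraction(pos) < threshold:  # unhappy agent moves
            new = empties.pop(rng.randrange(len(empties)))
            empties.append(pos)
            cells[new] = cells.pop(pos)

    # Global reading of an individual-level rule (cf. point 6).
    return sum(like_fraction(p) for p in cells) / len(cells)

# Even with a mild preference, the population typically ends up far
# more segregated than any individual agent "wants".
print(schelling(seed=1))
```

Comparing, say, `schelling(width=10, height=10)` with `schelling(width=40, height=40)` is a crude but direct way to check whether a result is stable in the number of agents, which is essentially the sample-size concern raised in point 9.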

Conclusion

I would like to stress again that these comments are not supposed to represent the thoughts of every psychologist, nor am I suggesting that all the ABM literature should adapt to them or that they are always correct. For example, in my personal opinion, points 5 and 8 push in opposite directions: one aims at simpler models and the other pushes towards complexity. Similarly, I do not think we should decrease the number of equations in our works to meet point 2. However, I do think we should consider these points of feedback when planning interactions with the psychology community.

As mentioned before, a crucial role in interacting with other communities is played by experiments and validation. Point 6, and especially points 7 and 9, suggest how members of this community often look for one-to-one relationships between agents in simulations and people in the real world.


Figure 2: (left) Empirical ABM acting as a bridge between theoretical ABM and other research fields. (Right) as the relationship between ABM and the other field matures, people become familiar with ABM standards and a direct link to theoretical ABM can be established.

As someone suggested during the already mentioned discussion on the SimSoc mailing list, this could be solved by introducing a new figure (or, equivalently, a new research field) dedicated to empirical work in ABM. Following this solution, theoretical modellers could keep developing models without having to worry about validation, similar to the work carried out by theoretical researchers in physics. At the same time, we would also have a stream of research dedicated to “experimental ABM.” People working on this topic would further explore the connection between models and the empirical world through experiments and validation processes. Of course, the two should not be mutually exclusive, as a researcher (or a piece of research) may fall into both categories. However, having this distinction may help in giving more space to empirical work.

I believe that experimental ABM could be crucial for developing good interactions between ABM and other communities. Indeed, this type of research could be accepted much more easily by other communities, producing better interactions with ABM. In particular, mentioning experiments and validation could strongly decrease the initial mistrust that many people show when discussing ABM. Furthermore, as ABM develops stronger connections with another field, and our methods and standards become more familiar, we would probably also observe more people from the other community starting to look into more theoretical ABM approaches and what-if scenarios (see Fig 2).

References

Krishnaiah, P. R. (Ed.). (1980). A Handbook of Statistics (Vol. 1). Motilal Banarsidass Publishers.

Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143-186.

Edmonds, B. and Moss, S. (2005) From KISS to KIDS – an ‘anti-simplistic’ modelling approach. In P. Davidsson et al. (Eds.): Multi Agent Based Simulation 2004. Springer, Lecture Notes in Artificial Intelligence, 3415:130–144.


Carpentras, D. (2021) Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Where Now For Experiments In Agent-Based Modelling? Report of a Round Table at SSC2021, held on 22 September 2021


By Dino Carpentras1, Edmund Chattoe-Brown2*, Bruce Edmonds3, Cesar García-Diaz4, Christian Kammler5, Anna Pagani6 and Nanda Wijermans7

*Corresponding author, 1Centre for Social Issues Research, University of Limerick, 2School of Media, Communication and Sociology, University of Leicester, 3Centre for Policy Modelling, Manchester Metropolitan University, 4Department of Business Administration, Pontificia Universidad Javeriana, 5Department of Computing Science, Umeå University, 6Laboratory on Human-Environment Relations in Urban Systems (HERUS), École Polytechnique Fédérale de Lausanne (EPFL), 7Stockholm Resilience Centre, Stockholm University.

Introduction

This round table was convened to advance and improve the use of experimental methods in Agent-Based Modelling, in the hope that both existing and potential users of the method would be able to identify steps towards this aim[i]. The session began with a presentation by Bruce Edmonds (http://cfpm.org/slides/experiments%20and%20ABM.pptx) whose main argument was that the traditional idea of experimentation (controlling extensively for the environment and manipulating variables) was too simplistic to add much to the understanding of the sort of complex systems modelled by ABMs and that we should therefore aim to enhance experiments (for example using richer experimental settings, richer measures of those settings and richer data – like discussions between participants as well as their behaviour). What follows is a summary of the main ideas discussed organised into themed sections.

What Experiments Are

Defining the field of experiments proved to be challenging on two counts. The first was that there are a number of labels for potentially relevant approaches (experiments themselves – for example, Boero et al. 2010, gaming – for example, Tykhonov et al. 2008, serious games – for example Taillandier et al. 2019, companion/participatory modelling – for example, Ramanath and Gilbert 2004 and web based gaming – for example, Basole et al. 2013) whose actual content overlap is unclear. Is it the case that a gaming approach is generally more in line with the argument proposed by Edmonds? How can we systematically distinguish the experimental content of a serious game approach from a gaming approach? This seems to be a problem in immature fields where the labels are invented first (often on the basis of a few rather divergent instances) and the methodology has to grow into them. It would be ludicrous if we couldn’t be sure whether a piece of research was survey based or interview based (and this would radically devalue the associated labels if it were so.)

The second challenge, which is also more general in Agent-Based Modelling, is that the same labels are used differently by different researchers. It is not productive to argue about which uses are correct but it is important that the concepts behind the different uses are clear so a common scheme of labelling might ultimately be agreed. So, for example, experiment can be used (and different round table participants had different perspectives on the uses they expected) to mean laboratory experiments (simplified settings with human subjects – again see, for example, Boero et al. 2010), experiments with ABMs (formal experimentation with a model that doesn’t necessarily have any empirical content – for example, Doran 1998) and natural experiments (choice of cases in the real world to, for example, test a theory – see Dinesen 2013).

One approach that may help with this diversity is to start developing possible dimensions of experimentation. One might be degree of control (all the way from very stripped down behavioural laboratory experiments to natural situations where the only control is to select the cases). Another might be data diversity: From pure analysis of ABMs (which need not involve data at all), through laboratory experiments that record only behaviour to ethnographic collection and analysis of diverse data in rich experiments (like companion modelling exercises.) But it is important for progress that the field develops robust concepts that allow meaningful distinctions and does not get distracted into pointless arguments about labelling. Furthermore, we must consider the possible scientific implications of experimentation carried out at different points in the dimension space: For example, what are the relative strengths and limitations of experiments that are more or less controlled or more or less data diverse? Is there a “sweet spot” where the benefit of experiments is greatest to Agent-Based Modelling? If so, what is it and why?

The Philosophy of Experiment

The second challenge is the different beliefs (often associated with different disciplines) about the philosophical underpinnings of experiment such as what we might mean by a cause. In an economic experiment, for example, the objective may be to confirm a universal theory of decision making through displayed behaviour only. (It is decisions described by this theory which are presumed to cause the pattern of observed behaviour.) This will probably not allow the researcher to discover that their basic theory is wrong (people are adaptive not rational after all) or not universal (agents have diverse strategies), or that some respondents simply didn’t understand the experiment (deviations caused by these phenomena may be labelled noise relative to the theory being tested but in fact they are not.)

By contrast, qualitative sociologists believe that subjective accounts (including accounts of participation in the experiment itself) can be made reliable and that they may offer direct accounts of certain kinds of cause: If I say I did something for a certain reason then it is at least possible that I actually did (and that the reason I did it is therefore its cause). It is no more likely that agreement will be reached on these matters in the context of experiments than it has been elsewhere. But Agent-Based Modelling should keep its reputation for open mindedness by seeing what happens when qualitative data is also collected and not just rejecting that approach out of hand as something that is “not done”. There is no need for Agent-Based Modelling blindly to follow the methodology of any one existing discipline in which experiments are conducted (and these disciplines often disagree vigorously on issues like payment and deception with no evidence on either side which should also make us cautious about their self-evident correctness.)

Finally, there is a further complication in understanding experiments using analogies with the physical sciences. In understanding the evolution of a river system, for example, one can control/intervene, one can base theories on testable micro mechanisms (like percolation) and one can observe. But there is no equivalent to asking the river what it intends (whether we can do this effectively in social science or not).[ii] It is not totally clear how different kinds of data collection like these might relate to each other in the social sciences, for example, data from subjective accounts, behavioural experiments (which may show different things from what respondents claim) and, for example, brain scans (which side step the social altogether.) This relationship between different kinds of data currently seems incompletely explored and conceptualised. (There is a tendency just to look at easy cases like surveys versus interviews.)

The Challenge of Experiments as Practical Research

This is an important area where the actual and potential users of experiments participating in the round table diverged. Potential users wanted clear guidance on the resources, skills and practices involved in doing experimental work (and see similar issues in the behavioural strategy literature, for example, Reypens and Levine 2018). At the most basic level, when does a researcher need to do an experiment (rather than a survey, interviews or observation), what are the resource requirements in terms of time, facilities and money (laboratory experiments are unusual in often needing specific funding to pay respondents rather than substituting the researcher working for free), what design decisions need to be made (paying subjects, online or offline, can subjects be deceived?), how should the data be analysed (how should an ABM be validated against experimental data?) and so on.[iii] (There are also pros and cons to specific bits of potentially supporting technology like Amazon Mechanical Turk, Qualtrics and Prolific, which have not yet been documented and systematically compared for the novice with a background in Agent-Based Modelling.) There is much discussion about these matters in the traditional literatures of social sciences that do experiments (see, for example, Kagel and Roth 1995, Levine and Parkinson 1994 and Zelditch 2014) but this has not been summarised and tuned specifically for the needs of Agent-Based Modellers (or published where they are likely to see it).

However, it should not be forgotten that not all research efforts need this integration within the same project, so thinking about the problems that really need it is critical. Nonetheless, triangulation is indeed necessary within research programmes. For instance, in subfields such as strategic management and organisational design, it is uncommon to see an ABM integrated with an experiment as part of the same project (though there are exceptions, such as Vuculescu 2017). Instead, ABMs are typically used to explore “what if” scenarios, build process theories and illuminate potential empirical studies. In this approach, knowledge is accumulated instead through the triangulation of different methodologies in different projects (see Burton and Obel 2018). Additionally, modelling and experimental efforts are usually led by different specialists – for example, there is a Theoretical Organisational Models Society whose focus is the development of standards for theoretical organisation science.

In a relatively new and small area, all we often have is some examples of good practice (or more contentiously bad practice) of which not everyone is even aware. A preliminary step is thus to see to what extent people know of good practice and are able to agree that it is good (and perhaps why it is good).

Finally, there was a slightly separate discussion about the perspectives of experimental participants themselves. It may be that a general problem with unreal activity is that you know it is unreal (which may lead to problems with ecological validity – Bornstein 1999.) On the other hand, building on the enrichment argument put forward by Edmonds (above), there is at least anecdotal observational evidence that richer and more realistic settings may cause people to get “caught up” and perhaps participate more as they would in reality. Nonetheless, there are practical steps we can take to learn more about these phenomena by augmenting experimental designs. For example, we might conduct interviews (or even group discussions) before and after experiments. This could make the initial biases of participants explicit and allow them to self-evaluate retrospectively the extent to which they got engaged (or perhaps even over-engaged) during the game. The first such questionnaire could be available before attending the experiment, whilst another could be administered right after the game (and perhaps even a third a week later). In addition to practical design solutions, there are also relevant existing literatures that experimental researchers should probably draw on in this area, for example that on systemic design and the associated concept of worldviews. It is fair to say that we do not yet fully understand the issues here, but they clearly matter to the value of experimental data for Agent-Based Modelling.[iv]

Design of Experiments

Something that came across strongly in the round table discussion as argued by existing users of experimental methods was the desirability of either designing experiments directly based on a specific ABM structure (rather than trying to use a stripped down – purely behavioural – experiment) or mixing real and simulated participants in richer experimental settings. In line with the enrichment argument put forward by Edmonds, nobody seemed to be using stripped down experiments to specify, calibrate or validate ABM elements piecemeal. In the examples provided by round table participants, experiments corresponding closely to the ABM (and mixing real and simulated participants) seemed particularly valuable in tackling subjects that existing theory had not yet really nailed down or where it was clear that very little of the data needed for a particular ABM was available. But there was no sense that there is a clearly defined set of research designs with associated purposes on which the potential user can draw. (The possible role of experiments in supporting policy was also mentioned but no conclusions were drawn.)

Extracting Rich Data from Experiments

Traditional experiments are time consuming to do, so they are frequently optimised to obtain the maximum power and discrimination between factors of interest. In such situations they will often limit their data collection to what is strictly necessary for testing their hypotheses. Furthermore, it seems to be a hangover from behaviourist psychology that one does not use self-reporting on the grounds that it might be biased or simply involve false reconstruction (rationalisation). From the point of view of building or assessing ABMs this approach involves a wasted opportunity. Due to the flexible nature of ABMs, there is a need for as many empirical constraints upon modelling as possible. These constraints can come from theory, evidence or abstract principles (such as simplicity); they should not hinder the design of an ABM but rather act as a check on its outcomes. Game-like situations can provide rich data about what is happening, simultaneously capturing decisions on action, the position and state of players, global game outcomes/scores and what players say to each other (see, for example, Janssen et al. 2010, Lindahl et al. 2021). Often, in social science one might have a survey with one set of participants, interviews with others and longitudinal data from yet others – even if these, in fact, involve the same people, the data will usually not indicate this through consistent IDs. When collecting data from a game (and especially from online games) there is a possibility for collecting linked data with consistent IDs – including interviews – that allows for a whole new level of ABM development and checking.
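The linked-data idea above can be sketched in a few lines: if every record from a game session (actions, chat messages, post-game interviews) carries a consistent participant ID, the separate strands can be joined into per-person profiles for ABM calibration or validation. The streams and field names below are hypothetical illustrations, not data from any actual experiment.

```python
from collections import defaultdict

# Hypothetical raw streams from one online game session.
actions = [
    {"pid": "p01", "tick": 3, "action": "harvest", "amount": 4},
    {"pid": "p02", "tick": 3, "action": "wait"},
    {"pid": "p01", "tick": 4, "action": "harvest", "amount": 5},
]
chat = [
    {"pid": "p01", "tick": 4, "text": "let's slow down before the stock collapses"},
]
interviews = [
    {"pid": "p01", "phase": "post", "text": "I harvested less after we talked"},
]

def link_by_participant(*streams):
    """Group heterogeneous records under a consistent participant ID."""
    linked = defaultdict(list)
    for stream in streams:
        for record in stream:
            linked[record["pid"]].append(record)
    return dict(linked)

profile = link_by_participant(actions, chat, interviews)
print(len(profile["p01"]))  # all of p01's records, across the three data types
```

The design point is the shared `pid` key: it is what lets a modeller check whether, say, a participant's stated reasons (interview) match their observed behaviour (actions) agent by agent, which unlinked survey and interview samples cannot support.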

Standards and Institutional Bootstrapping

This is also a wider problem in newer methods like Agent-Based Modelling. How can we foster agreement about what we are doing (which has to build on clear concepts) and institutionalise those agreements into standards for a field (particularly when there is academic competition and pressure to publish).[v] If certain journals will not publish experiments (or experiments done in certain ways) what can we do about that? JASSS was started because it was so hard to publish ABMs. It has certainly made that easier but is there a cost through less publication in other journals? See, for example, Squazzoni and Casnici (2013). Would it have been better for the rigour and wider acceptance of Agent-Based Modelling if we had met the standards of other fields rather than setting our own? This strategy, harder in the short term, may also have promoted communication and collaboration better in the long term. If reviewing is arbitrary (reviewers do not seem to have a common view of what makes an experiment legitimate) then can that situation be improved (and in particular how do we best go about that with limited resources?) To some extent, normal individualised academic work may achieve progress here (researchers make proposals, dispute and refine them and their resulting quality ensures at least some individualised adoption by other researchers) but there is often an observable gap in performance: Even though most modellers will endorse the value of data for modelling in principle most models are still non-empirical in practice (Angus and Hassani-Mahmooei 2015, Figure 9). The jury is still out on the best way to improve reviewer consistency, use the power of peer review to impose better standards (and thus resolve a collective action problem under academic competition[vi]) and so on but recognising and trying to address these issues is clearly important to the health of experimental methods in Agent-Based Modelling. 
Since running experiments in association with ABMs is already challenging, adding the problem of arbitrary reviewer standards makes the publication process even harder. This discourages scientists from following this path and therefore retards this kind of research generally. Again, here, useful resources (like the Psychological Science Accelerator, which facilitates greater experimental rigour by various means) were suggested in discussion as raw material for our own improvements to experiments in Agent-Based Modelling.

Another issue with newer methods such as Agent-Based Modelling is the path to legitimation before the wider scientific community. The need to integrate ABMs with experiments does not necessarily imply that the legitimation of the former is achieved by the latter. Experimental economists, for instance, may still argue that (in the investigation of behaviour and its implications for policy issues), experiments and data analysis alone suffice. They may rightly ask: What is the additional usefulness of an ABM? If an ABM always needs to be justified by an experiment and then validated by a statistical model of its output, then the method might not be essential at all. Orthodox economists skip the Agent-Based Modelling part: They build behavioural experiments, gather (rich) data, run econometric models and make predictions, without the need (at least as they see it) to build any computational representation. Of course, the usefulness of models lies in the premise that they may tell us something that experiments alone cannot (see Knudsen et al. 2019). But progress needs to be made in understanding (and perhaps reconciling) these divergent positions. The social simulation community therefore needs to be clearer about exactly what ABMs can contribute beyond the limitations of an experiment, especially when addressing audiences of non-modellers (Ballard et al. 2021). Not only is a model valuable when rigorously validated against data, but also whenever it makes sense of the data in ways that traditional methods cannot.

Where Now?

Researchers usually have more enthusiasm than they have time. In order to make things happen in an academic context it is not enough to have good ideas, people need to sign up and run with them. There are many things that stand a reasonable chance of improving the profile and practice of experiments in Agent-Based Modelling (regular sessions at SSC, systematic reviews, practical guidelines and evaluated case studies, discussion groups, books or journal special issues, training and funding applications that build networks and teams) but to a great extent, what happens will be decided by those who make it happen. The organisers of this round table (Nanda Wijermans and Edmund Chattoe-Brown) are very keen to support and coordinate further activity and this summary of discussions is the first step to promote that. We hope to hear from you.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16, <http://jasss.soc.surrey.ac.uk/18/4/16.html>. doi:10.18564/jasss.2952

Ballard, Timothy, Palada, Hector, Griffin, Mark and Neal, Andrew (2021) ‘An Integrated Approach to Testing Dynamic, Multilevel Theory: Using Computational Models to Connect Theory, Model, and Data’, Organizational Research Methods, 24(2), April, pp. 251-284. doi: 10.1177/1094428119881209

Basole, Rahul C., Bodner, Douglas A. and Rouse, William B. (2013) ‘Healthcare Management Through Organizational Simulation’, Decision Support Systems, 55(2), May, pp. 552-563. doi:10.1016/j.dss.2012.10.012

Boero, Riccardo, Bravo, Giangiacomo, Castellani, Marco and Squazzoni, Flaminio (2010) ‘Why Bother with What Others Tell You? An Experimental Data-Driven Agent-Based Model’, Journal of Artificial Societies and Social Simulation, 13(3), June, article 6, <https://www.jasss.org/13/3/6.html>. doi:10.18564/jasss.1620

Bornstein, Brian H. (1999) ‘The Ecological Validity of Jury Simulations: Is the Jury Still Out?’ Law and Human Behavior, 23(1), February, pp. 75-91. doi:10.1023/A:1022326807441

Burton, Richard M. and Obel, Børge (2018) ‘The Science of Organizational Design: Fit Between Structure and Coordination’, Journal of Organization Design, 7(1), December, article 5. doi:10.1186/s41469-018-0029-2

Derbyshire, James (2020) ‘Answers to Questions on Uncertainty in Geography: Old Lessons and New Scenario Tools’, Environment and Planning A: Economy and Space, 52(4), June, pp. 710-727. doi:10.1177/0308518X19877885

Dinesen, Peter Thisted (2013) ‘Where You Come From or Where You Live? Examining the Cultural and Institutional Explanation of Generalized Trust Using Migration as a Natural Experiment’, European Sociological Review, 29(1), February, pp. 114-128. doi:10.1093/esr/jcr044

Doran, Jim (1998) ‘Simulating Collective Misbelief’, Journal of Artificial Societies and Social Simulation, 1(1), January, article 3, <https://www.jasss.org/1/1/3.html>.

Janssen, Marco A., Holahan, Robert, Lee, Allen and Ostrom, Elinor (2010) ‘Lab Experiments for the Study of Social-Ecological Systems’, Science, 328(5978), 30 April, pp. 613-617. doi:10.1126/science.1183532

Kagel, John H. and Roth, Alvin E. (eds.) (1995) The Handbook of Experimental Economics (Princeton, NJ: Princeton University Press).

Knudsen, Thorbjørn, Levinthal, Daniel A. and Puranam, Phanish (2019) ‘Editorial: A Model is a Model’, Strategy Science, 4(1), March, pp. 1-3. doi:10.1287/stsc.2019.0077

Levine, Gustav and Parkinson, Stanley (1994) Experimental Methods in Psychology (Hillsdale, NJ: Lawrence Erlbaum Associates).

Lindahl, Therese, Janssen, Marco A. and Schill, Caroline (2021) ‘Controlled Behavioural Experiments’, in Biggs, Reinette, de Vos, Alta, Preiser, Rika, Clements, Hayley, Maciejewski, Kristine and Schlüter, Maja (eds.) The Routledge Handbook of Research Methods for Social-Ecological Systems (London: Routledge), pp. 295-306. doi:10.4324/9781003021339-25

Ramanath, Ana Maria and Gilbert, Nigel (2004) ‘The Design of Participatory Agent-Based Social Simulations’, Journal of Artificial Societies and Social Simulation, 7(4), October, article 1, <https://www.jasss.org/7/4/1.html>.

Reypens, Charlotte and Levine, Sheen S. (2018) ‘Behavior in Behavioral Strategy: Capturing, Measuring, Analyzing’, in Behavioral Strategy in Perspective, Advances in Strategic Management Volume 39 (Bingley: Emerald Publishing), pp. 221-246. doi:10.1108/S0742-332220180000039016

Squazzoni, Flaminio and Casnici, Niccolò (2013) ‘Is Social Simulation a Social Science Outstation? A Bibliometric Analysis of the Impact of JASSS’, Journal of Artificial Societies and Social Simulation, 16(1), January, article 10, <http://jasss.soc.surrey.ac.uk/16/1/10.html>. doi:10.18564/jasss.2192

Taillandier, Patrick, Grignard, Arnaud, Marilleau, Nicolas, Philippon, Damien, Huynh, Quang-Nghi, Gaudou, Benoit and Drogoul, Alexis (2019) ‘Participatory Modeling and Simulation with the GAMA Platform’, Journal of Artificial Societies and Social Simulation, 22(2), March, article 3, <https://www.jasss.org/22/2/3.html>. doi:10.18564/jasss.3964

Tykhonov, Dmytro, Jonker, Catholijn, Meijer, Sebastiaan and Verwaart, Tim (2008) ‘Agent-Based Simulation of the Trust and Tracing Game for Supply Chains and Networks’, Journal of Artificial Societies and Social Simulation, 11(3), June, article 1, <https://www.jasss.org/11/3/1.html>.

Vuculescu, Oana (2017) ‘Searching Far Away from the Lamp-Post: An Agent-Based Model’, Strategic Organization, 15(2), May, pp. 242-263. doi:10.1177/1476127016669869

Zelditch, Morris Junior (2007) ‘Laboratory Experiments in Sociology’, in Webster, Murray Junior and Sell, Jane (eds.) Laboratory Experiments in the Social Sciences (New York, NY: Elsevier), pp. 183-197.


Notes

[i] This event was organised (and the resulting article was written) as part of “Towards Realistic Computational Models of Social Influence Dynamics” a project funded through ESRC (ES/S015159/1) by ORA Round 5 and involving Bruce Edmonds (PI) and Edmund Chattoe-Brown (CoI). More about SSC2021 (Social Simulation Conference 2021) can be found at https://ssc2021.uek.krakow.pl

[ii] This issue is actually very challenging for social science more generally. When considering interventions in social systems, knowing and acting might be so deeply intertwined (Derbyshire 2020) that interventions may modify the same behaviours that an experiment is aiming to understand.

[iii] In addition, experiments often require institutional ethics approval (but so do interviews, gaming activities and others sort of empirical research of course), something with which non-empirical Agent-Based Modellers may have little experience.

[iv] Chattoe-Brown had interesting personal experience of this. He took part in a simple team gaming exercise about running a computer firm. The team quickly worked out that the game assumed an infinite return to advertising (so you could have a computer magazine consisting entirely of adverts) independent of the actual quality of the product. They thus simultaneously performed very well in the game from the perspective of an external observer but remained deeply sceptical that this was a good lesson to impart about running an actual firm. But since the coordinators never asked the team members for their subjective view, they may have assumed that the simulation was also a success in its didactic mission.

[v] We should also not assume it is best to set our own standards from scratch. It may be valuable to attempt integration with existing approaches, like qualitative validity (https://conjointly.com/kb/qualitative-validity/) particularly when these are already attempting to be multidisciplinary and/or to bridge the gap between, for example, qualitative and quantitative data.

[vi] Although journals also face such a collective action problem at a different level. If they are too exacting relative to their status and existing practice, researchers will simply publish elsewhere.


Dino Carpentras, Edmund Chattoe-Brown, Bruce Edmonds, Cesar García-Diaz, Christian Kammler, Anna Pagani and Nanda Wijermans (2021) Where Now For Experiments In Agent-Based Modelling? Report of a Round Table as Part of SSC2021. Review of Artificial Societies and Social Simulation, 2nd November 2021. https://rofasss.org/2021/11/02/round-table-ssc2021-experiments/