
An Institute for Crisis Modelling (ICM) – Towards a resilience center for sustained crisis modeling capability

By Fabian Lorig1*, Bart de Bruin2, Melania Borit3, Frank Dignum4, Bruce Edmonds5, Sinéad M. Madden6, Mario Paolucci7, Nicolas Payette8, Loïs Vanhée4

*Corresponding author
1 Internet of Things and People Research Center, Malmö University, Sweden
2 Delft University of Technology, Netherlands
3 CRAFT Lab, Arctic University of Norway, Tromsø, Norway
4 Department of Computing Science, Umeå University, Sweden
5 Centre for Policy Modelling, Manchester Metropolitan University Business School, UK
6 School of Engineering, University of Limerick, Ireland
7 Laboratory of Agent Based Social Simulation, ISTC/CNR, Italy
8 Complex Human-Environmental Systems Simulation Laboratory, University of Oxford, UK

The Need for an ICM

Most crises and disasters occur suddenly and hit society while it is unprepared. This makes it particularly challenging to react quickly to their occurrence, to adapt to the resulting new situation, to minimize the societal impact, and to recover from the disturbance. A recent example was the Covid-19 crisis, which revealed weak points in our crisis preparedness. Governments tried to put restrictions in place to limit the spread of the virus while ensuring the well-being of the population and, at the same time, preserving economic stability. It quickly became clear that interventions which worked well in some countries did not have the intended effect in others. The reason is that the success of interventions depends to a great extent on individual human behavior.

Agent-based Social Simulations (ABSS) explicitly model the behavior of individuals and their interactions in a population and allow us to better understand social phenomena. Thus, ABSS are perfectly suited for investigating how our society might be affected by different crisis scenarios and how policies might affect the societal impact and consequences of these disturbances. During the Covid-19 crisis in particular, a great number of ABSS were developed to inform policy making around the globe (e.g., Dignum et al. 2020, Blakely et al. 2021, Lorig et al. 2021). However, weaknesses in creating useful and explainable simulations in a short time also became apparent, and there is still a lack of consolidated effort to be better prepared for the next crisis (Squazzoni et al. 2020). In particular, current ABSS development approaches are geared towards simulating one particular situation and validating the simulation using data from that situation. To be prepared for a crisis, one instead needs to simulate many different scenarios for which data might not yet be available. Such simulations also typically need a more interactive interface where stakeholders can experiment with different settings, policies, etc.

For ABSS to become an established, reliable, and well-esteemed method for supporting crisis management, we need to organize and consolidate the available competences and resources. It is not sufficient to react once a crisis occurs; instead, we need to proactively make sure that we are prepared for future disturbances and disasters. For this purpose, we also need to systematically address more fundamental problems of ABSS as a method of inquiry and particularly consider the specific requirements for the use of ABSS to support policy making, which may differ from its use in academic research. We therefore see the need for establishing an Institute for Crisis Modelling (ICM), a resilience center to ensure sustained crisis modeling capability.

The vision of starting an Institute for Crisis Modelling was the result of the discussions and working groups at the Lorentz Center workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations” that took place in Leiden, Netherlands from 27 February to 3 March 2023.

Vision of the ICM

“To have tools suitable to support policy actors in situations of great
uncertainty and large consequences that depend on human behavior.”

The ICM consists of a taskforce for quickly and efficiently supporting policy actors (e.g., decision makers, policy makers, policy analysts) in situations of great uncertainty and large consequences that depend on human behavior. For this purpose, the taskforce draws on a larger (informal) network of associates who contribute their knowledge, skills, models, tools, and networks. The group of associates is composed of a core group of multidisciplinary modeling experts (ranging from social scientists and formal modelers to programmers) as well as partners who can contribute to specific focus areas (such as epidemiology, water management, etc.). The vision of the ICM is to consolidate and institutionalize the use of ABSS as a method for crisis management. Although ABSS competences may be physically distributed over a variety of universities, research centers, and other institutions, the ICM serves as a virtual location that coordinates research developments and provides a basic level of funding and a communication channel for ABSS for crisis management. This not only provides policy actors with a single point of contact, making it easier for them to identify whom to reach when simulation expertise is needed and to develop long-term trust relationships. It also enables us to jointly and systematically evolve ABSS into a valuable and established tool for crisis response. The center combines all necessary resources, competences, and tools to quickly develop new models, to adapt existing models, and to efficiently react to new situations.

To achieve this goal and to evolve and establish ABSS as a valuable tool for policy makers in crisis situations, research is needed in different areas. This includes the collection, development, critical analysis, and review of fundamental principles, theories, methods, and tools used in agent-based modeling. It also includes research on data handling (analysis, sharing, access, protection, visualization), data repositories, ontologies, user interfaces, methodologies, documentation, and ethical principles. Some of these points are concisely described by Dignum (2021, Ch. 14 and 15).

The ICM shall offer a wide portfolio of models, methods, techniques, design patterns, and components required to quickly and effectively support the work of policy actors in crisis situations by providing them with adequate simulation models. To be able to provide specialized support, the institute will coordinate the human effort (e.g., the modelers) and maintain specific focus areas for which expertise and models are available, for instance pandemics, natural disasters, or financial crises. For each of these focus areas, the center will develop different use cases, which ensures and facilitates rapid responses due to the availability of models, knowledge, and networks.

Objectives of the ICM

To achieve this vision, there is a series of objectives that a resilience center for sustained crisis modeling capability needs to address:

1) Coordinate and promote research

Providing quick and appropriate support for policy actors in crisis situations requires not only profound knowledge of existing models, methods, tools, and theories but also the systematic development of new approaches and methodologies. This will advance and evolve ABSS so that we are better prepared for future crises and will serve as a beacon for organizing ABSS research oriented towards practical applications.

2) Enable trusted connections with policy actors

Establishing sustainable collaborations and interactions with decision-makers, policy analysts, and other relevant stakeholders is a great challenge in ABSS. Getting in contact with the right actors, “speaking the same language”, and having realistic expectations are only some of the common problems that need to be addressed. Thus, the ICM should not only connect to policy actors in times of crisis, but have continuous interactions, provide sample simulations, develop use cases, and train policy actors wherever possible.

3) Enable sustainability of the institute itself

Classic funding schemes are unfit for responding to crises, which demand fast responses with always-available resources; moreover, the continuous build-up of knowledge, skills, networks, and technology requires long-term commitment. Sustainable funding is needed to enable such continuity, and the ICM provides a demarcated, unifying frame for it.

4) Actively maintain the network of associates

Maintaining a network of experts is challenging because it requires different competences and experiences. PhD candidates, for instance, might have great practical experience in using different simulation frameworks; however, after graduating, some might leave academia and others might move to positions where they do not have the opportunity to use their simulation expertise. Thus, new experts need to be recruited continuously to form a resilient and balanced network.

5) Inform policy actors

Even the most advanced and profound models cannot do any good in crisis situations if there is no demand from policy actors. Many modelers perceive a certain hesitation among policy actors regarding the use of ABSS, which might be due to their being unfamiliar with the potential benefits and use cases of ABSS, lacking trust in the method itself, or simply being unaware that ABSS exists. Hence, the center needs to educate policy makers, raise awareness, and improve trust in ABSS.

6) Train the next generation of experts

Quickly developing suitable ABSS models in critical situations requires a variety of expertise. In addition to objective 4, the acquisition of associates, it is also of great importance to educate and train the next generation of experts. ABSS research is still a niche field and is not taught as an integral part of the methodological spectrum of most disciplines. The center shall promote and strengthen ABSS education to ensure the training of the next generation of experts.

7) Engage the general public

Finally, the success of ABSS does not only depend on the trust of policy actors but also on how it is perceived by the general public. When interventions were developed and recommendations given during the Covid-19 crisis, public trust in the method was a crucial success factor. Developing realistic models also requires the active participation of the general public.

Next steps

For ABSS to become a valuable and established tool for supporting policy actors in crisis situations, we are convinced that our efforts need to be institutionalized. This allows us to consolidate available competences, models, and tools as well as to coordinate research endeavors and the development of new approaches required to ensure a sustained crisis modeling capability.

To further pursue this vision, a Special Interest Group (SIG) on Building ResilienCe with Social Simulations (BRICSS) was established at the European Social Simulation Association (ESSA). Moreover, Special Tracks will be organized at the 2023 Social Simulation Conference (SSC) to bring together interested experts.

However, for this vision to become reality, the next steps towards establishing an Institute for Crisis Modelling consist of bringing together ambitious and competent associates as well as identifying core funding opportunities for the center. Readers who feel motivated to contribute in any way to this topic are encouraged to contact Frank Dignum, Umeå University, Sweden, or any of the authors of this article.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations”, held in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise. The final report of the workshop as well as more information can be found on the webpage of the Lorentz Center: https://www.lorentzcenter.nl/agent-based-simulations-for-societal-resilience-in-crisis-situations.html

References

Blakely, T., Thompson, J., Bablani, L., Andersen, P., Ouakrim, D. A., Carvalho, N., Abraham, P., Boujaoude, M.A., Katar, A., Akpan, E., Wilson, N. & Stevenson, M. (2021). Determining the optimal COVID-19 policy response using agent-based modelling linked to health and cost modelling: Case study for Victoria, Australia. Medrxiv, 2021-01.

Dignum, F., Dignum, V., Davidsson, P., Ghorbani, A., van der Hurk, M., Jensen, M., Kammler C., Lorig, F., Ludescher, L.G., Melchior, A., Mellema, R., Pastrav, C., Vanhee, L. & Verhagen, H. (2020). Analysing the combined health, social and economic impacts of the coronavirus pandemic using agent-based social simulation. Minds and Machines, 30, 177-194. doi: 10.1007/s11023-020-09527-6

Dignum, F. (ed.). (2021) Social Simulation for a Crisis; Results and Lessons from Simulating the COVID-19 Crisis. Springer.

Lorig, F., Johansson, E. & Davidsson, P. (2021) ‘Agent-Based Social Simulation of the Covid-19 Pandemic: A Systematic Review’ Journal of Artificial Societies and Social Simulation 24(3), 5. http://jasss.soc.surrey.ac.uk/24/3/5.html. doi: 10.18564/jasss.4601

Squazzoni, F. et al. (2020) ‘Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action‘ Journal of Artificial Societies and Social Simulation 23(2), 10. http://jasss.soc.surrey.ac.uk/23/2/10.html. doi: 10.18564/jasss.4298


Lorig, F., de Bruin, B., Borit, M., Dignum, F., Edmonds, B., Madden, S.M., Paolucci, M., Payette, N. and Vanhée, L. (2023) An Institute for Crisis Modelling (ICM) –
Towards a resilience center for sustained crisis modeling capability. Review of Artificial Societies and Social Simulation, 22 May 2023. https://rofasss.org/2023/05/22/icm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

A Tale of Three Pandemic Models: Lessons Learned for Engagement with Policy Makers Before, During, and After a Crisis

By Emil Johansson1,2, Vittorio Nespeca3, Mikhail Sirenko4, Mijke van den Hurk5, Jason Thompson6, Kavin Narasimhan7, Michael Belfrage1, 2, Francesca Giardini8, and Alexander Melchior5,9

  1. Department of Computer Science and Media Technology, Malmö University, Sweden
  2. Internet of Things and People Research Center, Malmö University, Sweden
  3. Computational Science Lab, University of Amsterdam, The Netherlands
  4. Faculty of Technology, Policy and Management, Delft University of Technology, The Netherlands
  5. Department of Information and Computing Sciences, Utrecht University, The Netherlands
  6. Transport, Health and Urban Design Research Lab, The University of Melbourne, Australia
  7. Centre for Research in Social Simulation, University of Surrey, United Kingdom
  8. Department of Sociology & Agricola School for Sustainable Development, University of Groningen, The Netherlands
  9. Ministry of Economic Affairs and Climate Policy and Ministry of Agriculture, Nature and Food Quality, The Netherlands

Motivation

Pervasive and interconnected crises such as the COVID-19 pandemic, global energy shortages, geopolitical conflicts, and climate change have shown how a stronger collaboration between science, policy, and crisis management is essential to foster societal resilience. As modellers and computational social scientists, we want to help. Several cases of model-based policy support have shown the potential of using modelling and simulation as tools to prepare for, learn from (Adam and Gaudou, 2017), and respond to crises (Badham et al., 2021). At the same time, engaging with policy-makers to establish effective crisis-management solutions remains a challenge for many modellers due to a lack of forums that promote and help develop sustained science-policy collaborations. Equally challenging is finding ways to provide effective solutions under changing circumstances, as is often the case with crises.

Despite existing guidance regarding how modellers can engage with policy makers (e.g., Vennix, 1996; Voinov and Bousquet, 2010), this guidance often does not account for the urgency that characterizes crisis response. In this article, we tell the stories of three different models developed during the COVID-19 pandemic in different parts of the world. For each of the models, we draw key lessons for modellers regarding how to engage with policy makers before, during, and after crises. Our goal is to communicate the findings from our experiences to modellers and computational scientists who, like us, want to engage with policy makers to provide model-based policy and crisis management support. We use selected examples from Kurt Vonnegut’s 2004 lecture on ‘shapes of stories’ alongside an analogy with Lewis Carroll’s Alice in Wonderland as inspiration for these stories.

Boy Meets Girl (Too Late)

A Social Simulation On the Corona Crisis’ (ASSOCC) tale

The perfect love story between social modellers and stakeholders would go like this: they meet (pre-crisis), build a trusting foundation, and then, when a crisis hits, they work together as a team, maybe have some fights, but overcome the crisis together and live happily ever after.

In the case of the ASSOCC project, we as modellers met our stakeholders too late (i.e., while we were already in the middle of the COVID-19 crisis). The stakeholders we aimed for had already met their ‘boy’: epidemiological modellers. For them, we were just one of the many scientists showing new models and insisting that ours should be looked at. Although our model showed, for example, that using a track-and-trace app would not help reduce the rate of new COVID-19 infections (as turned out to be the case), our psychological and social approach was novel to them. It was not the right time to explain the importance of integrating these kinds of concepts into epidemiological models, so, without this basic trust, they were reluctant to work with us.

The moral of our story is that investing in a (working) relationship during non-crisis times would not only get the stakeholders on board during a crisis; such an approach would be helpful for us modellers too. For example, we integrated both social and epidemiological models within the ASSOCC project. We wanted to validate our model against the one used by Oxford University. However, our model choices were not compatible with this type of validation. Had we been working with these types of researchers before a pandemic, we could have built a proper foundation for validation.

So, our biggest lesson learned is the importance of having a good relationship with stakeholders before a crisis hits, when there is time to get into social models and show the advantages of using them. If you invest in building and consolidating this relationship over time, we promise a happily ever after for every social modeller and stakeholder (until the next crisis hits).

Modeller’s Adventures in Wonderland

A Health Emergency Response in Interconnected Systems (HERoS) tale

If you are a modeler, you are likely to be curious and imaginative, like Alice from “Alice’s Adventures in Wonderland.” You like to think about how the world works and make models that can capture these sometimes weird mechanisms. We are the same. When Covid came, we made a model of a city to understand how its citizens would behave.

But there is more. When Alice first saw the White Rabbit, she found him fascinating. A rabbit with a pocket watch who is running late, what could be more interesting? Similarly, our attention was caught by policymakers in waistcoats, always busy but able to bring change. Surely they must need the model we made! But why are they running away? Our model is so helpful, just let us explain! Or maybe our model is not good enough?

Yes, we fell deep down a rabbit hole. Our first encounter with a policymaker didn’t result in a happy “yes, let’s try your model out.” However, we kept knocking on doors. How many did Alice try? But alright, there is one. It seems too tiny. We met with a group of policymakers but had only 10 minutes to explain our large-scale, data-driven, agent-based-like model. How can we possibly do that? Drink from a “Drink me” bottle, which will make our presentation smaller! Well, that didn’t help. We rushed over all the model complexities too fast and got applause, but that was it. Ok, the next one will last an hour. Quickly! Eat an “Eat me” cake that will make the presentation longer! Oh, too many unnecessary details this time. To the next venue!

We are in the garden. The garden of crisis response. And it is full of policymakers: Caterpillar, Duchess, Cheshire Cat and Mad Hatter. They talk in riddles: “We need to consult with the Head of Paperclip Optimization and Supply Management,” want different things: “Can you tell us what the impact of a curfew would be? Hmm, by yesterday?” and shift responsibility from one to another. Thankfully there is no Queen of Hearts to order our beheading.

If the world of policymaking is complex, then the world of policymaking during a crisis is a wonderland. And we all live in it. We must outgrow our obsession with building better models, learn about its fuzzy inhabitants, and find a way to work together instead. Constant interaction and a better understanding of each other’s needs must be at the centre of modeller-policymaker relations.

“But I don’t want to go among mad people,” Alice remarked.

“Oh, you can’t help that,” said the Cat: “we’re all mad here. I’m mad. You’re mad.”

“How do you know I’m mad?” said Alice.

“You must be,” said the Cat, “or you wouldn’t have come here.”

Lewis Carroll, Alice in Wonderland

Cinderella – A city’s tale

Everyone thought Melbourne was just too ugly to go to the ball…..until a little magic happened.

Once upon a time, the bustling Antipodean city of Melbourne, Victoria found itself in the midst of a dark and disturbing period. While all other territories on the great continent of Australia had rid themselves of the dreaded COVID-19 virus, Melbourne itself was besieged. Illness and death coursed through the land.

Shunned, the city faced scorn and derision. It was dirty. Its sisters called it a “plague state” and the people felt great shame and sadness as their family, friends and colleagues continued to fall to the virus. All they wanted was a chance to rejoin their families and countryfolk at the ball. What could they do?

Though downtrodden, the kind-hearted and resilient residents of Melbourne were determined to regain control over their lives. They longed for a glimmer of sunshine on these long, gloomy days – a touch of magic, perhaps? They turned to their embattled leaders for answers. Where was their Fairy Godmother now?

In this moment of despair, a group of scientists offered a gift in the form of a powerful agent-based model that was running on a supercomputer. This model, the scientists said, might just hold the key to transforming the fate of the city from vanquished to victor (Blakely et al., 2020). What was this strange new science? This magical black box?

Other states and scientists scoffed. “You can never achieve this!”, they said. “What evidence do you have? These models are not to be trusted. Such a feat as to eliminate COVID-19 at this scale has never been done in the history of the world!” But what of it? Why should history matter? Quietly and determinedly, the citizens of Melbourne persisted. They doggedly followed the plan.

Deep down, even the scientists knew it was risky. People’s patience and enchantment with the mystical model would not last forever. Still, this was Melbourne’s only chance. They needed to eliminate the virus so it would no longer have a grip on their lives. The people bravely stuck to the plan and each day – even when schools and businesses began to re-open – the COVID numbers dwindled from what seemed like impossible heights. Each day they edged down…

and down…

and down…until…

Finally! As the clock struck midnight, the people of Melbourne achieved the impossible: they had defeated COVID-19 by eliminating transmission. With the help of the computer model’s magic, illness and death from the virus stopped. Melbourne had triumphed, emerging stronger and more united than ever before (Thompson et al., 2022a).

From that day forth, Melbourne was internationally celebrated as a shining example of resilience, determination, and the transformative power of hope. Tens of thousands of lives were saved – and after enduring great personal and community sacrifice, its people could once again dance at the ball.

But what was the fate of the scientists and the model? Did such an experience change the way agent-based social simulation was used in public health? Not really. The scientists went back to their normal jobs and the magic of the model remained just that – magic. Its influence vanished like fairy dust on a warm Summer’s evening.

Even to this day the model and its impact largely remain a mystery (despite over 10,000 words of ODD documentation). Occasionally, policy-makers or researchers going about their ordinary business might be heard to say, “Oh yes, the model. The one that kept us inside and ruined the economy. Or perhaps it was the other way around? I really can’t recall – it was all such a blur. Anyway, back to this new social problem – shall we attack it with some big data and ML techniques?”

The fairy dust has vanished but the concrete remains.

And in fairness, while agent-based social simulation remains mystical and our descriptions opaque, we cannot begrudge others for ever choosing concrete over dust (Thompson et al, 2022b).

Conclusions

So what is the moral of these tales? We consolidate our experiences into these main conclusions:

  • No connection means no impact. If modellers wish for their models to be useful before, during or after a crisis, then it is up to them to start establishing a connection and building trust with policymakers.
  • The window of opportunity for policy modelling during crises can be narrow, perhaps only a matter of days. Capturing it requires both that we can supply a model within the timeframe (impossible as it may appear) and that our relationship with stakeholders is already established.
  • Engagement with stakeholders requires knowledge and skills that might be too much to ask of modelers alone, including project management, communication with individuals without a technical background, and insight into the policymaking process.
  • Being useful only sometimes coincides with being excellent: a good model is one that is useful. By investing more in building relationships with policymakers and learning about each other, we stand a better chance of providing the needed insight. Such a shift, however, is radical and requires us to give up our obsession with the models themselves and engage with the fuzziness of the world around us.
  • If we cannot communicate our models effectively, we cannot expect to build trust with end-users over the long term, whether they be policy-makers or researchers. Individual models – and agent-based social simulation in general – need to be better understood, which can only be achieved through greater transparency and communication, however that is accomplished.

As taxing, time-consuming and complex as the process of making policy impact with simulation models might be, it is very much a fight worth fighting; perhaps even more so during crises. Assuming our models would have a positive impact on the world, not striving to make this impact could be considered admitting defeat. Making models useful to policymakers starts with admitting the complexity of their environment and willingness to dedicate time and effort to learn about it and work together. That is how we can pave the way for many more stories with happy endings.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations”, held in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise.

References

Adam, C. and Gaudou, B. (2017) ‘Modelling Human Behaviours in Disasters from Interviews: Application to Melbourne Bushfires’ Journal of Artificial Societies and Social Simulation 20(3), 12. http://jasss.soc.surrey.ac.uk/20/3/12.html. doi: 10.18564/jasss.3395

Badham, J., Barbrook-Johnson, P., Caiado, C. and Castellani, B. (2021) ‘Justified Stories with Agent-Based Modelling for Local COVID-19 Planning’ Journal of Artificial Societies and Social Simulation 24 (1) 8 http://jasss.soc.surrey.ac.uk/24/1/8.html. doi: 10.18564/jasss.4532

Crammond, B. R., & Kishore, V. (2021). The probability of the 6‐week lockdown in Victoria (commencing 9 July 2020) achieving elimination of community transmission of SARS‐CoV‐2. The Medical Journal of Australia, 215(2), 95-95. doi:10.5694/mja2.51146

Thompson, J., McClure, R., Blakely, T., Wilson, N., Baker, M. G., Wijnands, J. S., … & Stevenson, M. (2022). Modelling SARS‐CoV‐2 disease progression in Australia and New Zealand: an account of an agent‐based approach to support public health decision‐making. Australian and New Zealand Journal of Public Health, 46(3), 292-303. doi:10.1111/1753-6405.13221

Thompson, J., McClure, R., Scott, N., Hellard, M., Abeysuriya, R., Vidanaarachchi, R., … & Sundararajan, V. (2022). A framework for considering the utility of models when facing tough decisions in public health: a guideline for policy-makers. Health Research Policy and Systems, 20(1), 1-7. doi:10.1186/s12961-022-00902-6

Voinov, A., & Bousquet, F. (2010). Modelling with stakeholders. Environmental modelling & software, 25(11), 1268-1281. doi:10.1016/j.envsoft.2010.03.007

Vennix, J.A.M. (1996). Group Model Building: Facilitating Team Learning Using System Dynamics. Wiley.

Vonnegut, K. (2004). Lecture to Case College. https://www.youtube.com/watch?v=4_RUgnC1lm8


Johansson, E., Nespeca, V., Sirenko, M., van den Hurk, M., Thompson, J., Narasimhan, K., Belfrage, M., Giardini, F. and Melchior, A. (2023) A Tale of Three Pandemic Models: Lessons Learned for Engagement with Policy Makers Before, During, and After a Crisis. Review of Artificial Societies and Social Simulation, 15 May 2023. https://rofasss.org/2023/05/15/threepandemic


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Making Models FAIR: An educational initiative to build good ABM practices

By Marco A. Janssen1, Kelly Claborn1, Bruce Edmonds2, Mohsen Shahbaznezhadfard1 and Manuela Vanegas-Ferro1

  1. Arizona State University, USA
  2. Manchester Metropolitan University, UK

Imagine a world where models are available to build upon. You do not have to build from scratch and painstakingly try to figure out how published papers obtained their published results. To achieve this utopian world, models have to be findable, accessible, interoperable, and reusable (FAIR). With the “Making Models FAIR” initiative, we seek to contribute to moving towards this world.

The initiative – Making Models FAIR – aims to provide capacity building opportunities to improve the skills, practices, and protocols to make computational models findable, accessible, interoperable and reusable (FAIR). You can find detailed information about the project on the website (tobefair.org), but here we will present the motivations behind the initiative and a brief outline of the activities.

There is increasing interest in making data and model code FAIR, and there is quite a lot of discussion on standards (https://www.openmodelingfoundation.org/). What is lacking are opportunities to gain the skills to do this in practice. We have selected a list of highly cited publications from different domains and developed a protocol for making those models FAIR. The protocol may be adapted over time as we learn what works well.

This list of model publications provides opportunities to learn the skills needed to make models FAIR. The current list is a starting point, and you can suggest alternative model publications as desired. The main goal is to provide the modeling community with a place to build capacity in making models FAIR. How do you use GitHub, code a model in a language or platform of your choice, and write good model documentation? These are necessary skills for collaboration and for developing FAIR models. A suggested way of participating is for an instructor to have student groups take part in this activity, selecting a model publication that is of interest to their research.

To make a model FAIR, we focus on five activities:

  1. If the code is not available with the publication, find out whether the code is available (contact the authors) or replicate the model based on the model documentation. It might also happen that the code is available in programming language X, but you want to have it available in another language.
  2. If the code does not have a license, make sure an appropriate license is selected to make it available.
  3. Get a DOI, which is a permanent link to the model code and documentation. You could use comses.net or zenodo.org or similar services.
  4. Can you improve the model documentation? There is typically a form of documentation in a publication, in the article or an appendix, but is this detailed enough to understand how and why certain model choices have been made? Could you replicate the model from the information provided in the model documentation?
  5. What is the state of the model code? We know that most of us are not professional programmers and might be hesitant to share our code. Good practice is to comment on what different procedures are doing, to define variables, and not to leave all kinds of abandoned ideas commented out in the code base (a minimal sketch of such commenting follows this list).

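To illustrate activity 5, the snippet below is a minimal, hypothetical Python example of the kind of commenting and variable definition we have in mind; the function, parameters, and behaviour are invented for this illustration and do not come from any of the listed models.

```python
# Hypothetical example of FAIR-friendly code style: descriptive names,
# a short docstring, and comments that explain modelling choices
# rather than leaving abandoned ideas commented out.

import random

def update_infections(agents, transmission_probability=0.05):
    """Infect susceptible agents who share a location with an infectious agent.

    agents: list of dicts with keys 'state' ('S', 'I' or 'R') and 'location'.
    transmission_probability: chance of infection per exposed tick.
    """
    # Locations that currently contain at least one infectious agent.
    infectious_locations = {a["location"] for a in agents if a["state"] == "I"}

    for agent in agents:
        # Only susceptible agents in an exposed location can become infected.
        if agent["state"] == "S" and agent["location"] in infectious_locations:
            if random.random() < transmission_probability:
                agent["state"] = "I"
    return agents
```
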
Most of the models listed do not have code available with the publication, which will require participants to contact the original authors to obtain the code and/or to reproduce the code from the model documentation.

We are eager to learn what challenges people experience when making models FAIR. This could help us improve the protocols we provide. We also hope that those who made a model FAIR publish a contribution in RofASSS or relevant modeling journals. For contributions in journals, it would be interesting to use a FAIR model to explore the robustness of the model results, especially for models that were published many years ago, when fewer computational resources were available.

The tobefair.org website contains a lot of detailed information and educational opportunities, including a diagram that illustrates the road map for making models FAIR, so you can easily find the relevant information. Learn more by navigating to the About page and clicking through the diagram.

Making simulation models findable, accessible, interoperable and reusable is an important part of good scientific practice for simulation research. If important models fail to reach this standard, then this makes it hard for others to reproduce, check and extend them. If you want to be involved – to improve the listed models, or to learn the skills to make models FAIR – we hope you will participate in the project by going to tobefair.org and contributing.


Janssen, M.A., Claborn, K., Edmonds, B., Shahbaznezhadfard, M. and Vanegas-Ferro, M. (2023) Making Models FAIR: An educational initiative to build good ABM practices. Review of Artificial Societies and Social Simulation, 8 May 2023. https://rofasss.org/2023/05/11/fair/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Towards an Agent-based Platform for Crisis Management

By Christian Kammler1, Maarten Jensen1, Rajith Vidanaarachchi2 and Cezara Păstrăv1

  1. Department of Computer Science, Umeå University, Sweden
  2. Transport, Health, and Urban Design (THUD) Research Lab, The University of Melbourne, Australia

“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” — John Woods

1 Introduction

Agent-based modelling can be a valuable tool for gaining insight into crises [3], both during a crisis and before it, to increase resilience. However, in the current state of the art, models have to be built up from scratch, which is not well suited to a crisis situation as it hinders quick responses. Consequently, the models do not play the central supportive role that they could. Not only is it hard to compare existing models (given the absence of existing standards) and assess their quality; the most widespread toolkits, such as NetLogo [6], MESA (Python) [4], Repast (Java) [1,5], or Agents.jl (Julia) [2], are also specific to the modelling field and lack the platform support necessary to empower policy makers to use the model (see Figure 1).

Fig. 1. Platform in the middle as a connector between the code and the model and interaction point for the user. It must not require any expert knowledge.

While some of these issues are systemic within the field of ABM (Agent-Based Modelling) itself, we aim to alleviate some of them in this particular context by using a platform purpose-built for developing and using ABM in a crisis. To do so, we view the problem through a multi-dimensional space consisting of the dimensions A-F:

  • A: Back-end to front-end interactivity
  • B: User and stakeholder levels
    – Social simulators to domain experts to policymakers
    – Skills and expertise in coding, modelling and manipulating a model
  • C: Crisis levels (Risk, Crisis, Resilience – also identified as Pre-Crisis, In-Crisis, Post-Crisis)
  • D: Language specific to language independent
  • E: Domain specific to domain-independent (e.g., flooding, pandemic, climate change)
  • F: Required iteration level (Instant, rapid, slow)

A platform can now be viewed as a vector within this space. While all of these axes require in-depth research (for example in terms of correlation or where existing platforms fit), we chose to focus on the functionalities we believe would be the most relevant in ABM for crises.
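
As a rough illustration (our own sketch, not an existing specification or implementation), a platform's position in this space could be recorded as a simple data structure with one field per axis; the field names and example values below are just our shorthand.

```python
from dataclasses import dataclass

@dataclass
class PlatformProfile:
    """A rough position of a platform on the six axes (A-F) described above."""
    interactivity: str     # A: from "back-end only" to "full front-end"
    user_level: str        # B: "social simulator", "domain expert", "policymaker"
    crisis_level: str      # C: "pre-crisis", "in-crisis", "post-crisis"
    language_scope: str    # D: "language-specific" or "language-independent"
    domain_scope: str      # E: e.g. "flooding", or "domain-independent"
    iteration_speed: str   # F: "instant", "rapid", "slow"

# Example: the kind of platform this article argues for.
crisis_platform = PlatformProfile(
    interactivity="front-end usable by non-programmers",
    user_level="policymaker",
    crisis_level="in-crisis",
    language_scope="language-independent",
    domain_scope="domain-independent",
    iteration_speed="rapid",
)
```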

2 Rapid Development

During a crisis, time is compressed (mainly concerning axes C and F): instant and rapid iterations are necessary, while slow iterations are not suitable. As the crisis develops, the model may need to be adjusted to quickly absorb new data, actors, events, and response strategies, leading to new scenarios that need to be modelled and simulated. In this environment, models need to be built with reusability and rapid versioning in mind from the beginning; otherwise every new change makes the model more unstable and less trustworthy.

While a suite of best practices exists in general software development, they are not widely used in the agent-based modelling community. The platform needs a coding environment that favors modular, reusable code, enables easy storage and sharing of such modules in well-organized libraries, and makes it easy to integrate existing modules with new code.

This modularity does not only help with the right side of Figure 1; we can also use it to help with the left side of the figure at the same time. That is, the conceptual model can be part of the respective module, allowing users to quickly determine whether a module is relevant and to understand what it is doing. Furthermore, modularity can be used to create a top-level, drag-and-drop model building environment that allows rapid changes without having to write code (given that we take care of the interface properly).
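
The following is a minimal sketch, in plain Python and under our own assumptions rather than any existing toolkit, of what such a module could look like: executable behaviour bundled with a short conceptual description, a version, and a review flag, so that a platform could list, review, and compose modules without the user having to read the code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BehaviourModule:
    """A reusable model component: code plus its conceptual description."""
    name: str
    description: str              # the attached conceptual model, in prose
    version: str
    reviewed: bool                # whether the module has passed review
    step: Callable[[dict], dict]  # behaviour applied to one agent per tick

def evacuate(agent: dict) -> dict:
    # Hypothetical behaviour: agents in a flooded cell move to a shelter.
    if agent.get("cell_flooded"):
        agent["location"] = "shelter"
    return agent

evacuation_module = BehaviourModule(
    name="flood_evacuation",
    description="Agents leave flooded cells for the nearest shelter.",
    version="0.1",
    reviewed=False,
    step=evacuate,
)

def run_tick(agents: List[dict], modules: List[BehaviourModule]) -> List[dict]:
    """Apply every selected module to every agent: one tick of the model."""
    for module in modules:
        agents = [module.step(a) for a in agents]
    return agents
```

A drag-and-drop environment could then compose a simulation simply by choosing which modules to pass to a function like run_tick, while the description and reviewed fields give non-programmers and reviewers something to work with.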

Having the code and the conceptual model together would also lower the effort required to review these modules. The platform can further help with this task by keeping track of which modules have been reviewed and by versioning the modules, as they can be annotated accordingly. It has to be noted, however, that such a system does not guarantee a trustworthy model, even though it might be up to date in terms of versioning.

3 Model transparency

Another key factor we want to focus on is the stakeholder dimension (axis B). These people are not experts in terms of models (mainly the left side of Figure 1) and thus need extensive support to be empowered to use the simulation in a way that is meaningful for them. While for the visualization side (the how?) we can use insights from Data Visualization, the why side is not that easy.

In a crisis, it is crucial to quickly determine why the model behaves in a certain way in order to interpret the results. Here, the platform can help by offering tools to build model narratives (at agent, group, or whole-population level), to detect events and trends, and to compare model behavior between runs. We can take inspiration from the larger software development field for a few useful ideas on how to visually track model elements, log the behavior of model elements, or raise flags when certain conditions or events are detected. However, we also have to be careful here, as we easily move towards the technical solution side and away from the stakeholder and policy maker. Therefore, more research has to be done on what support policy makers actually need. One avenue here could be techniques from data storytelling.
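
As a minimal sketch of such instrumentation (our own example; the indicator, threshold, and numbers are invented), a platform could log a population-level indicator each tick and raise a flag when a user-defined condition is detected, giving a starting point for the "why" question:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("model_narrative")

def track_run(history, infected_count, tick, alert_threshold=1000):
    """Record an indicator each tick and flag when it crosses a threshold."""
    history.append((tick, infected_count))
    log.info("tick %d: infected=%d", tick, infected_count)

    # Raise a flag (here simply a warning in the log) when the condition is
    # detected, so a user can ask *why* the model behaved this way at that point.
    if infected_count > alert_threshold:
        log.warning("FLAG at tick %d: infected exceeded %d", tick, alert_threshold)

# Example usage with made-up numbers:
history = []
for tick, infected in enumerate([200, 600, 1200, 900]):
    track_run(history, infected, tick)
```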

4 The way forward

What this platform will look like depends on the approaches we take going forward. We think that the following two questions are central (also to prompt further research):

  1. What are relevant roles that can be identified for a platform?
  2. Given a role for the platform, where should it exist within the space described, and what attributes/characteristics should it have?

While these questions are key to identifying whether existing platforms can be extended and shaped in the way we need them or whether we need to build a sandbox from scratch, we strongly advocate for an open source approach. An open source approach can not only help to make use of the range of expertise spread across the field, but also alleviate some of the trust challenges. One of the main challenges is that a trustworthy, well-curated model base with different modules does not yet exist. As such, the platform should aim first to aid in building this shared resource and add more related functionality as it becomes relevant. As for model tracking tools, we should aim for simple tools first and build more complex functionality on top of them later.

A starting point can be to build modules for existing crises, such as earthquakes or floods, where it is possible to pre-identify most of the modelling needs, the level of stakeholder engagement, the level of policymaker engagement, etc.

With this we can establish the process of open-source modelling and learn how to integrate new knowledge quickly, and be potentially better prepared for unknown crises in the future.

Acknowledgements

This piece is a result of discussions at the Lorentz workshop on “Agent Based Simulations for Societal Resilience in Crisis Situations”, held in Leiden, NL, earlier this year. We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such a productive enterprise.

References

  1. Collier, N., North, M.: Parallel agent-based simulation with Repast for high performance computing. SIMULATION 89(10), 1215–1235 (2013), https://doi.org/10.1177/0037549712462620
  2. Datseris, G., Vahdati, A.R., DuBois, T.C.: Agents.jl: a performant and feature-full agent-based modeling software of minimal code complexity. SIMULATION 0(0), 003754972110688 (2022), https://doi.org/10.1177/00375497211068820
  3. Dignum, F. (ed.): Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer International Publishing, Cham (2021)
  4. Kazil, J., Masad, D., Crooks, A.: Utilizing Python for agent-based modeling: The Mesa framework. In: Thomson, R., Bisgin, H., Dancy, C., Hyder, A., Hussain, M. (eds.) Social, Cultural, and Behavioral Modeling. pp. 308–317. Springer International Publishing, Cham (2020)
  5. North, M.J., Collier, N.T., Ozik, J., Tatara, E.R., Macal, C.M., Bragen, M., Sydelko, P.: Complex adaptive systems modeling with Repast Simphony. Complex Adaptive Systems Modeling 1(1), 3 (March 2013), https://doi.org/10.1186/2194-3206-1-3
  6. Wilensky, U.: NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL (1999), http://ccl.northwestern.edu/netlogo/

Kammler, C., Jensen, M., Vidanaarachchi, R. and Păstrăv, C. (2023) Towards an Agent-based Platform for Crisis Management. Review of Artificial Societies and Social Simulation, 10 May 2023. https://rofasss.org/2023/05/10/abm4cm


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Designing Crisis Models: Report of Workshop Activity and Prospectus for Future Research

By: Mike Bithell1, Giangiacomo Bravo2, Edmund Chattoe-Brown3, René Mellema4, Harko Verhagen5 and Thorid Wagenblast6

  1. Formerly Department of Geography, University of Cambridge
  2. Center for Data Intensive Sciences and Applications, Linnaeus University
  3. School of Media, Communication and Sociology, University of Leicester
  4. Department of Computing Science, Umeå Universitet
  5. Department of Computer and Systems Sciences, Stockholm University
  6. Department of Multi-Actor Systems, Delft University of Technology

Background

This piece arose from a Lorentz Center (Leiden) workshop on Agent Based Simulations for Societal Resilience in Crisis Situations held from 27 February to 3 March 2023 (https://www.lorentzcenter.nl/agent-based-simulations-for-societal-resilience-in-crisis-situations.html). During the week, our group was tasked with discussing requirements for Agent-Based Models (hereafter ABM) that could be useful in a crisis situation. Here we report on our discussion and propose some key challenges for research and platform support where models deal with such situations.

Introduction

When it comes to crisis situations, modelling can provide insights into which responses are best, how to avoid further negative spill-over consequences of policy interventions, and which arrangements could be useful to increase present or future resilience. This approach can be helpful in preparation for a crisis situation, for management during the event itself, or in the post-crisis evaluation of response effectiveness. Further, evaluation of performance in these areas can also lead to subsequent progressive improvement of the models themselves. However, to serve these ends, models need to be built in the most effective way possible. Part of the goal of this piece is to outline what might be needed to make such models effective in various ways and why: reliability, validity, flexibility and so on. Often, diverse models seem to be built ad hoc when the crisis situation occurs, putting the modellers under time pressure, which can lead to important system aspects being neglected (https://www.jasss.org/24/4/reviews/1.html). This is part of a more general tendency, contrary to, say, the development of climate modelling, to merely proliferate ABM rather than progress them (https://rofasss.org/2021/05/11/systcomp/). Therefore, we propose some guidance on how to make models for crises that may better inform policy makers about the potential effects of the policies under discussion. Furthermore, we draw attention to the fact that modelling may need to be just one part of a wider process of crisis response that occurs both before and after the crisis and not just while it is happening.

Crisis and Resilience: A Working Definition

A crisis can be defined as a situation in which an initial (relatively stable) state is disrupted in some way (e.g., through a natural disaster such as a flood) and after some time reaches a new, relatively stable state, possibly inferior (or rarely superior – as when an earthquake leads to reconstruction of safer housing) to the initial one (see Fig. 1).


Fig. 1: Potential outcomes of a disruption of an initial (stable) state.

While some data about daily life may be routinely collected for the initial state, and perhaps as the disruption evolves, it is rarely known how the disruption will affect the initial state and how it will subsequently evolve into the new state. (The non-feasibility of collecting much data during a crisis may also draw attention to methods that can more effectively be used, for example, oral history data – see, for example, Holmes and Pilkington 2011.) ABM can help increase the understanding of those changes by providing justified – i.e. process-based – scenarios under different circumstances. Based on this definition, and justifying it, we can identify several distinct senses of resilience (for a wider theoretical treatment see, for example, Holling 2001). We decided to use the example of flooding because the group did not have much pre-existing expertise in it and because it seemed like a fairly typical kind of crisis from which to draw potentially generalisable conclusions. However, it should be recognised that not all crises are “known” and building effective resilience capacity for “unknown” crises (like alien invasion) remains an open challenge.

Firstly, a system can be resilient if it is able to return quickly to a desirable state after disruption. For example, a system that allows education and healthcare to become available again in at least their previous forms soon after the water goes down.

Secondly, however, the system is not resilient if it cannot return to anything like its original state (i.e. the society was only functioning at a particular level because it happened that there was no flood in a flood zone), usually owing to resource constraints, poor governance and persistent social inequality. (It is probably only higher income countries that can afford to “build back better” after a crisis. All low income countries can often do is hope crises do not happen.) This raises the possibility that more should be invested in resilience without immediate payoff, to create a state one can actually return to (or, better, one where vulnerability is reduced) rather than a “Fool’s Paradise” state. This would involve comparison of future welfare streams and potential trade-offs under different investment strategies.

Thirdly, and probably more usually, the system can be considered resilient if it can deliver alternative modes of provision (for example of food) during the crisis. People can no longer go shopping when they want but they can be fed effectively at local community centres which they are nonetheless able to reach despite the flood water.

The final insight that we took from these working definitions is that daily routines operate over different time scales and it may be these scales that determine the unfolding nature of different crises. For example, individuals in a flood area must immediately avoid drowning. They will very rapidly need clean water to drink and food to eat. Soon after, they may well have shelter requirements. After that, there may be a need for medical care and only in the rather longer term for things like education, restored housing and community infrastructure.

Thus, an effective response to a crisis is one that is able to provide what is needed over the timescale at which it occurs (initially escape routes or evacuation procedures, then distribution of water and food and so on), taking into account different levels of need. It is an inability to do this (or one goal conflicting with another as when people escape successfully but in a way that means they cannot then be fed) which leads to the various causes of death (and, in the longer term things like impoverishment – so ideally farmers should be able to save at least some of their livestock as well as themselves) like drowning, starvation, death by waterborne diseases and so on. The effects of some aspects of a crisis (like education disruption and “learning loss”, destruction of community life and of mental health or loss of social capital) may be very long term if they cannot be avoided (and there may therefore be a danger of responding mainly to the “most obvious” effects which may not ultimately be the most damaging).

Preparing for the Model

To deal effectively with a crisis, it is crucial not to “just start building an ABM”, but to approach construction in a structured manner. First, the initial state needs to be defined and modelled. As well as making use of existing data (and perhaps identifying the need to collect additional data going forward, see Gilbert et al. 2021), this is likely to involve engaging with stakeholders, including policy makers, to collect information, for example, on decision-making procedures. Ideally, the process will be carried out in advance of the crisis and regularly updated if changes in the represented system occur (https://rofasss.org/2018/08/22/mb/). This idea is similar to a digital twin https://www.arup.com/perspectives/digital-twin-managing-real-flood-risks-in-a-virtual-world or the “PetaByte Playbook” suggested by Joshua Epstein – Epstein et al. 2011. Second, as much information as possible about potential disruptions should be gathered. This is the sort of data often revealed by emergency planning exercises (https://www.osha.gov/flood), for example involving flood maps, climate/weather assessments (https://check-for-flooding.service.gov.uk/)  or insight into general system vulnerabilities – for example the effects of parts of the road network being underwater – as well as dissections of failed crisis responses in the particular area being modelled and elsewhere (https://www.theguardian.com/environment/2014/feb/02/flooding-winter-defences-environment-climate-change). Third, available documents such as flood plans (https://www.peterborough.gov.uk/council/planning-and-development/flood-and-water-management/water-data) should be checked to get an idea of official crisis response (and also objectives, see below) and thus provide face validity for the proposed model. It should be recognised that certain groups, often disadvantaged, may be engaging in activities – like work – “under the radar” of official data collection: https://www.nytimes.com/2021/09/27/nyregion/hurricane-ida-aid-undocumented-immigrants.html. Engaging with such communities as well as official bodies is likely to be an important aspect of successful crisis management (e.g. Mathias et al. 2020). The general principle here is to do as much effective work as possible before any crisis starts and to divide what can be done in readiness from what can only be done during or after a crisis.

Scoping the Model

As already suggested above, one thing that can and should be done before the crisis is to scope the model for its intended use. This involves reaching a consensus on who the model and its outputs are for and what it is meant to achieve. There is some tendency in ABM for modellers to assume that whatever model they produce (even if they don’t attend much to a context of data or policy) has to be what policy makers and other users need. Besides asking policy makers, this may also require the negotiation of power relationships so that the needs of the model don’t just reflect the interests/perspective of politicians but also those of numerous important but “politically weak” groups like small scale farmers or local manufacturers. Scoping refers not just to technical matters (Is the code effectively debugged? What evidence can be provided that the policy makers should trust the model?) but also to “softer” preparations like building trust and effective communication with the policy makers themselves. This should probably focus any literature reviewing exercise on flood management using models that are at least to some extent backed by participatory approaches (for example, work like Mehryar et al. 2021 and Gilligan et al. 2015). It would also be useful to find some way to get policy makers to respond effectively to the existing set of models, to direct what can most usefully be “rescued” from them in a user context. (The models that modellers like may not be the ones that policy makers find most useful.)

At the same time, participatory approaches face the unavoidable challenge of interfacing with the scientific process. No matter how many experts believe something to be true, the evidence may nonetheless disagree. So another part of the effective collaboration is to make sure that, whatever its aims, the model is still constructed according to an appropriate methodology (for example being designed to answer clear and specific research questions). This aim obliges us to recognise that the relationship between modellers and policy makers may not just involve evidence and argument but also power, so that modellers then have to decide what compromises they are willing to make to maintain a relationship. In the limit, this may involve negotiating the popular perception that policy makers only listen to academics when they confirm decisions that have already been taken for other reasons. But the existence of power also suggests that modelling may not only be effective with current governments (the most “obvious” power source) but also with opposition parties, effective lobbyists, and NGOs, in building bridges to enhance the voice of “the academic community” and so on.

Finally, one important issue may be to consider whether “the model” is a useful response at all. In order to make an effective compromise (or meet various modelling challenges) it might be necessary to design a set of models with different purposes and scales and consider how/whether they should interface. The necessity for such integration in human-environments systems is already widely recognised (see for example Luus et al. 2013) but it may need to be adjusted more precisely to crisis management models. This is also important because it may be counter-productive to reify policy makers and equate them to the activities of the central government. It may be more worthwhile to get emergency responders or regional health planners, NGOs or even local communities interested in the modelling approach in the first instance.

Large Scale Issues of Model Design

Much as with the research process generally, effective modelling has to proceed through a sequence of steps, each one dependent on the quality of the steps before it. Having characterised a crisis (and looked at existing data/modelling efforts) and achieved a workable measure of consensus regarding who the model is for and (broadly) what it needs to do, the next step is to consider large scale issues of model design (as opposed, for example, to specific details of architecture or coding.)

Suppose, for example, that a model was designed to test scenarios to minimise the death toll in the flooding of a particular area so that governments could focus their flood prevention efforts accordingly (build new defences, create evacuation infrastructure, etc.) The sort of large scale issues that would need to be addressed are as follows:

Model Boundaries: Does it make sense just to model the relevant region? Can deaths within the region be clearly distinguished from those outside it (for example people who escape to die subsequently)? Can the costs and benefits of specific interventions similarly be limited to being clearly inside a model region? What about the extent to which assistance must, by its nature, come from outside the affected area? In accordance with general ABM methodology (Gilbert and Troitzsch 2005), the model needs to represent a system with a clearly and coherently specified “inside” and “outside” to work effectively. This is another example of an area where there will have to be a compromise between the sway of policy makers (who may prefer a model that can supposedly do everything) and the value of properly followed scientific method.

Model Scale: This will also inevitably be a compromise between what is desirable in the abstract and what is practical (shaped by technical issues). Can a single model run with enough agents to unfold the consequences of a year after a flood over a whole region? If the aim is to consider only deaths, then does it need to run that long or that widely? Can the model run fast enough (and be altered fast enough) to deliver the answers that policy makers need over the time scale at which they need them? This kind of model practicality, when compared with the “back of an envelope” calculations beloved of policy advisors, is also a strong argument for progressive modelling (where efforts can be combined in one model rather than diffused among many.)

Model Ontology: One advantage of the modelling process is to serve as a checklist for necessary knowledge. For example, we have to assume something about how individuals make decisions when faced with rising water levels. Ontology is about the evidence base for putting particular things in models or modelling in certain ways. For example, on what grounds do we build an ABM rather than a System Dynamics model beyond doing what we prefer? On what grounds are social networks to be included in a model of emergency evacuation (for example that people are known to rescue not just themselves but their friends and kin in real floods)? Based on wider experience of modelling, the problems here are that model ontologies are often non-empirical, that the assumptions of different models contradict each other and so on. It is unlikely that we already have all the data we need to populate these models but we are required for their effectiveness to be honest about the process where we ideally proceed from completely “made up” models to steadily increasing quality/consensus of ontology. This will involve a mixture of exploring existing models, integrating data with modelling and methods for testing reliability, and perhaps drawing on wider ideas (like modularisation where some modellers specialise in justifying cognitive models, others in transport models and so on). Finally, the ontological dimension may have to involve thinking effectively about what it means to interface a hydrological model (say) with a model of human behaviour and how to separate out the challenges of interfacing the best justified model of each kind. This connects to the issue above about how many models we may need to build an effective compromise with the aims of policy makers.

It should be noted that these dimensions of large scale design may interact. For example, we may need less fine grained models of regions outside the flooded area to understand the challenges of assistance (perhaps there are infrastructure bottlenecks unrelated to the flooding) and escape (will we be able to account for and support victims of the flood who scatter to friends and relatives in other areas? Might escapees create spill over crises in other regions of a low income country?). Another example of such interactions would be that ecological considerations might not apply to very short term models of evacuation but might be much more important to long term models of economic welfare or environmental sustainability in a region. It is instructive to recall that in Ancient Egypt, it was the absence of Nile flooding that was the disaster!

Technical Issues: One argument in favour of trying to focus on specific challenges (like models of flood crises suitable for policy makers) is that they may help to identify specific challenges to modelling or innovations in technique. For example, if a flooding crisis can be usefully divided into phases (immediate, medium and long term) then we may need sets of models each of which creates starting conditions for the next. We are not currently aware of any attention paid to this “model chaining” problem. Another example is the capacity that workshop participants christened “informability”, the ability of a model to easily and quickly incorporate new data (and perhaps even new behaviours) as a situation unfolds. There is a tendency, not always well justified, for ABM to be “wound up” with fixed behaviours and parameters and just left to run. This is only sometimes a good approximation to the social world.
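As a purely illustrative sketch (the class and method names below are assumptions, not an existing framework), “model chaining” and “informability” could be expressed along these lines: each phase model hands its end state to the next phase, and can absorb newly arriving data while it runs.

```python
class PhaseModel:
    """One phase of a crisis model (e.g. immediate, medium or long term)."""

    def __init__(self, initial_state: dict):
        self.state = dict(initial_state)      # e.g. {"evacuated": 0, "flooded_cells": 120}

    def inform(self, new_data: dict):
        """'Informability': fold newly observed data into the running model."""
        self.state.update(new_data)

    def run(self, steps: int) -> dict:
        """Advance the phase; the returned state seeds the next phase in the chain."""
        for _ in range(steps):
            pass                               # placeholder for the phase-specific dynamics
        return self.state


# Chain the phases: each one is initialised from the end state of the previous.
immediate = PhaseModel({"evacuated": 0, "flooded_cells": 120})
immediate.inform({"flooded_cells": 150})       # fresh survey data arrives mid-crisis
medium_term = PhaseModel(immediate.run(steps=48))
long_term = PhaseModel(medium_term.run(steps=365))
```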

Crisis, Response and Resilience Features: This has already been touched on in the preparatory phase but is also clearly part of large scale model design. What is known (and needs to be known) about the nature of flooding? (For example, one important factor we discovered from looking at a real flood plan was that in locations with dangerous animals, additional problems can be created by these also escaping to unflooded locations (https://www.youtube.com/watch?v=PPpvciP5im8). We would have never worked that out “from the armchair”, meaning it would be left out of a model we would have created.) What policy interventions are considered feasible and how are they supposed to work? (Sometimes the value of modelling is just to show that a plausible sounding intervention doesn’t actually do what you expect.) What aspects of the system are likely to promote (tendency of households to store food) or impede (highly centralised provision of some services) resilience in practice? (And this in turn relates to a good understanding of as many aspects of the pre-crisis state as possible.)

Although a “single goal” model has been used as an example, it would also be a useful thought experiment to consider how the model would need to be different if the aim was the conservation of infrastructure rather than saving lives. When building models really intended for crisis management, however, single issue models are likely to be problematic, since they might show damage in different areas but make no assessment of trade-offs. We experienced a recent example of this in epidemiological COVID models that focused on COVID deaths but not on deaths caused by postponed operations or on the health impact of the economic costs of interventions – for example, depression and suicide caused by business failure. For examples of attempts at multi-criteria analyses see the UK NEA synthesis of key findings (http://uknea.unep-wcmc.org/Resources/tabid/82/Default.aspx) and the IPCC AR6 synthesis for policy makers (https://report.ipcc.ch/ar6syr/pdf/IPCC_AR6_SYR_SPM.pdf).

Model Quality Assurance and “Overheads”

Quality assurance runs right through the development of effective crisis models. Long before you start modelling, it is necessary to have an agreement on what the model should do, and the challenge of ontology is to justify why the model is as it is, and not some other way, to achieve this goal successfully. Here, ABM might benefit from more clearly following the idea of “research design”: a clear research question leading to a specifically chosen method, corresponding data collection and analysis leading to results that “provably” answer the right question. This is clearly very different from the still rather widespread “here’s a model and it does some stuff” approach. But the large scale design for the model should also (feeding into the specifics of implementation) set up standards to decide how the model is performing. In the case of crises rather than everyday repeated behaviours, this may require creative conceptual thinking about, for instance, “testing” the model on past flooding incidents (perhaps building on ideas about retrodiction, see, for example, Krebs and Ernst 2017). At the same time, it is necessary to be aware of the “overheads” of the model: what new data must be collected to fill discovered gaps in the ontology, and what existing data must continue to be collected to keep the model effective? Finally, attention must be paid to mundane quality control. How do we assure the absence of disastrous programming bugs? How sensitive is the model to specific assumptions, particularly those with limited empirical support? The answers to these questions obviously matter far more when someone is actually using the model for something “real” and where decisions may be taken that affect people’s livelihoods.

The “Dark Side”

It is also necessary to have a reflexive awareness of ways in which floods are not merely technocratic or philanthropic events. What if the unstated aims of a government in flood control are actually preserving the assets of their political allies? What if a flood model needs to take account of looters and rapists as well as the thirsty and homeless? And, of course, the modellers themselves have to guard against the possibility that models and their assumptions discriminate against the poor, the powerless, or the “socially invisible”. For example, while we have to be realistic about answering the questions that policy makers want answered, we also have to be scientifically critical about what problems they show no interest in.

Conclusion and Next Steps

One way to organise the conclusion of a rather wide-ranging group discussion is to say that the next steps are to make the best use of what already exists and (building on this) to most effectively discover what does not. This could be everything from a decent model of “decision making” during panic to establishing good will from relevant policy makers. At the same time, the activities proposed have to take place within a broad context of academic capabilities and dissemination channels (when people are very busy and have to operate within academic incentive structures). This process can be divided into a number of parts.

  • Getting the most out of models: What good work has been done in flood modelling and on what basis do we call it good? What set of existing model elements can we justify drawing on to build a progressive model? This would be an obvious opportunity for a directed literature review, perhaps building on the recent work of Zhuo and Han (2020).
  • Getting the most out of existing data: What is actually known about flooding that could inform the creation of better models? Do existing models use what is already known? Are there stylised facts that could prune the existing space of candidate models? Can an ABM synthesise interviews, statistics and role playing successfully? How? What appears not to be known? This might also suggest a complementary literature review or “data audit”. This data auditing process may also create specific sub-questions: How much do we know about what happens during a crisis and how do we know it? (For example, rather than asking responders to report when they are busy and in danger, could we make use of offline remote analysis of body cam data somehow?)
  • Getting the most out of the world: This involves combining modelling work with the review of existing data to argue for additional or more consistent data collection. If data matters to the agreed effectiveness of the model, then somehow it has to be collected. This is likely to be carried out through research grants or negotiation with existing data collection agencies and (except in a few areas like experiments) seems to be a relatively neglected aspect of ABM.
  • Getting the most out of policy makers: This is probably the largest unknown quantity. What is the “opening position” of policy makers on models and what steps do we need to take to move them towards a collaborative position if possible? This may have to be as basic as re-education from common misperceptions about the technique (for example that ABM are unavoidably ad hoc.) While this may include more standard academic activities like publishing popular accounts where policy makers are more likely to see them, really the only way to proceed here seems to be to have as many open-minded interactions with as many relevant people as possible to find out what might help the dialogue next.
  • Getting the most out of the population: This overlaps with the other categories. What can the likely actors in a crisis contribute before, during and after the crisis to more effective models? Can there be citizen science to collect data or civil society interventions with modelling justifications? What advantages might there be to discussions that don’t simply occur between academics and central government? This will probably involve the iteration of modelling, science communication and various participatory activities, all of which are already carried out in some areas of ABM.
  • Getting the most out of modellers: One lesson from the COVID crisis is that there is a strong tendency for the ABM community to build many separate (and ultimately non-comparable) models from scratch. We need to think both about how to enforce responsibility for quality where models are actually being used and also whether we can shift modelling culture towards more collaborative and progressive modes (https://rofasss.org/2020/04/13/a-lot-of-time-and-many-eyes/). One way to do this may be precisely to set up a test case on which people can volunteer to work collaboratively to develop this new approach in the hope of demonstrating its effectiveness.

If this piece can get people to combine to make these various next steps happen then it may have served its most useful function!

Acknowledgements

This piece is a result of discussions (both before and after the workshop) by Mike Bithell, Giangiacomo Bravo, Edmund Chattoe-Brown, Corinna Elsenbroich, Aashis Joshi, René Mellema, Mario Paolucci, Harko Verhagen and Thorid Wagenblast. Unless listed as authors above, these participants bear no responsibility for the final form of the written document summarising the discussion! We are grateful to the organisers of the workshop and to the Lorentz Center as funders and hosts for such productive enterprises.

References

Epstein, J. M., Pankajakshan, R., and Hammond, R. A. (2011) ‘Combining Computational Fluid Dynamics and Agent-Based Modeling: A New Approach to Evacuation Planning’, PLoS ONE, 6(5), e20139. doi:10.1371/journal.pone.0020139

Gilbert, N., Chattoe-Brown, E., Watts, C., and Robertson, D. (2021) ‘Why We Need More Data before the Next Pandemic’, Sociologica, 15(3), pp. 125-143. doi:10.6092/issn.1971-8853/13221

Gilbert, N. G., and Troitzsch, K. G. (2005) Simulation for the Social Scientist (Buckingham: Open University Press).

Gilligan, J. M., Brady, C., Camp, J. V., Nay, J. J., and Sengupta, P. (2015) ‘Participatory Simulations of Urban Flooding for Learning and Decision Support’, 2015 Winter Simulation Conference (WSC), Huntington Beach, CA, USA, pp. 3174-3175. doi:10.1109/WSC.2015.7408456.

Holling, C. (2001) ‘Understanding the Complexity of Economic, Ecological, and Social Systems’, Ecosystems, 4, pp. 390-405. doi:10.1007/s10021-001-0101-5

Holmes, A. and Pilkington, M. (2011) ‘Storytelling, Floods, Wildflowers and Washlands: Oral History in the River Ouse Project’, Oral History, 39(2), Autumn, pp. 83-94. https://www.jstor.org/stable/41332167

Krebs, F. and Ernst, A. (2017) ‘A Spatially Explicit Agent-Based Model of the Diffusion of Green Electricity: Model Setup and Retrodictive Validation’, in Jager, W., Verbrugge, R., Flache, A., de Roo, G., Hoogduin, L. and Hemelrijk, C. (eds.) Advances in Social Simulation 2015 (Cham: Springer), pp. 217-230. doi:10.1007/978-3-319-47253-9_19

Luus, K. A., Robinson, D. T., and Deadman, P. J. (2013) ‘Representing ecological processes in agent-based models of land use and cover change’, Journal of Land Use Science, 8(2), pp. 175-198. doi:10.1080/1747423X.2011.640357

Mathias, K., Rawat, M., Philip, S. and Grills, N. (2020) ‘“We’ve Got Through Hard Times Before”: Acute Mental Distress and Coping among Disadvantaged Groups During COVID-19 Lockdown in North India: A Qualitative Study’, International Journal for Equity in Health, 19, article 224. doi:10.1186/s12939-020-01345-7

Mehryar, S., Surminski, S., and Edmonds, B. (2021) ‘Participatory Agent-Based Modelling for Flood Risk Insurance’, in Ahrweiler, P. and Neumann, M. (eds) Advances in Social Simulation, ESSA 2019 (Springer: Cham), pp. 263-267. doi:10.1007/978-3-030-61503-1_25

Zhuo, L. and Han, D. (2020) ‘Agent-Based Modelling and Flood Risk Management: A Compendious Literature Review’, Journal of Hydrology, 591, 125600. doi:10.1016/j.jhydrol.2020.125600


Bithell, M., Bravo, G., Chattoe-Brown, E., Mellema, R., Verhagen, H. and Wagenblast, T. (2023) Designing Crisis Models: Report of Workshop Activity and Prospectus for Future Research. Review of Artificial Societies and Social Simulation, 3 May 2023. https://rofasss.org/2023/05/03/designingcrisismodels


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

“One mechanism to rule them all!” A critical comment on an emerging categorization in opinion dynamics

By Sven Banisch

Department for Sociology, Institute of Technology Futures
Karlsruhe Institute of Technology

It has become common in the opinion dynamics community to categorize different models according to how two agents i and j change their opinions oi and oj in interaction (Flache et al. 2017, Lorenz et al. 2021, Keijzer and Mäs 2022). Three major classes have emerged. First, models of assimilation or positive social influence are characterized by a reduction of opinion differences in interaction as achieved, for instance, by classical models with averaging (French 1956, Friedkin and Johnson 2011). Second, in models with repulsion or negative influence agents may be driven further apart if they are already too distant (Jager and Amblard 2005, Flache and Macy 2011). Third, reinforcement models are characterized by the fact that agents on the same side of the opinion spectrum reinforce their opinion and go more extreme (Martins 2008, Banisch and Olbrich 2019, Baumann et al. 2020). While this categorization is useful for differentiating different classes of models along with their assumptions, for assessing if different model implementations belong to the same class, and for understanding the macroscopic phenomena that can be expected, it is not without problems and may lead to misclassification and misunderstanding.

This comment aims to provide a critical — yet constructive — perspective on this emergent theoretical language for model synthesis and comparison. It directly links to a recent comment in this forum (Carpentras 2023) that describes some of the difficulties that researchers face when developing empirically grounded or validated models of opinion dynamics, which often “do not conform to the standard framework of ABM papers”. I had very similar experiences during a long review process for a paper (Banisch and Shamon 2021) that, in my view, rigorously advances argument communication theory — and its models — through experimental research. In large part, the process has been so difficult because authors from different branches of opinion dynamics speak different languages, and I feel that some conventions may settle us into a “vicious cycle of isolation” (Carpentras 2021) and closure. But rather than suggesting a divide into theoretically and empirically oriented opinion dynamics research, I would like to work towards a common ground for empirical and theoretical ABM research through a more accurate use of opinion dynamics language.

The classification scheme for basic opinion change mechanisms might be particularly problematic for opinion models that take cognitive mechanisms and more complex opinion structures into account. These often more complex models are required in order to capture linguistic associations observed in real debates, or to better link to a specific experimental design. In this note, I will look at argument communication models (ACMs) (Mäs and Flache 2013, Feliciani et al. 2021, Banisch and Olbrich 2021, Banisch and Shamon 2021) to show how theoretically inspired model classification can be misleading. I will first show that the classical ACM by Mäs and Flache (2013) has been repeatedly misclassified as a reinforcement model, while it is purely averaging when looking at the implied attitude changes. Second, only when biased processing is incorporated into argument-induced opinion changes, such that agents favor arguments aligned with their opinion, do ACMs become reinforcing or contagious (Lorenz et al. 2021). Third, when biases become large, ACMs may feature patterns of opinion adaptation which — according to the above categorization — would be considered negative influence.

Opinion change functions for the three model classes

Let us start by looking at the opinion change assumptions entailed in “typical” positive and negative influence and reinforcement models. Following Flache et al. (2017) and Lorenz et al. (2021), we will consider opinion change functions of the following form:

Δoi=f(oi,oj).

That is, the opinion change of agent i is given as a function of i’s opinion and the opinion of an interaction partner j. This is sufficient to characterize an ABM with dyadic interaction where repeatedly two agents with opinions (oi,oj) are chosen at random and f(oi,oj) is applied. Here we deal with continuous opinions in the interval oi∈[-1,1], the context in which these model categorizations have mainly been introduced. Notice that some authors refer to f as an influence response function, but as this notion has been introduced in the context of discrete choice models (Lopez-Pintado and Watts 2008, Mäs 2021) governing the behavioral response of agents to the behavior in their neighborhood, we will stick to the term opinion change function (OCF) here. OCFs hence map from two opinions to the induced opinion change, [-1,1]² → ℝ, and we can depict them in the form of a contour density vector plot as shown in Figure 1.

The simplest form of a positive influence OCF is weighted averaging:

Δoi=μ(oj-oi).

That is, an agent i approaches the opinion of another agent j by a parameter μ times the distance between i and j. This function is shown on the left of Figure 1. If oi<oj (above the diagonal where oj=oi), i approaches the opinion of j from below. The opinion change is positive, indicating a shift to the right (red shades). If oj<oi (below the diagonal), i approaches j from above, implying a negative opinion change and a shift to the left (blue shades). Hence, agents to the left of the diagonal will shift rightwards, and agents to the right of the diagonal will shift to the left.
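As a minimal illustration — not code from any of the cited papers — the dyadic update scheme and the averaging OCF can be sketched in a few lines of Python; the population size, the number of steps and μ are arbitrary illustrative choices.

```python
import random

def positive_influence(oi, oj, mu=0.3):
    """Weighted averaging OCF: i moves a fraction mu of the way towards j."""
    return mu * (oj - oi)

def simulate(ocf, n_agents=100, steps=10_000):
    """Generic dyadic dynamics: repeatedly pick a random ordered pair (i, j)
    and apply the opinion change function to i."""
    opinions = [random.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = random.sample(range(n_agents), 2)
        opinions[i] += ocf(opinions[i], opinions[j])
    return opinions

# With pure averaging the population converges towards consensus.
final_opinions = simulate(positive_influence)
```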

Macroscopically, these models are well-known to converge to consensus on connected networks. However, Deffuant et al. (2000) and Hegselmann and Krause (2002) introduced bounded confidence to circumvent global convergence — and many others have followed with more sophisticated notions of homophily. This class of models (models with similarity bias in Flache et al. 2017) affects the OCF essentially by setting f=0 for opinion pairs that are beyond a certain distance threshold from the diagonal. I will briefly comment on homophily later.
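In OCF terms, bounded confidence is then a one-line modification of the averaging function (ε below is an arbitrary illustrative threshold); plugged into the loop sketched above, it produces the familiar clustering into separate opinion camps.

```python
def bounded_confidence(oi, oj, mu=0.3, eps=0.3):
    """Averaging with similarity bias: no change beyond distance eps from the diagonal."""
    return mu * (oj - oi) if abs(oi - oj) <= eps else 0.0
```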

Negative influence can be seen as an extension of bounded confidence such that opinion pairs that are too distant will lead to a repulsive force driving opinions further apart. As in the review by Flache et al. (2017), we rely on the OCF from Jager and Amblard (2005) as the paradigmatic case. However, the function shown in Flache et al. (2017) seems to be slightly mistaken, so we resort to the original implementation of negative influence by Jager and Amblard (2005):

In that implementation, if the opinion distance |oi - oj| is below a threshold u, we have positive influence as before. If the distance |oi - oj| is larger than a second threshold t, there is repulsive influence such that i is driven away from j. In between these two thresholds, there is a band of no opinion change, f(oi,oj)=0, just as for bounded confidence. This function is shown in the middle of Figure 1 (u=0.4 and t=0.7). In this case, we observe a left shift towards a more negative opinion (blue shades) above the diagonal and sufficiently far from it (governed by t). By symmetry, a right shift to a more positive opinion is observed below the diagonal when oi is sufficiently larger than oj. Negative influence is at work in these regions such that an agent i on the left side of the opinion scale (oi<0) will shift towards an even more leftist position when interacting with a rightist agent j with opinion oj>0 (and symmetrically on the other side).

Notice also that this implementation does not ensure opinions are bound to the interval [-1,1] as negative opinion changes are present even if oi is already at a value of -1. Vice versa for the positive extreme. Typically this is artificially resolved by forcing opinions back to the interval once they exceed it, but a more elegant and psychologically motivated solution has been proposed in Lorenz et al. (2021) by introducing a polarity factor (incorporated below).
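Since the original equation is not reproduced here, the following is only a sketch of the piecewise rule as described verbally above, with the polarity factor from Lorenz et al. (2021) added as an option; the exact coefficients in Jager and Amblard (2005) may differ.

```python
def negative_influence(oi, oj, mu=0.3, u=0.4, t=0.7, polarity=False):
    """Attraction below distance u, repulsion above distance t, no change in between."""
    d = abs(oi - oj)
    if d < u:
        change = mu * (oj - oi)        # assimilation, as in plain averaging
    elif d > t:
        change = -mu * (oj - oi)       # repulsion: i is pushed away from j
    else:
        change = 0.0
    if polarity:
        change *= 1 - oi ** 2          # polarity factor (Lorenz et al. 2021) damps changes near the extremes
    return change
```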

Finally, reinforcement models are characterized by the fact that agents on the same side of the opinion scale become stronger in interaction. As pointed out by Lorenz et al. (2021), the most paradigmatic case of reinforcement is simple contagion, and the OCF used here for illustration is adapted from their formulation:

Δoi=αSign(oj).

That is, agent j signals whether she is in favor (oj>0) or against (oj<0) the object of opinion, and agent i adjusts his opinion by taking a step α in that direction. This means that positive opinion change is observed whenever i meets an agent with an opinion larger than zero. Agent i’s opinion will shift rightwards and become more positive. Likewise, a negative opinion change and shift to the left is observed whenever oj is negative. Notice that, in reinforcement models, opinions assimilate when two agents of opposing opinions interact so that induced opinion changes are similar to positive influence in some regions of the space. As for negative influence, this OCF does not ensure that opinions remain in [-1,1], but see Banisch and Olbrich (2019) for a closely related reinforcement learning model that endogenously remains bound to the interval.
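In code, the contagion OCF is a one-liner (α is an illustrative step size):

```python
def contagion(oi, oj, alpha=0.05):
    """Reinforcement by simple contagion: i steps towards the side of the scale j is on."""
    return alpha * (1 if oj > 0 else -1 if oj < 0 else 0)
```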

Argument-induced opinion change

Compared to models that fully operate on the level of opinions oi∈[-1,1] and are hence completely specified by an OCF, argument-based models are slightly more complex and the derivation of OCFs from the model rules is not straightforward. But let us first, at least briefly, describe the model as introduced in Banisch and Shamon (2021).

In the model, agents hold M pro-arguments and M counterarguments, each of which may be either zero (disbelief) or one (belief). The opinion of an agent is defined as the number of believed pro arguments minus the number of believed con arguments. For instance, if an agent believes 3 pro arguments and only one con argument, her opinion will be oi=2. For the purposes of this illustration, we will normalize opinions to lie between -1 and 1, which is achieved by division by M: oi → oi/M. In interaction, agent j acts as a sender articulating an argument to a receiving agent i. The receiver i takes over that argument with probability

pβ = 1/(1 + exp(-β oi dir(arg)))

where the function dir(arg) designates whether the new argument implies positive or negative opinion change. This probability accounts for the fact that agents are more willing to accept information that coheres with their opinion. The free parameter β models the strength of this bias.
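A sketch of one interaction step of the model might look as follows. The biased acceptance probability is a direct transcription of the equation above; how the sender selects the argument she articulates is an assumption made for illustration here (Banisch and Shamon 2021 may specify it differently).

```python
import math
import random

def acceptance_prob(oi, direction, beta):
    """Biased processing: probability that the receiver adopts an argument whose
    adoption would shift her opinion in the given direction (+1 pro, -1 con)."""
    return 1.0 / (1.0 + math.exp(-beta * oi * direction))

def interact(sender, receiver, beta, M):
    """One ACM interaction step. Agents are dicts with 0/1 lists 'pro' and 'con'
    of length M; the sender articulates one believed argument (assumed here:
    chosen uniformly at random) and the receiver adopts it with the biased probability."""
    believed = [(kind, k) for kind in ("pro", "con")
                for k in range(M) if sender[kind][k]]
    if not believed:
        return
    kind, k = random.choice(believed)
    direction = 1 if kind == "pro" else -1
    oi = (sum(receiver["pro"]) - sum(receiver["con"])) / M   # receiver's current opinion
    if not receiver[kind][k] and random.random() < acceptance_prob(oi, direction, beta):
        receiver[kind][k] = 1   # the argument is new to the receiver and gets adopted
```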

From these rules, we can derive an OCF of the form Δoi=f(oi,oj) by considering (i) the probability that j chooses an argument with a certain direction and (ii) the probability that this argument is new to i (see Banisch and Shamon 2021 on the general approach):

Δoi = ((oj - oi) + (1 - oi·oj)·tanh(β·oi/2)) / (4M)

Notice that this is an approximation because the ACM is not reducible to the level of opinions. First, there are several combinations of pro and con arguments that give rise to the same opinion (e.g. an opinion of +1 is implied by 4 pro and 3 con arguments as well as by 1 pro and 0 con arguments). Second, the probability that j’s argument is new to i depends on the specific argument strings, and there is a tendency for these strings to become correlated over time. These correlations lead to memory effects that become visible in the long convergence times of ACMs (Mäs and Flache 2013, Banisch and Olbrich 2021, Banisch and Shamon 2021). The complete mathematical characterization of these effects is far from trivial and beyond the scope of this comment. However, they do not affect the qualitative picture presented here.
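The derived approximation can be written down directly as a function of the two opinions (a direct transcription of the equation above):

```python
import math

def acm_ocf(oi, oj, beta, M):
    """Approximate opinion change function of the ACM: an averaging term plus a
    bias term that vanishes for beta = 0 and grows with biased processing."""
    return ((oj - oi) + (1 - oi * oj) * math.tanh(beta * oi / 2)) / (4 * M)
```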

1. Argument models without bias are averaging.

With that OCF it becomes directly visible that it is incorrect to place the original ACM (without bias) within the class of reinforcement models. No bias means β=0, in which case we obtain:

Δoi = (oj - oi) / (4M)

That is, we obtain the typical positive influence OCF with μ = 1/(4M) shown on the left of Figure 2.

This may appear counter-intuitive (it did in the reviews) because the ACM by Mäs and Flache (2013) generates the ideal-typical pattern of bi-polarization in which two opinion camps approach the extreme ends of the opinion scale. But this macro effect is an effect of homophily and the associated changes in the social interaction structure. It is important to note that homophily does not transform an averaging OCF into a reinforcing one. When implemented as bounded confidence, it only cuts off certain regions by setting f(oi,oj)=0. Homophily is a social mechanism that acts at another layer, and its reinforcing effect in ACMs is conditional on the social configuration of the entire population. In the models, it generates biased argument pools in a way strongly reminiscent of Sunstein’s law of group polarization (2002). That given, the main result by Mäs and Flache (2013) (“differentiation without distancing”) is all the more remarkable! But it is at least misleading to associate it with models that implement reinforcement mechanisms (Martins 2008, Banisch and Olbrich 2019, Baumann et al. 2020).

2. Argument models with moderate bias are reinforcing.

It is only when biased processing is enabled that ACMs become what is called reinforcement models. This is clearly visible on the right of Figure 2, where a bias of β=2 has been used. If, in Figure 1, we accounted for the polarity effect that prevents opinions from leaving the interval [-1,1] (Lorenz et al. 2021), the match between the right-hand sides of Figures 1 and 2 would be even more remarkable.

This transition from averaging to reinforcement by biased processing shows that the characterization of models in terms of induced opinion changes (OCF) may be very useful and enables model comparison. Namely, at the macro scale, ACMs with moderate bias behave precisely as other reinforcement models. In a dense group, it will lead to what is called group polarization in psychology: the whole group collectively shifts to an extreme opinion at one side of the spectrum. On networks with communities, these radicalization processes may take different directions in different parts of the network and feature collective-level bi-polarization (Banisch and Olbrich 2019).

3. Argument models with strong bias may appear as negative influence.

Finally, when the β parameter becomes larger, the ACM leaves the regime of reinforcement models and features patterns that we would associate with negative influence. This is shown in the middle of Figure 2. Under strong biased processing, a leftist agent i with an opinion of (say) oi=-0.75 will shift further to the left when encountering a rightist agent j with an opinion of (say) oj=+0.5. Within the existing classes of models, such a pattern is only possible under negative influence. ACMs with biased processing offer a psychologically compelling alternative, and it is an important empirical question whether observed negative influence effects (Bail et al. 2018) are actually due to repulsive forces or due to cognitive biases in information reception.

The reader will notice that, when looking at the entire OCF in the space spanned by (oi,oj)∈[-1,1]², there are qualitative differences between the ACM and the OCF defined in Jager and Amblard (2005). The two mechanisms are different and imply different response functions (OCFs). But for some specific opinion pairs the two functions are hardly discernible, as shown in the next figure. The blue solid curve shows the OCF of the argument model for β=5 and an agent i interacting with a neutral agent j, i.e. f(oi,0). The ACM with biased processing is aligned with experimental design and entails a ceiling effect so that maximally positive (negative) agents cannot further increase (decrease) their opinion. To enable a fair comparison, we introduce the polarity effect used in Lorenz et al. (2021) into the negative influence OCF, ensuring that opinions remain within [-1,1]. That is, for the dashed red curve the factor (1 - oi²) (cf. Eq. 6 in Lorenz et al. 2021) is multiplied with the function from Jager and Amblard (2005) using u=0.2 and t=0.4. In this specific case, the shapes of the two OCFs are extremely similar. An experimental test would hardly distinguish the two.
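The near-coincidence of the two curves for a neutral interaction partner (oj=0) can be checked numerically with a few lines; as before, the piecewise repulsive rule is only an assumed reconstruction of Jager and Amblard (2005), and μ and M set the overall scale, so only the shapes are comparable.

```python
import math

def acm_neutral(oi, beta=5, M=4):
    """ACM opinion change when agent i meets a neutral agent (oj = 0)."""
    return (-oi + math.tanh(beta * oi / 2)) / (4 * M)

def repulsive_neutral(oi, mu=0.3, u=0.2, t=0.4):
    """Assumed piecewise negative-influence rule times the polarity factor (1 - oi^2)."""
    d = abs(oi)
    change = -mu * oi if d < u else (mu * oi if d > t else 0.0)
    return (1 - oi ** 2) * change

# Tabulate both curves over the opinion scale to compare their shapes.
for x in range(-10, 11):
    oi = x / 10
    print(f"oi={oi:+.1f}  ACM: {acm_neutral(oi):+.4f}  repulsive: {repulsive_neutral(oi):+.4f}")
```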

Macroscopically, strong biased processing leads to collective bi-polarization even in the absence of homophily (Banisch and Shamon 2021). This insight has been particularly puzzling and mind-boggling to some of the referees. But the reason this happens is precisely that ACMs with biased processing may lead to negative-influence opinion change phenomena. This indicates, among other things, that one should be very careful about drawing collective-level conclusions, such as a depolarizing effect of filter bubbles, from empirical signatures of negative influence (Bail et al. 2018). While their argumentation seems at least puzzling on the grounds of “classical” negative influence models (Mäs and Bischofberger 2015, Keijzer and Mäs 2022), it could be clearly rejected if the empirical negative influence effects are attributed to the cognitive mechanism of biased processing. In ACMs, homophily generally enhances polarization tendencies (Banisch and Shamon 2021).

What to take from here?

Opinion dynamics is at a challenging stage! We have problems with empirical validation (Sobkowicz 2009, Flache et al. 2017) but seem not to sufficiently acknowledge those who advance the field in that direction (Chattoe-Brown 2022, Keijzer 2022, Carpentras 2023). It is greatly thanks to the RofASSS forum that these deficits have become visible. Against that background, this comment is written as a critical one, because developing models with a tight connection to empirical data does not always fit with the core model classes derived from research with a theoretical focus.

The prolonged review process for Banisch and Shamon (2021) — strongly reminiscent of the patterns described by Carpentras (2023) — revealed that there is a certain preference in the community to draw on models building on “opinions” as the smallest, atomic analytical unit. This is very problematic for opinion models that take cognitive mechanisms and complexity duly into account. Moreover, we barely see “opinions” in empirical measurements, but rather observe argumentative statements and associations articulated on the web and elsewhere. In my view, we have to acknowledge that opinion dynamics is a field that cannot isolate itself from psychology and cognitive science, because intra-individual mechanisms of opinion change are at the core of all our models. And just as new phenomena may emerge as we go from individuals to groups or populations, surprises may happen when a cognitive layer of beliefs, arguments, and their associations lies underneath. We can treat these emergent effects as mere artifacts of expendable cognitive detail, or we can truly embrace the richness of opinion dynamics as a field spanning multiple levels from cognition to macro social phenomena.

On the other hand, the analysis of the OCF “emerging” from argument exchange also points back to the atomic layer of opinions as a useful reference for model comparisons and synthesis. Specific patterns of opinion updates emerge in any opinion dynamics model however complicated its rules and their implementation might be. For understanding macro effects, more complicated psychological mechanisms may be truly relevant only in so far as they imply qualitatively different OCFs. The functional form of OCFs may serve as an anchor of reference for “model translations” allowing us to better understand the role of cognitive complexity in opinion dynamics models.

What this research comment — whose title is clearly an overstatement — also aims to show is that modeling grounded in psychology and cognitive science does not automatically mean we leave behind the principle of parsimony. The ACM with biased processing has only a single effective parameter (β) but is rich enough to span three very different classes of models. It is averaging if β=0, it behaves like a reinforcement model with moderate bias (β=2), and it may look like negative influence for larger values of β. For me, this provides part of an explanation for the misunderstandings that we experienced in the review process for Banisch and Shamon (2021). It is simply inappropriate to talk about ACMs with biased processing within the categories of “classical” models of assimilation, repulsion, and reinforcement. So the review process has been insightful, and I am very grateful that traditional journals afford such productive spaces of scientific discourse. My main take-home from this whole enterprise is that the current language calls for caution not to conflate opinion change phenomena with opinion change mechanisms.

Acknowledgements

I am grateful to the Sociology and Computational Social Science group at KIT  — Michael Mäs, Fabio Sartori, and Andreas Reitenbach — for their feedback on a preliminary version of this commentary. I also thank Dino Carpentras for his preliminary reading.

This comment would not have been written without the three anonymous referees at Sociological Methods and Research.

References

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4), 2. http://jasss.soc.surrey.ac.uk/20/4/2.html. DOI:10.18564/jasss.3521

Lorenz, J., Neumann, M., & Schröder, T. (2021). Individual attitude change and societal dynamics: Computational experiments with psychological theories. Psychological Review, 128(4), 623. https://psycnet.apa.org/doi/10.1037/rev0000291

Keijzer, M. A., & Mäs, M. (2022). The complex link between filter bubbles and opinion polarization. Data Science, 5(2), 139-166. DOI:10.3233/DS-220054

French Jr, J. R. (1956). A formal theory of social power. Psychological Review, 63(3), 181. DOI:10.1037/h0046123

Friedkin, N. E., & Johnsen, E. C. (2011). Social influence network theory: A sociological examination of small group dynamics (Vol. 33). Cambridge University Press.

Jager, W., & Amblard, F. (2005). Uniformity, bipolarization and pluriformity captured as generic stylized behavior with an agent-based simulation model of attitude change. Computational & Mathematical Organization Theory, 10, 295-303. https://link.springer.com/article/10.1007/s10588-005-6282-2

Flache, A., & Macy, M. W. (2011). Small Worlds and Cultural Polarization. Journal of Mathematical Sociology, 35, 146-176. https://doi.org/10.1080/0022250X.2010.532261

Martins, A. C. (2008). Continuous opinions and discrete actions in opinion dynamics problems. International Journal of Modern Physics C, 19(04), 617-624. https://doi.org/10.1142/S0129183108012339

Banisch, S., & Olbrich, E. (2019). Opinion polarization by learning from social feedback. The Journal of Mathematical Sociology, 43(2), 76-103. https://doi.org/10.1080/0022250X.2018.1517761

Baumann, F., Lorenz-Spreen, P., Sokolov, I. M., & Starnini, M. (2020). Modeling echo chambers and polarization dynamics in social networks. Physical Review Letters, 124(4), 048301. https://doi.org/10.1103/PhysRevLett.124.048301

Carpentras, D. (2023). Why we are failing at connecting opinion dynamics to the empirical world. 8th March 2023. https://rofasss.org/2023/03/08/od-emprics/

Banisch, S., & Shamon, H. (2021). Biased Processing and Opinion Polarisation: Experimental Refinement of Argument Communication Theory in the Context of the Energy Debate. Available at SSRN 3895117. The most recent version is available as an arXiv preprint arXiv:2212.10117.

Carpentras, D. (2021). Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/

Mäs, M., & Flache, A. (2013). Differentiation without distancing. Explaining bi-polarization of opinions without negative influence. PloS One, 8(11), e74516. https://doi.org/10.1371/journal.pone.0074516

Feliciani, T., Flache, A., & Mäs, M. (2021). Persuasion without polarization? Modelling persuasive argument communication in teams with strong faultlines. Computational and Mathematical Organization Theory, 27, 61-92. https://link.springer.com/article/10.1007/s10588-020-09315-8

Banisch, S., & Olbrich, E. (2021). An Argument Communication Model of Polarization and Ideological Alignment. Journal of Artificial Societies and Social Simulation, 24(1). https://www.jasss.org/24/1/1.html. DOI: 10.18564/jasss.4434

Mäs, M. (2021). Interactions. In Research Handbook on Analytical Sociology (pp. 204-219). Edward Elgar Publishing.

Lopez-Pintado, D., & Watts, D. J. (2008). Social influence, binary decisions and collective dynamics. Rationality and Society, 20(4), 399-443. https://doi.org/10.1177/1043463108096787

Deffuant, G., Neau, D., Amblard, F., & Weisbuch, G. (2000). Mixing beliefs among interacting agents. Advances in Complex Systems, 3(01n04), 87-98.

Hegselmann, R., & Krause, U. (2002). Opinion Dynamics and Bounded Confidence Models, Analysis and Simulation. Journal of Artificial Societies and Social Simulation, 5(3),2. https://jasss.soc.surrey.ac.uk/5/3/2.html

Sunstein, C. R. (2002). The Law of Group Polarization. The Journal of Political Philosophy, 10(2), 175-195. https://dx.doi.org/10.2139/ssrn.199668

Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. F., … & Volfovsky, A. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216-9221. https://doi.org/10.1073/pnas.1804840115

Mäs, M., & Bischofberger, L. (2015). Will the personalization of online social networks foster opinion polarization? Available at SSRN 2553436. https://dx.doi.org/10.2139/ssrn.2553436

Sobkowicz, P. (2009). Modelling opinion formation with physics tools: Call for closer link with reality. Journal of Artificial Societies and Social Simulation, 12(1), 11. https://www.jasss.org/12/1/11.html

Chattoe-Brown, E. (2022). If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1 Feb 2022. https://rofasss.org/2022/02/01/citing-od-models/

Keijzer, M. (2022). If you want to be cited, calibrate your agent-based model: a reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation.  9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown


Banisch, S. (2023) “One mechanism to rule them all!” A critical comment on an emerging categorization in opinion dynamics. Review of Artificial Societies and Social Simulation, 26 Apr 2023. https://rofasss.org/2023/04/26/onemechanism


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Teaching highly intelligent primary school kids energy system complexity

By Emile Chappin

An energy system complexity lecture for kids?

I was invited to open the ‘energy theme’ at a primary school with a lecture on energy and wanted to give it a complexity and modelling flavour. And I wondered… can you teach this to a large group of 7-to-12-year-old children, all highly intelligent but so far apart in their development? What works in this setting, and what doesn’t? How long should I make such a lecture? Can I explain and let them feel what research is? Can I do some experiments? Can I share what modelling is? What concepts should I include? What are such kids interested in? What do they know? What would they expect? Many of these questions haunted me for some time, and I thought it would be nice to share my observations from simply going for it.

I outline my learning goals, observations from the first few minutes, approach, some later observations, and main takeaways. I end with a plea for teaching social simulation at primary schools. This initiative is part of the Special Interest Group on Education (http://www.essa.eu.org/sig/sig-education/) of the European Social Simulation Association.

Learning goals

I wanted to provide the following insights to these kids:

  • Energy is everywhere; you can feel, hear, and see it all around you. Even from outer space, you can see where cities are when you look at the earth. All activities you do require some form of energy.
  • Energy comes in different forms and can be converted into other forms.
  • Everyone likes to play games, and we can use games even to do research and perform experiments.
  • Doing research/being a researcher involves asking (sometimes odd) questions, looking very carefully at things, studying how the world works and why and solving problems.
  • You can use computers to perform social simulations that help us think. Not necessarily to answer questions but as tools that help us think about the world, do many experiments and study their implications.

First observations

It is easy to notice that this is a rather ambitious plan. Nevertheless, I learnt very quickly that these kids knew a lot! And that they (may) question everything from every angle. They are keen to understand and eager to share what they know. I was happy I could connect with them quickly by helping them get seated, chit chatting before the start.

My approach

I used symbols/analogies to explain deep concepts and layered the meaning, deepening the understanding layer by layer. I came back to and connected all these layers. This enables kids from different age groups to understand the lecture on their level. An example is that I mentioned early on how, as a kid, I was interested in black holes. I explained that black holes were discovered by thinking carefully about how the universe works and that theoretical physicists concluded there might be something like a black hole. It was decades later before a real black hole was photographed. The fact that you can imagine and reason that something may exist that you cannot (yet) observe… and that much later it is proven to exist: this is what research can be; it is incredible how this happened. Much later in the talk, I connected this to how you can use the computer to imagine, dream up, and test ideas because, in many cases, it is tough to do in real life.

I asked many questions and listened carefully to the answers. Some answers are way off-topic, and it is essential to guide these kids enough so the story continues, but at the same time, the kids stay on board. An early question was… do you like to play games? It is so lovely to have a group of kids cheering that they want to play games! It provides a connection. Another question I asked was, what is the similarity between a wind turbine and a sheep? Kids laughed at the funny question and picture but also came up with the desired answer (they both need/convert energy). Other creative solutions were that the colours were similar, and the shape had similarities. These are fun answers and also correct!

Because of these questions, kids came up with many great insights and good observations. This was astonishing. Research is looking at something carefully, like a snail. A black hole comes from a collapsing star, and our sun will collapse at some point in time. One kid knew that the object I brought was a kazoo… so I invited him to try imitating the sound of Max Verstappen’s Formula One car. And, of course, I had a few more kazoos, so we made a few reasonable attempts. I went back 5+ times during the next hour to some of these kids’ great remarks: it helped to stay connected with the kids.

I played the ‘heroes and cowards’ game (similar to the ‘heroes and cowards’ model from the NetLogo library). This was a game as well as an experiment. I announced that it only works if we all follow the rules carefully. I made the kids silently think about what would happen. It worked reasonably well: they could observe the emergent phenomenon of the group clustering and exploding, although it got somewhat rough.
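For readers who do not know the model: the rule set behind the game is tiny. The sketch below is a rough Python paraphrase (parameter values and the exact movement rule are illustrative assumptions; the NetLogo library version may differ in details).

```python
import random

N, STEP, TICKS = 30, 0.05, 200

class Kid:
    def __init__(self):
        self.x, self.y = random.random(), random.random()
        self.brave = random.random() < 0.5          # hero or coward
        self.friend = self.enemy = None

    def move(self):
        fx, fy = self.friend.x, self.friend.y
        ex, ey = self.enemy.x, self.enemy.y
        if self.brave:
            tx, ty = (fx + ex) / 2, (fy + ey) / 2                  # hero: stand between friend and enemy
        else:
            tx, ty = fx + (fx - ex) / 2, fy + (fy - ey) / 2        # coward: hide behind the friend
        dx, dy = tx - self.x, ty - self.y
        dist = (dx * dx + dy * dy) ** 0.5 or 1.0
        self.x += STEP * dx / dist
        self.y += STEP * dy / dist

kids = [Kid() for _ in range(N)]
for kid in kids:
    kid.friend, kid.enemy = random.sample([k for k in kids if k is not kid], 2)

for _ in range(TICKS):
    for kid in kids:
        kid.move()    # clustering or explosion of the group emerges from these two rules
```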

A fantastic moment was explaining the concept of validity to young kids simply by letting them experience it. I stressed that following the rules was crucial for our experiment to be valid and that stumbling and running were problematic for our outcomes. It was amazing that this landed so well; I was fortunate that the circumstances allowed it.

After playing this game a couple of times, with hilarious moments and chaos, I showed how you could replicate what happened in a simulation in NetLogo. I showed that you could repeat it rapidly and do variations that would be hard to do with the kids. I even showed the essential piece of code. And I remarked that the kids on the computer did listen better to me.

Later observations

We planned to take 60 minutes, observe how far we could go, and adapt. I noticed I could stretch it to 75 minutes, far longer than I thought was possible. I used less material than I thought I would use for 60 minutes. I started relatively slow and with a personal touch. I was happy I had flexible material and could adapt to what the kids shared. I used my intuition and picked up objects that were around that I could use to tell the story.

Some sweet things happened. When I first arrived, one kid was playing the piano in the general area. He played with much passion, small but intense. I said in the lecture that I had heard him play and that I was also into music. Raised hand: ‘Will you play something for us at the end?’ Of course, I promised this! During the lecture… I repeatedly promised I would; the question came back many times. I played a song the young piano player liked to hear.

These children were very open and direct. I had expected that but was still surprised by how honest and straightforward. ‘Ow, now I lost my question, this happens to me all the time’. I said: do you know I also have this quite often? It is perfectly normal. It doesn’t matter. If the question comes back, you can raise your hand again. If it doesn’t, then that is also just fine.

My takeaways

  • Do fun things, even if it is not perfectly connected. It helps with the attention span and provides a connection. Using humour helps us all to be open to learning.
  • Ask many questions, and use your intuition when asking questions. Listen to the answers, remember important ones (and who gave them), and refer back to them. If something is off-topic, you can ‘park’ that question and remark or answer it politely without dismissing it.
  • Act things out very dramatically. I acted very brave and very cowardly when introducing the game. I used two kids to show the rules and kept referring to them using their names.
  • Don’t overprepare but make the lecture flexible. Where can you expand? What do you need to do to make the connection, to make it stick?
  • I was happy that the class teachers helped me by asking a crucial question at the end, allowing me to close a couple of circles. Keep the teacher active and involved in the lecture. Invite them beforehand to do so.
  • A helpful hint I received afterwards was to use a whiteboard (or something similar) to develop a visual record of concepts and keywords raised by the kids, e.g., in the form of a mind map.
  • Kids keep surprising you all the way. One asked about NetLogo: ‘Can you install this software on Windows 8?’ It is free. You can try it out yourselves, I said. ‘Can you upgrade Windows 8 to Windows 10?’ Well, that depends on your computer, I said. These kids keep surprising you!
  • You can teach complexity, emergence, and agent-based modelling without using those words. But if kids use a term, acknowledge it. In this case: ‘But with AI….’ This is AI. It is worth exploring how to reach and teach children crucial complexity insights at a young age.

Teaching social simulation in primary schools

My plea is that it is worth the effort to inspire children at a young age with crucial insights into what research is, into complexity, and into using social simulation. In this specific lecture, I only briefly touched on the use of social simulation (right at the end). It is a fantastic gift to help someone see complexity unfold before their eyes and to catch a glimpse of the tools that show the ingredients of this complexity. And it is a relatively small step towards unravelling social behaviour through social simulations. I’m tempted to conclude that you could teach young children a basic understanding of social simulation with relatively small educational modules. Even if it is implicit, through games and examples, such modules may work effectively if placed carefully in the social environment that the different age groups typically face. Showing social structures emerging from behavioural rules. Illustrating different patterns emerging due to stochasticity and changes in assumptions. Dreaming up basic (but distinct) codified decision rules about actual (social) behaviour you see around you. If this becomes an immersive experience, such educational modules have the potential to contribute to an intuitive understanding of what social simulations are and how they can be used. Children may be inspired to learn to see and understand emergent phenomena around them from an early age; they may become the thinkers of tomorrow. And for the kids I met on this occasion: I’d be amazed if none of them became researchers one day. I hope that if you get the chance, you also give it a go and share your experience! I’m keen to hear and learn!


Chappin, E. (2023) Teaching highly intelligent primary school kids energy system complexity. Review of Artificial Societies and Social Simulation, 19 Apr 2023. https://rofasss.org/2023/04/19/teachcomplex


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

The Challenge of Validation

By Martin Neumann

Introduction

In November 2021, Chattoe-Brown initiated a discussion on the SimSoc list about validation which generated quite some traffic. The interest in this topic revealed that empirical validation remains a notorious challenge for agent-based modelling. The discussion raised many important points and questions, which even motivated a “smalltalk about big things” at the Social Simulation Fest 2022. Many contributors highlighted that validation cannot be reduced to the comparison of numbers between simulated and empirical data. Without attempting a comprehensive review of this insightful discussion, it has been emphasized that different kinds of science call for different kinds of quality criteria. Prediction might be one criterion that is particularly important in statistics, but it is not sufficient for agent-based social simulation. For instance, agent-based modelling is specifically suited for studying complex systems and turbulent phenomena. Modelling also enables studying alternative and counterfactual scenarios, which deviates from the paradigm of prediction as a quality criterion. Besides output validation, other quality criteria for agent-based models include, for instance, input validation or process validation, reflecting the realism of the initialization and of the mechanisms implemented in the model.

Qualitative validation procedures

This brief introduction is by no means an exhaustive summary of the broad discussion on validation. Already the measurement of empirical data can be put into question. Less discussed, however, has been the role that qualitative methods could play in this endeavor. In fact, there has been a long debate on this issue in the community of qualitative social research as well. Like agent-based social simulation, qualitative methods are also challenged by the notion of validation. It has been noted that the very vocabulary used in attempts to ensure scientific rigor has a background in a positivist understanding of science, whereas qualitative researchers often take up constructivist or poststructuralist positions (Cho and Trent 2006). For this reason, qualitative research sometimes prefers the notion of trustworthiness (Lincoln and Guba 1985) over that of validation. In an influential article (cited more than 17,000 times on Google Scholar as of May 2023), Creswell and Miller (2000) distinguish between a postpositivist, a constructivist, and a critical paradigm, as well as between the lens of the researcher, the lens of the study participants, and the lens of external people, and assign different validity procedures for qualitative research to the combinations of these paradigms and lenses (see Table 1).

Paradigm / lens            | Postpositivist  | Constructivist           | Critical
Lens of researcher         | Triangulation   | Disconfirming evidence   | Reflexivity
Lens of study participants | Member checking | Engagement in the field  | Collaboration
Lens of external people    | Audit trail     | Thick description        | Peer debriefing
Table 1. Validity procedures according to Creswell and Miller (2000).

While it remains contested whether the validation procedure depends on the research design, this is at least one source of different accounts. Others differentiate between transactional and transformational validity (Cho and Trent 2006). The former concentrates on formal techniques in the research process for avoiding misunderstandings; such procedures include, for instance, member checking. The latter account perceives research as an emancipatory process on behalf of the research subjects. This goes along with questioning the notion of absolute truth in the domain of the human sciences, which calls for alternative sources of scientific legitimacy such as the emancipation of the researched subjects. This concept of emancipatory research resonates with participatory modelling approaches. In fact, some of these procedures are well known in participatory modelling, even though the terminology differs. The participatory approach originates from research on resource management (Pahl-Wostl 2002). For this purpose, integrated assessment models have been developed, inspired by the concept of post-normal science (Funtowicz and Ravetz 1993). Post-normal science emphasizes the communication of uncertainty, the justification of practice, and complexity. This approach recognizes the legitimacy of multiple perspectives on an issue, both those of multiple scientific disciplines and those of the laypeople involved in the issue. For instance, Wynne (1992) analyzed the knowledge claims of sheep farmers in their interactions with scientists and authorities. In such an extended peer community of a citizen science (Stilgoe 2009), laypeople from the affected communities play an active role in knowledge production, not only because of moral principles of fairness but to increase the quality of science (Fjelland 2016). One of the best-known participatory approaches is so-called companion modelling (ComMod), developed at CIRAD, a French agricultural research center for international development. The term companion modelling was originally coined by Barreteau et al. (2003) and has since been developed into a research paradigm for decision making in complex situations to support sustainable development (Étienne 2014). These approaches have a strong emancipatory component and rely on collaboration and member checking to ensure the resonance and practicality of the models (Tesfatsion 2021).

An interpretive validation procedure

While the participatory approaches show a convergence of methods between modelling and qualitative research, even if the terminology differs, the following introduces a further approach for examining the trustworthiness of simulation scenarios that has not been considered so far: interpretive methods from qualitative research. A strong feature of agent-based modelling is that it allows for studying “what-if” questions. The ex-ante investigation of possible alternative futures enables identifying possible courses of action as well as detecting early warning signals of undesired developments. For this purpose, counterfactual scenarios are an important feature of agent-based modelling. It is important to note in this context that counterfactuals do not match empirical data. In the following, it is suggested to examine the trustworthiness of counterfactual scenarios by using a method from objective hermeneutics (Oevermann 2002), the so-called sequence analysis (Kurt and Herbrik 2014). In terms of Creswell and Miller (2000), this examination of trustworthiness is undertaken from the lens of the researcher within a constructivist paradigm. For this purpose, simulation results have to be transformed into narrative scenarios, a method described in Lotzmann and Neumann (2017).

In the social sciences, sequence analysis is regarded as the central instrument of the hermeneutic interpretation of meaning. It is “a method of interpretation that attempts to reconstruct the meaning of any kind of human action sequence by sequence, i.e. sense unit by sense unit […]. Sequence analysis is guided by the assumption that in the succession of actions […] contexts of meaning are realized …” (Kurt and Herbrik 2014: 281). A first important rule is the sequential procedure: the interpretation takes place in the sequence that the protocol to be analyzed itself specifies. It is assumed that each sequence point closes possibilities on the one hand and opens new possibilities on the other. In practice, this is done by sketching a series of stories in which the respective sequence passage would make sense. The basic question that can be asked of each sequence passage can be summarized as: “Consider who might have addressed this utterance to whom, under what conditions, with what justification, and what purpose?” (Schneider 1995: 140). The answers to these questions are the thought-experimentally designed stories. These stories are examined for commonalities and differences and condensed into readings. Through the generation of readings, certain possibilities of connection to the interpreted sequence passage become visible at the same time. In this sense, each step of interpretation sequentially makes some spaces of possibility visible and at the same time closes others.

In the following, it will be argued that this method enables an examination of the trustworthiness of counterfactual scenarios, using the example of a counterfactual simulation scenario of a successful non-violent conflict regulation within a criminal group: ‘They had a meeting at their lawyer’s office to assess the value of his investment, and Achim complied with the request. Thus, trust was restored, and the group continued their criminal activities’ (names are fictitious). Following Dickel and Neumann (2021), it is argued that this is a meaningful story. It is an example of how the linking of the algorithmic rules generates something new from the individual parts of the empirical material. However, it also shows how the individual pieces of the puzzle of the empirical data material are put together to form a collage that tells a story that makes sense. A sequence that can be interpreted in a meaningful way is produced. It should be noted, however, that this is a counterfactual sequence. In fact, a significantly different sequence is found in the empirical data: ‘Achim was ordered to his lawyer’s office. Instead of his lawyer, however, Toby and three thugs were waiting for him. They forced him to his knees and pointed a machine gun at his stomach.’ This was by no means a non-violent form of conflict regulation. However, after Achim (in the real case) was forced to his knees by three thugs and threatened with a machine gun, the way to non-violent conflict regulation was hardly open any more. The sequence generated by the simulation, on the other hand, shows how the violence could have been avoided – a way that was not taken in reality. Is this a programming error in the modelling? On the contrary, it is argued that it demonstrates the trustworthiness of the counterfactual scenario. From a methodological point of view, a comparison of the factual with the counterfactual is instructive: factually, Achim had a machine gun pointed at his stomach; counterfactually, Achim agreed on a settlement. From a sequence-analytic perspective, the latter is a logical conclusion to a story, even if it does not apply to the factual course of events. Thus, the sequence analysis shows that the simulation has decided between two possibilities, a path branching in which certain possibilities open and others close.

The trustworthiness of a counterfactual narrative is shown by (1) whether a meaningful case structure can be generated at all, or whether the narrative reveals itself as an absurd series of sequence passages from which no rules of action can be reconstructed, and (2) whether the case structure withstands a confrontation with the ‘external context’ and can be interpreted as a plausible structural variation. If both are given, scenarios can be read as explorations of a space of cultural possibilities, or of a cultural horizon (in this case: a specific criminal milieu). Thereby the interpretation of the counterfactual scenario provides a means for assessing the trustworthiness of the simulation.

References

Barreteau, O., et al. (2003). Our companion modelling approach. Journal of Artificial Societies and Social Simulation 6(2): 1. https://www.jasss.org/6/2/1.html

Cho, J., Trent, A. (2006). Validity in qualitative research revisited. Qualitative Research 6(3), 319-340. https://doi.org/10.1177/1468794106065006

Creswell, J., Miller, D. (2000). Determining validity in qualitative research. Theory into Practice 39(3), 124-130. https://doi.org/10.1207/s15430421tip3903_2

Dickel, S., Neumann, M. (2021). Hermeneutik sozialer Simulationen: Zur Interpretation digital erzeugter Narrative. Sozialer Sinn 22(2): 252-287. https://doi.org/10.1515/sosi-2021-0013

Étienne, M. (Ed.) (2014). Companion Modelling: A Participatory Approach to Support Sustainable Development. Springer, Dordrecht. https://link.springer.com/book/10.1007/978-94-017-8557-0

Fjelland, R. (2016). When Laypeople are Right and Experts are Wrong: Lessons from Love Canal. International Journal for Philosophy of Chemistry 22(1): 105–125. https://www.hyle.org/journal/issues/22-1/fjelland.pdf

Funtowicz, S., Ravetz, J. (1993). Science for the post-normal age. Futures 25(7): 739-755. https://doi.org/10.1016/0016-3287(93)90022-L

Kurt, R.; Herbrik, R. (2014). Sozialwissenschaftliche Hermeneutik und hermeneutische Wissenssoziologie. In: Baur, N.; Blasius, J. (eds.): Handbuch Methoden der empirischen Sozialforschung, pp. 473–491. Springer VS, Wiesbaden. https://link.springer.com/chapter/10.1007/978-3-658-21308-4_37

Lotzmann, U., Neumann, M. (2017). Simulation for interpretation. A methodology for growing virtual cultures. Journal of Artificial Societies and Social Simulation 20(3): 13. https://www.jasss.org/20/3/13.html

Lincoln, Y.S., Guba, E.G. (1985). Naturalistic Inquiry. Sage, Beverly Hills.

Oevermann, U. (2002). Klinische Soziologie auf der Basis der Methodologie der objektiven Hermeneutik. Manifest der objektiv hermeneutischen Sozialforschung. http://www.ihsk.de/publikationen/Ulrich_Oevermann-Manifest_der_objektiv_hermeneutischen_Sozialforschung.pdf (downloaded 1 March 2020).

Pahl-Wostl, C. (2002). Participative and Stakeholder-Based Policy Design, Evaluation and Modeling Processes. Integrated Assessment 3(1): 3-14. https://doi.org/10.1076/iaij.3.1.3.7409

Schneider, W. L. (1995). Objektive Hermeneutik als Forschungsmethode der Systemtheorie. Soziale Systeme 1(1): 135–158.

Stilgoe, J. (2009). Citizen Scientists: Reconnecting Science with Civil Society. Demos, London.

Tesfatsion, L. (2021). “Agent-Based Modeling: The Right Mathematics for Social Science?,” Keynote address, 16th Annual Social Simulation Conference (virtual), sponsored by the European Social Simulation Association (ESSA), September 20-24, 2021.

Wynne, B. (1992). Misunderstood misunderstanding: social identities and public uptake of science. Public Understanding of Science 1(3): 281–304.


Neumann, M. (2023) The Challenge of Validation. Review of Artificial Societies and Social Simulation, 18th Apr 2023. https://rofasss.org/2023/04/18/ChallengeValidation


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Yes, but what did they actually do? Review of: Jill Lepore (2020) “If Then: How One Data Company Invented the Future”

By Nick Gotts

ngotts@gn.apc.org

Jill Lepore (2020) If Then: How One Data Company Invented the Future. John Murray. ISBN: 978-1-529-38617-2 (2021 pbk edition).

This is a most frustrating book. The company referred to in the subtitle is the Simulmatics Corporation, which collected and analysed data on public attitudes for politicians, retailers and the US Department of Defence between 1959 and 1970. Lepore says it carried out “simulation”, but is never very clear about what “simulation” meant to the founders of Simulmatics, what algorithms were involved, or how these algorithms used data. The history of Simulmatics is narrated along with that of US politics and the Vietnam War during its period of operation; the company worked for John Kennedy’s presidential campaign in 1960, although the campaign was shy about admitting this. There is much of interest in this historical context, but the book is marred by the apparent limitations of Lepore’s technical knowledge, her prejudices against the social and behavioural sciences (and in particular the use of computers within them), and irritating “tics” such as the frequent repetition of “If/Then”. There are copious notes, and an index, but no bibliography.

Lepore insists that human behaviour is not predictable, whereas both everyday observation and the academic study of the human sciences and history show that, on both individual and collective levels, it is partially predictable – if it were not, social life would be impossible – and partially unpredictable. She also claims that there is a general repudiation of the importance of history among social and behavioural scientists and in “Silicon Valley”, and seems unaware that many historians and other humanities researchers use mathematics and even computers in their work.

Information about Simulmatics’ uses of computers is in fact available from contemporary documents which its researchers published. In the case of Kennedy’s presidential campaign (de Sola Pool and Abelson 1961, de Sola Pool 1963), the “simulation” involved was the construction of synthetic populations in order to amalgamate polling data from past (1952, 1954, 1956, 1958) American election campaigns. Americans were divided into 480 demographically defined “voter types” (e.g. “Eastern, metropolitan, lower-income, white, Catholic, female Democrats”), and the favourable/unfavourable/neither polling responses of members of these types to 52 specific “issues” (examples given include civil rights, anti-Communism, anti-Catholicism, foreign aid) were tabulated. Attempts were then made to “simulate” 32 of the USA’s 50 states by calculating the proportions of the 480 types in those states and assuming the frequency of responses within a voter type would be the same across states. This produced a ranking of how well Kennedy could be expected to do across these states, which matched the final results quite well. On top of this work an attempt was made to assess the impact of Kennedy’s Catholicism if it became an important issue in the election, but this required additional assumptions on how members of nine groups cross-classified by political and religious allegiance would respond. It is not clear that Kennedy’s campaign actually made any use of Simulmatics’ work, and there is no sense in which political dynamics were simulated. By contrast, in later Simulmatics work not dealt with by Lepore, on local referendum campaigns about water fluoridation (Abelson and Bernstein 1963), an approach very similar to current work in agent-based modelling was adopted. Agents based on the anonymised survey responses of individuals both responded to external messaging, and interacted with each other, to produce a dynamically simulated referendum campaign. It is unclear why Lepore does not cover this very interesting work. She does cover Simulmatics’ involvement in the Vietnam War, where their staff interviewed Vietnamese civilians and supposed “defectors” from the National Liberation Front of South Vietnam (“Viet Cong”) – who may in fact simply have gone back to their insurgent activity afterwards; but this work does not appear to have used computers for anything more than data storage.
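
To make the amalgamation procedure concrete, here is a minimal sketch of the kind of calculation described above. It is emphatically not Simulmatics’ own code: the voter-type labels, the response rates, and the state compositions are invented for illustration. The one substantive assumption it encodes is the same one the original work made, namely that the favourable-response rate of a voter type is constant across states, so that state-level support can be estimated as a proportion-weighted average over voter types.

```python
# Hypothetical sketch of a Simulmatics-style voter-type amalgamation.
# All labels and numbers below are illustrative, not taken from the 1960 study.
from typing import Dict, Tuple

# Favourable-response rate per (voter_type, issue), pooled from past polling data.
response_rate: Dict[Tuple[str, str], float] = {
    ("eastern_metro_catholic_dem", "civil_rights"): 0.62,
    ("southern_rural_protestant_dem", "civil_rights"): 0.31,
    # ... the original scheme had 480 voter types and 52 issues
}

# Share of each voter type in each state's electorate.
state_type_share: Dict[str, Dict[str, float]] = {
    "NY": {"eastern_metro_catholic_dem": 0.08, "southern_rural_protestant_dem": 0.01},
    "GA": {"eastern_metro_catholic_dem": 0.01, "southern_rural_protestant_dem": 0.09},
}

def expected_support(state: str, issue: str) -> float:
    """Proportion-weighted favourable-response rate for one state and issue,
    assuming within-type rates do not vary across states."""
    shares = state_type_share[state]
    covered = {vt: s for vt, s in shares.items() if (vt, issue) in response_rate}
    total = sum(covered.values())
    if total == 0.0:
        return 0.0
    return sum(s * response_rate[(vt, issue)] for vt, s in covered.items()) / total

# Rank states by expected favourability on one issue, analogous to ranking
# Kennedy's prospects across the 32 simulated states.
ranking = sorted(state_type_share, key=lambda s: expected_support(s, "civil_rights"),
                 reverse=True)
print(ranking)  # ['NY', 'GA'] with these illustrative numbers
```

Note that nothing in this calculation is dynamic; it is a re-weighting of static poll data. The later Abelson and Bernstein (1963) referendum model is the step towards simulation proper, with agents responding to campaign messages and to one another over simulated time.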

In its work on American national elections (which continued through 1964) Simulmatics appears to have wildly over-promised given the data that it would have had available, subsequently under-performed, and failed as a company as a result; from this, indeed, today’s social simulators might take warning. Its leaders started out as “liberals” in American terms, but appear to have retained the colonialist mentality generally accompanying this self-identification, and fell into and contributed to the delusions of American involvement in the Vietnam War – although it is doubtful whether the history of this involvement would have been significantly different if the company had never existed. The fact that Simulmatics was largely forgotten, as Lepore recounts, hints that it was not, in fact, particularly influential, although interesting as the venue of early attempts at data analytics of the kind which may indeed now threaten what there is of democracy under capitalism (by enabling the “microtargeting” of specific lies to specific portions of the electorate), and at agent-based simulation of political dynamics. From a personal point of view, I am grateful to Lepore for drawing my attention to contemporary papers which contain far more useful information than her book about the early use of computers in the social sciences.

References

Abelson, R.P. and Bernstein, A. (1963) A Computer Simulation Model of Community Referendum Controversies. The Public Opinion Quarterly Vol. 27, No. 1 (Spring, 1963), pp. 93-122. Stable URL https://www.jstor.org/stable/2747294.

de Sola Pool, I. (1963) AUTOMATION: New Tool For Decision Makers. Challenge Vol. 11, No. 6 (March 1963), pp. 26-27. Stable URL https://www.jstor.org/stable/40718664.

de Sola Pool, I. and Abelson, R.P. (1961) The Simulmatics Project. The Public Opinion Quarterly, Vol. 25, No. 2 (Summer, 1961), pp. 167-183. Stable URL https://www.jstor.org/stable/2746702.


Gotts, N. (2023) Yes, but what did they actually do? Review of: Jill Lepore (2020) "If Then: How One Data Company Invented the Future". Review of Artificial Societies and Social Simulation, 9 Mar 2023. https://rofasss.org/2023/03/09/ReviewofJillLepore


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Why we are failing at connecting opinion dynamics to the empirical world

By Dino Carpentras

ETH Zürich – Department of Humanities, Social and Political Sciences (GESS)

The big mystery

Opinion dynamics (OD) is a field dedicated to studying the dynamic evolution of opinions, and it currently faces some extremely cryptic mysteries. Since 2009 there have been multiple calls for OD models to be strongly grounded in empirical data (Castellano et al., 2009; Valori et al., 2012; Flache et al., 2017; Dong et al., 2018); however, the number of articles moving in this direction is still extremely limited. This is especially puzzling when compared with the increase in the number of publications in the field (see Fig. 1). Another surprising issue, which extends beyond OD, is that validated models are not cited as often as we would expect them to be (Chattoe-Brown, 2022; Keijzer, 2022).

Some may argue that this could be explained by a general lack of people interested in the empirical side of opinion dynamics. However, the world seems in desperate need of empirically grounded OD models that could help us shape policies on topics such as vaccination and climate change. Thus, it is very surprising that almost nobody seems interested in meeting such a big and pressing demand.

In this short piece, I will share my experience both as a writer and as a reviewer of empirical OD papers, as well as the information I gathered from discussions with other researchers in similar roles. This will help us understand much better what is going on in the world of empirical OD and, more generally, in the empirical parts of agent-based modelling (ABM) related to psychological phenomena.

Figure 1. Publications containing the term “opinion dynamics” in abstract or title (2,527 in total). Obtained from dimensions.ai.

Theoretical versus empirical OD

The main issue I have noticed with works in empirical OD is that these papers do not conform to the standard framework of ABM papers. Indeed, in “classical” ABM we usually try to address research questions like:

  1. Can we develop a toy model to show how variables X and Y are linked?
  2. Can we explain some macroscopic phenomenon as the result of agents’ interaction?
  3. What happens to the outputs of a popular model if we add a new variable?

However, empirical papers do not fit into this framework. Indeed, empirical ABM papers ask questions such as:

  1. How accurate are the predictions made by a certain model when compared with data?
  2. How close are the micro-dynamics to the experimental data?
  3. How can we refine previous models to improve their predictive ability?

Unfortunately, many reviewers do not view the latter questions as genuine research inquiries and end up pushing the authors to modify their papers to fit the first set of questions.

For instance, my empirical works often receive the critique that “the research question is not clear”, even though the question was explicitly stated in the main text, the abstract, and even the title (see, for example, “Deriving An Opinion Dynamics Model From Experimental Data”, Carpentras et al. 2022). Similarly, a reviewer once acknowledged that the experiment presented in the paper was an interesting addition, but asked me to demonstrate why it was useful. Notice that, also in this case, the paper was about deriving a model from the dynamical behavior observed in an experiment; the experiment was therefore not just an add-on, but the core of the paper. I have also reviewed empirical OD papers in which the authors were asked, by other reviewers, to showcase how their model informs us about the world in a novel way.

As we will see in a moment, this approach does not just make authors’ lives harder; it also generates a cascade of consequences for the entire field of opinion dynamics. But to better understand our world, let us first move to a fictitious scenario.

A quick tale of natural selection of researchers

Let us now imagine a hypothetical world where people have almost no knowledge of the principles of physics. However, to keep the thought experiment simple, let us also suppose they have already developed the peer-review process. Of course, this fictitious scenario is far from realistic, but it should still help us understand what is going on with empirical OD.

In this world, a scientist named Alice writes a paper suggesting that there is an upward force when objects enter water. She also shows that many objects can float on water, therefore “validating” her model. The community is excited about this new paper which took Alice 6 months to write.

Now, consider another scientist named Bob. Inspired by Alice’s paper, Bob spends six months conducting a series of experiments demonstrating that when an object is submerged in water, it experiences an upward force proportional to its submerged volume. This pushes knowledge forward: Bob does not just claim that this force exists, he shows that it has a clear quantitative relationship to the volume of the object.

However, when reviewers read Bob’s work, they are unimpressed. They question the novelty of his research and fail to see the specific research question he is attempting to address. After all, Alice already showed that this force exists, so what is new in this paper? One of the reviewers suggests that Bob should show how his study may impact their understanding of the world.

As a result, Bob spends an additional six months to demonstrate that he could technically design a floating object made out of metal (i.e. a ship). He also describes the advantages for society if such an object was invented. Unfortunately, one of the reviewers is extremely skeptical as metal is known to be extremely heavy and should not float in water, and requests additional proof.

After multiple revisions, Bob’s work is eventually published. However, the publication process takes significantly longer than Alice’s work, and the final version of the paper addresses a variety of points, including empirical validation, the feasibility of constructing a metal boat, and evidence to support this claim. Consequently, the paper becomes densely technical, making it challenging for most people to read and understand.

In the end, Bob is left with a single paper that is hardly readable (and therefore hardly citable), while Alice, in the meantime, has published many other easier-to-read papers with a much bigger impact.

Solving the mystery of empirical opinion dynamics

The previous sections helped us understand the following points: (1) validation and empirical grounding are often not seen as a legitimate research goal by many members of the ABM community. (2) This leads to a bigger struggle when trying to publish this kind of research, and (3) reviewers often try to push the paper towards the more classic research questions, possibly resulting in a monster paper which tries to address multiple points all at once. (4) This also generates lower readability and thus less impact.

So, to sum it up: empirical OD gives you the privilege of working much more to obtain much less. This, combined with the “natural selection” of “publish or perish”, explains the scarcity of publications in this field, as authors need either to adapt to more standard ABM formulas or to “perish.” I also personally know an ex-researcher who tried to publish empirical OD until he got fed up and left the field.

Some clarifications

Let me make clear that this is a bit of a simplification and that it is, of course, possible to publish empirical work in opinion dynamics without “perishing.” However, choosing this path instead of the traditional ABM approach makes things considerably harder. It is a little like running while carrying extra weight: you may still win the race, but the weight strongly decreases the probability of that happening.

I also want to say that while here I am offering an explanation of the puzzles I presented, I do not claim that this is the only possible explanation. Indeed, I am sure that what I am offering here is only part of the full story.

Finally, I want to clarify that I do not believe anyone in the system has bad intentions. Indeed, I think reviewers are in good faith when suggesting empirically-oriented papers to take a more classical approach. However, even with good intentions, we are creating a lot of useless obstacles for an entire research field.

Trying to solve the problem

To address this issue, I have previously suggested dividing ABM researchers into theoretically and empirically oriented ones (Carpentras, 2021). The division of research into two streams could help us develop better standards both for toy models and for empirical ABMs.

To give a practical example, my empirical ABM works usually receive long and detailed comments about the model’s properties and almost no comments on the nature of the experiment or the data analysis. Am I that good at these last two steps? Or do reviewers in ABM focus very little on the empirical side of empirical ABMs? While the first explanation would be flattering for me, I am afraid reality is better depicted by the second option.

With this in mind, together with other members of the community, we have created a special interest group for Experimental ABM (see http://www.essa.eu.org/sig/sig-experimental-abm/). However, for this to be successful, we really need people to recognize the distinction between these two fields. We need to acknowledge that empirically oriented research questions are still valid, and not push papers towards the more classical approach.

I really believe empirical OD will rise, but how this will happen is still to be decided. Will it be at the cost of many researchers facing bigger struggles, or will we develop a more fertile environment? Or will some researchers create an entirely new niche outside the ABM community? The choice is up to us!

References

Carpentras, D., Maher, P. J., O’Reilly, C., & Quayle, M. (2022). Deriving An Opinion Dynamics Model From Experimental Data. Journal of Artificial Societies & Social Simulation, 25(4). https://www.jasss.org/25/4/4.html

Carpentras, D. (2021) Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/

Castellano, C., Fortunato, S., & Loreto, V. (2009). Statistical physics of social dynamics. Reviews of modern physics, 81(2), 591. DOI: 10.1103/RevModPhys.81.591

Chattoe-Brown, E. (2022). If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1 Feb 2022. https://rofasss.org/2022/02/01/citing-od-models/

Dong, Y., Zhan, M., Kou, G., Ding, Z., & Liang, H. (2018). A survey on the fusion process in opinion dynamics. Information Fusion, 43, 57-65. DOI: 10.1016/j.inffus.2017.11.009

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4). https://www.jasss.org/20/4/2.html

Keijzer, M. (2022). If you want to be cited, calibrate your agent-based model: a reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation, 9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown

Valori, L., Picciolo, F., Allansdottir, A., & Garlaschelli, D. (2012). Reconciling long-term cultural diversity and short-term collective social behavior. Proceedings of the National Academy of Sciences, 109(4), 1068-1073. DOI: 10.1073/pnas.1109514109


Carpentras, D. (2023) Why we are failing at connecting opinion dynamics to the empirical world. Review of Artificial Societies and Social Simulation, 8 Mar 2023. https://rofasss.org/2023/03/08/od-emprics


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)