By Dino Carpentras
ETH Zürich – Department of Humanities, Social and Political Sciences (GESS)
The big mystery
Opinion dynamics (OD) is a field dedicated to studying the dynamic evolution of opinions, and it is currently facing some extremely cryptic mysteries. Since 2009 there have been multiple calls for OD models to be strongly grounded in empirical data (Castellano et al., 2009; Valori et al., 2012; Flache et al., 2017; Dong et al., 2018); however, the number of articles moving in this direction is still extremely limited. This is especially puzzling when compared with the increase in the number of publications in this field (see Fig. 1). Another surprising issue, which extends beyond OD, is that validated models are not cited as often as we would expect them to be (Chattoe-Brown, 2022; Keijzer, 2022).
Some may argue that this could be explained by a general lack of interest in the empirical side of opinion dynamics. However, the world seems in desperate need of empirically grounded OD models that could help us shape policies on topics such as vaccination and climate change. Thus, it is very surprising that almost nobody is interested in meeting such a big and pressing demand.
In this short piece, I will share my experience both as a writer and as a reviewer of empirical OD papers, as well as the information I have gathered from discussions with other researchers in similar roles. This will help us understand much better what is going on in the world of empirical OD and, more generally, in the empirical side of agent-based modelling (ABM) related to psychological phenomena.
Figure 1: Publications containing the term “opinion dynamics” in the abstract or title (2,527 in total). Obtained from dimensions.ai
Theoretical versus empirical OD
The main issue I have noticed with works in empirical OD is that these papers do not conform to the standard framework of ABM papers. Indeed, in “classical” ABM we usually try to address research questions like:
- Can we develop a toy model to show how variables X and Y are linked?
- Can we explain some macroscopic phenomenon as the result of agents’ interaction?
- What happens to the outputs of a popular model if we add a new variable?
However, empirical papers do not fit into this framework. Indeed, empirical ABM papers ask questions such as:
- How accurate are the predictions made by a certain model when compared with data?
- How closely do the micro-dynamics match the experimental data?
- How can we refine previous models to improve their predictive ability?
Unfortunately, many reviewers do not view the latter questions as genuine research inquiries, and end up pushing authors to modify their papers to fit the first set of questions.
For instance, my empirical works often receive the critique that “the research question is not clear”, even though the question was explicitly stated in the main text, the abstract, and even the title (see, for example, “Deriving An Opinion Dynamics Model From Experimental Data”, Carpentras et al., 2022). Similarly, a reviewer once acknowledged that the experiment presented in the paper was an interesting addition, but asked me to demonstrate why it was useful. Notice that, also in this case, the paper was about developing a model from the dynamical behaviour observed in an experiment; the experiment was therefore not just an add-on, but the core of the paper. I have also reviewed empirical OD papers in which other reviewers asked the authors to showcase how their model informs us about the world in a novel way.
As we will see in a moment, this approach does not just make authors’ lives harder; it also generates a cascade of consequences for the entire field of opinion dynamics. But to better understand our world, let us first move to a fictitious scenario.
A quick tale of natural selection among researchers
Let us now imagine a hypothetical world where people have almost no knowledge of the principles of physics. However, to keep the thought experiment simple, let us also suppose they have already developed the peer-review process. Of course, this fictitious scenario is far from realistic, but it should still help us understand what is going on with empirical OD.
In this world, a scientist named Alice writes a paper suggesting that there is an upward force when objects enter water. She also shows that many objects can float on water, therefore “validating” her model. The community is excited about this new paper which took Alice 6 months to write.
Now, consider another scientist named Bob. Bob, inspired by Alice’s paper, spends 6 months conducting a series of experiments demonstrating that when an object is submerged in water, it experiences an upward force proportional to its submerged volume. This pushes knowledge forward, as Bob does not just claim that this force exists; he shows that it has a clear quantitative relationship to the volume of the object.
However, when reviewers read Bob’s work, they are unimpressed. They question the novelty of his research and fail to see the specific research question he is attempting to address. After all, Alice already showed that this force exists, so what is new in this paper? One of the reviewers suggests that Bob should show how his study may impact their understanding of the world.
As a result, Bob spends an additional six months demonstrating that he could, in principle, design a floating object made of metal (i.e. a ship). He also describes the advantages for society if such an object were invented. Unfortunately, one of the reviewers is extremely skeptical, as metal is known to be extremely heavy and should not float in water, and requests additional proof.
After multiple revisions, Bob’s work is eventually published. However, the publication process takes significantly longer than it did for Alice’s work, and the final version of the paper addresses a variety of points, including empirical validation, the feasibility of constructing a metal boat, and evidence to support this claim. Consequently, the paper becomes densely technical, making it challenging for most people to read and understand.
In the end, Bob is left with a single, hardly readable (and therefore hardly citable) paper, while Alice, in the meantime, has published many other easier-to-read papers with a much bigger impact.
Solving the mystery of empirical opinion dynamics
The previous sections helped us understand the following points: (1) validation and empirical grounding are often not seen as legitimate research goals by many members of the ABM community. (2) This leads to greater struggles when trying to publish this kind of research, and (3) reviewers often try to push papers towards the more classical research questions, possibly resulting in a monster-paper that tries to address multiple points all at once. (4) This, in turn, lowers readability and therefore impact.
So, to sum it up: empirical OD gives you the privilege of working much more to obtain much less. This, combined with the “natural selection” of “publish or perish”, explains the scarcity of publications in this field, as authors need either to adapt to the more standard ABM formulas or to “perish.” I also personally know an ex-researcher who tried to publish empirical OD until he got fed up and left the field.
Let me make clear that this is a bit of a simplification and that, of course, it is definitely possible to publish empirical work in opinion dynamics without “perishing.” However, choosing this path instead of the traditional ABM approach greatly increases the difficulty. It is a little like running while carrying extra weight: you may still win the race, but the weight strongly decreases the probability of that happening.
I also want to say that while here I am offering an explanation of the puzzles I presented, I do not claim that this is the only possible explanation. Indeed, I am sure that what I am offering here is only part of the full story.
Finally, I want to clarify that I do not believe anyone in the system has bad intentions. Indeed, I think reviewers are acting in good faith when suggesting that empirically oriented papers take a more classical approach. However, even with good intentions, we are creating a lot of useless obstacles for an entire research field.
Trying to solve the problem
To address this issue, I have previously suggested dividing ABM researchers into theoretically and empirically oriented streams (Carpentras, 2021). Dividing research into two streams could help us develop better standards both for toy models and for empirical ABMs.
To give a practical example, my empirical ABM works usually receive long and detailed comments about the model’s properties and almost no comments on the nature of the experiment or the data analysis. Am I that good at these last two steps? Or do reviewers in ABM simply focus very little on the empirical side of empirical ABMs? While the first explanation would be flattering, I am afraid the reality is better depicted by the second option.
With this in mind, together with other members of the community, we have created a special interest group for Experimental ABM (see http://www.essa.eu.org/sig/sig-experimental-abm/). However, for this to succeed, we really need people to recognize the distinction between these two fields. We need to acknowledge that empirically oriented research questions are still valid, and not push papers towards the more classical approach.
I really believe empirical OD will rise, but how this will happen is still to be decided. Will it come at the cost of many researchers facing greater struggles, or will we develop a more fertile environment? Or maybe some researchers will create an entirely new niche outside of the ABM community? The choice is up to us!
Carpentras, D., Maher, P. J., O’Reilly, C., & Quayle, M. (2022). Deriving An Opinion Dynamics Model From Experimental Data. Journal of Artificial Societies and Social Simulation, 25(4). https://www.jasss.org/25/4/4.html
Carpentras, D. (2021). Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/
Castellano, C., Fortunato, S., & Loreto, V. (2009). Statistical physics of social dynamics. Reviews of Modern Physics, 81(2), 591. DOI: 10.1103/RevModPhys.81.591
Chattoe-Brown, E. (2022). If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1 Feb 2022. https://rofasss.org/2022/02/01/citing-od-models/
Dong, Y., Zhan, M., Kou, G., Ding, Z., & Liang, H. (2018). A survey on the fusion process in opinion dynamics. Information Fusion, 43, 57-65. DOI: 10.1016/j.inffus.2017.11.009
Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4). https://www.jasss.org/20/4/2.html
Keijzer, M. (2022). If you want to be cited, calibrate your agent-based model: a reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation. 9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown
Valori, L., Picciolo, F., Allansdottir, A., & Garlaschelli, D. (2012). Reconciling long-term cultural diversity and short-term collective social behavior. Proceedings of the National Academy of Sciences, 109(4), 1068-1073. DOI: 10.1073/pnas.1109514109
Carpentras, D. (2023) Why we are failing at connecting opinion dynamics to the empirical world. Review of Artificial Societies and Social Simulation, 8 Mar 2023. https://rofasss.org/2023/03/08/od-emprics
© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)
Comments
I wonder whether you might be satisfied with my recent chapter (doi: 10.1007/978-3-030-54936-7_6) and other papers which I wrote over the years.
It is definitely fascinating work, and I’m very glad to see other people working on developing the empirical side of opinion dynamics. Actually, also I have already read an early version of this chapter!
However, I’m not sure what you meant by “satisfied.”
Well, you were obviously asking for papers like this, and I wondered whether you see this as something that meets the demands you raised. But I see now that I have worked in the direction of what you think should happen. By the way, you suggest dividing ABM researchers into theoretically and empirically oriented groups. Currently, this seems to be empirically true, so to speak, but shouldn’t we try to convince members of both groups that theoretical and empirical research should go hand in hand? Otherwise one group does more or less mathematics, whereas the other counts peas or beans …
Hi Dino, I think there is another issue worth mentioning: the general absence of empirically-supported and theoretically based socio-cognitive mechanisms in opinion dynamics work. I understand that perhaps for predictive models and applications this is less of a concern, but certainly it’s a major gap if we want explanations that are rooted in psychology and cognitive science. I think caring about this gap–and strongly nudging others to care about this gap–would help grow the general awareness that better, more refined, and more grounded explanations matter.
As an early-career researcher looking to help with connecting models to empirical data, I was excited to look at your data. But I can’t find it. Where is the experimental data for the study in the JASSS paper you linked? It doesn’t seem to be on the JASSS page for that study, nor on your website.
Funnily enough, that paper passed through 4 stages of review and nobody asked for the data! Kind of depressing…
I just uploaded them to GitHub.
I also added the data dictionary with the full questions, but, if anything is unclear, just let me know!
Regarding the rest of the argument, I completely agree! Actually, with OD we are still at a very early stage. So early that we still don’t even know what our problems are.
For example, I recently started studying the impact of psychometric distortion (i.e. the fact that data are ordinal) in OD. Even supposing no measurement error, the ordinality alone is enough to transform one model into another. This is an entirely new set of problems and possibilities that we will need to understand.
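To make the point about ordinality concrete, here is a minimal sketch (my own illustration, not the analysis from any published paper) of how an ordinal scale alone can alter a bounded-confidence model: the same Deffuant-style update is run on continuous opinions and on opinions that are snapped to a hypothetical 7-point scale after every interaction. The parameter values (`eps`, `mu`, the 7-point grid) are arbitrary assumptions for illustration.

```python
import random

def deffuant_step(x, eps=0.3, mu=0.5):
    """One pairwise bounded-confidence update: a random pair of agents
    compromises if their opinions differ by less than eps."""
    i, j = random.sample(range(len(x)), 2)
    if abs(x[i] - x[j]) < eps:
        shift = mu * (x[j] - x[i])
        x[i] += shift
        x[j] -= shift

def to_grid(v, points=7):
    """Snap a continuous opinion in [0, 1] to a hypothetical ordinal scale."""
    return round(v * (points - 1)) / (points - 1)

random.seed(0)
continuous = [random.random() for _ in range(200)]
ordinal = [to_grid(v) for v in continuous]

for _ in range(20_000):
    deffuant_step(continuous)
    deffuant_step(ordinal)
    # Psychometric distortion: opinions are only ever *observed* on the
    # ordinal scale, so each update is snapped back to the nearest point.
    ordinal = [to_grid(v) for v in ordinal]

print("distinct ordinal opinion values left:", len(set(ordinal)))
```

The snapped process can only ever occupy seven opinion values, so compromises smaller than half a grid step are erased entirely, which is one simple way the ordinal version behaves like a different model from the continuous one.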
There are plenty of things to discover and explore in empirical OD. But to do that we need an entire community working on these problems. And for that, we need a system that incentivizes this type of research, instead of pushing people away.
Thank you for a very interesting piece. I would say that my experience of reviewing to some extent matches yours. This paper (https://www.socresonline.org.uk/19/1/16.html) was published where it was because JASSS would not accept it (and it now has a younger sibling that is following the same career). But of course although the papers _do_ get published, JASSS readers do not have to change their preconceptions! As another relevant example I have heard someone state with assurance (but on no basis that I can see being on the Editorial Board) that JASSS is a theoretical journal and should not therefore be expected to publish empirical work. How can such views with no grounding and no agreement still have power? I was also told by a reviewer what a project I was actually on was “about” and it was clear that s/he was _not_ on it for that reason! Peer reviewing is vital but I do wonder if reviewers need to be more accountable for the claims they make.
But I think what I would add to your piece from my anecdotal experience is the problem of “tacit” beliefs and different “levels” of scientific endeavour. It is much more permissible to invent your own OD model and test it empirically but when you try to test existing models you are told that you have not “understood” the intentions of those models (even though those intentions, as far as I have been able to discover, are not documented). For example, a view floats about in reviews and seminar discussion that the Zaller-Deffuant model applies to “town hall meetings” or perhaps to experimental data about groups. But I cannot find that “use warning” in any of the papers themselves and, in fact, these papers often discuss “global scale” examples like the rise of Nazism or the spread of fundamentalist Islam. Readers might therefore be forgiven for believing that the presented model is not ruled out from “applying” to phenomena on that scale. Otherwise, to put it politely, the examples chosen are misleading.
But then we come to the different levels of scientific agreement. Personally I think it is fine that some models should be “theoretical” and that, therefore, perhaps direct empirical commitments are not relevant to them. But then such models cannot claim to talk about real phenomena and are obliged to make some scientific commitments of their own: If not empirically, how can one decide beyond “personal taste” that theoretical model x is “better” than theoretical model y? I really get the feeling that the argument is “we don’t like what you tried to do empirically but we won’t do anything else ourselves either”. And then that looks like wanting a blank cheque to “play”. I think it is fine to disagree about the aims of models but all defined model aims should have associated “rigour tests” that are objective. The tests for “empirical ABM” are clear: It is the “Gilbert and Troitzsch box”. But what are the non subjective analogues for other approaches? I cannot find the “methodology of theoretical modelling” stated anywhere.
So the $64,000 question for me is whether empirical ABM will be accepted if it argues better, louder or longer or whether it has to make its own institutional arrangements to get a fairer hearing.
Thank you for your comment!
I actually had similar problems with these toy models. I had a couple of papers rejected because in these papers I was testing the impact of data properties (e.g. ordinality) on the Deffuant model. And, as you mentioned, I have received the comment that these models are not supposed to be used with real data.
(btw, in another publication I used them with real data and obtained quite interesting results!)
Coming from physics, I was also shocked the first time I noticed that almost all models in OD have little to no relationship with the empirical world. I now see some benefit in having *some* toy models for exploring phenomena, but I think the situation is getting a little out of hand.
I think now it’s quite clear that we have virtually infinite possible ways to show some kind of polarization appearing out of uniformly distributed data (i.e. we have too many toy models doing that). So probably now we should stop asking “can we produce polarization with some simple rules?” (as the answer is a clear “yes”) and start asking something like “which of these simple rules produces realistic results?”
Otherwise, as you said, how can we distinguish which model is better? Why should we use the Deffuant model instead of the Hegselmann-Krause model?
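For readers less familiar with the two models being compared, here is a minimal side-by-side sketch of their update rules (a toy illustration with arbitrary parameter values, not taken from any of the cited papers). Both start from uniformly distributed opinions and both end up clustered, which is exactly the problem: the qualitative outcome alone cannot tell us which rule is more realistic.

```python
import random

def deffuant_step(x, eps=0.25, mu=0.5):
    """Deffuant-style rule: one random pair compromises if their
    opinions are within the confidence bound eps."""
    i, j = random.sample(range(len(x)), 2)
    if abs(x[i] - x[j]) < eps:
        shift = mu * (x[j] - x[i])
        x[i] += shift
        x[j] -= shift

def hk_step(x, eps=0.25):
    """Hegselmann-Krause rule: every agent synchronously moves to the
    mean of all opinions within its confidence bound (including its own)."""
    new = []
    for xi in x:
        peers = [v for v in x if abs(v - xi) <= eps]
        new.append(sum(peers) / len(peers))
    return new

def clusters(x, tol=0.01):
    """Count groups of opinions separated by more than tol."""
    xs = sorted(x)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > tol)

random.seed(1)
d = [random.random() for _ in range(100)]
h = list(d)

for _ in range(50_000):
    deffuant_step(d)
for _ in range(50):
    h = hk_step(h)

print("Deffuant clusters:", clusters(d))
print("Hegselmann-Krause clusters:", clusters(h))
```

Both rules collapse a uniform opinion distribution into a handful of clusters; deciding between them on theoretical grounds alone is exactly the "personal taste" problem raised above.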
I think that, paradoxically, at the moment empirical ABM can perform quite well outside of our conventional ABM circle. I really hope we will not need to “break up” with traditional ABM, which is why I am suggesting this division instead. But I am quite sure that empirical ABM will rise one way or another. There is currently too much demand for it not to flourish.