Tag Archives: opinion dynamics

“One mechanism to rule them all!” A critical comment on an emerging categorization in opinion dynamics

By Sven Banisch

Department for Sociology, Institute of Technology Futures
Karlsruhe Institute of Technology

It has become common in the opinion dynamics community to categorize models according to how two agents $i$ and $j$ change their opinions $o_i$ and $o_j$ in interaction (Flache et al. 2017, Lorenz et al. 2021, Keijzer and Mäs 2022). Three major classes have emerged. First, models of assimilation or positive social influence are characterized by a reduction of opinion differences in interaction, as achieved, for instance, by classical models with averaging (French 1956, Friedkin and Johnsen 2011). Second, in models with repulsion or negative influence, agents may be driven further apart if they are already too distant (Jager and Amblard 2005, Flache and Macy 2011). Third, reinforcement models are characterized by the fact that agents on the same side of the opinion spectrum reinforce their opinion and become more extreme (Martins 2008, Banisch and Olbrich 2019, Baumann et al. 2020). This categorization is useful for differentiating classes of models along with their assumptions, for assessing whether different model implementations belong to the same class, and for understanding the macroscopic phenomena that can be expected. It is, however, not without problems and may lead to misclassification and misunderstanding.

This comment aims to provide a critical — yet constructive — perspective on this emergent theoretical language for model synthesis and comparison. It directly links to a recent comment in this forum (Carpentras 2023) that describes some of the difficulties researchers face when developing empirically grounded or validated models of opinion dynamics, which often “do not conform to the standard framework of ABM papers”. I had very similar experiences during a long review process for a paper (Banisch and Shamon 2021) that, in my view, rigorously advances argument communication theory — and its models — through experimental research. In large part, the process was so difficult because authors from different branches of opinion dynamics speak different languages, and I feel that some conventions may settle us into a “vicious cycle of isolation” (Carpentras 2020) and closure. But rather than suggesting a divide into theoretically and empirically oriented opinion dynamics research, I would like to work towards a common ground for empirical and theoretical ABM research through a more accurate use of opinion dynamics language.

The classification scheme for basic opinion change mechanisms might be particularly problematic for opinion models that take cognitive mechanisms and more complex opinion structures into account. These more complex models are required to capture linguistic associations observed in real debates, or to link more closely to a specific experimental design. In this note, I will look at argument communication models (ACMs) (Mäs and Flache 2013, Feliciani et al. 2021, Banisch and Olbrich 2021, Banisch and Shamon 2021) to show how theoretically inspired model classification can be misleading. I will first show that the classical ACM by Mäs and Flache (2013) has been repeatedly misclassified as a reinforcement model although it is purely averaging in terms of the implied attitude changes. Second, ACMs become reinforcing or contagious (Lorenz et al. 2021) only when biased processing is incorporated into argument-induced opinion change such that agents favor arguments aligned with their opinion. Third, when biases become large, ACMs may feature patterns of opinion adaptation which — according to the above categorization — would be classified as negative influence.

Opinion change functions for the three model classes

Let us start by looking at the opinion change assumptions entailed in “typical” positive and negative influence and reinforcement models. Following Flache et al. (2017) and Lorenz et al. (2021), we will consider opinion change functions of the following form:

$$\Delta o_i = f(o_i, o_j).$$

That is, the opinion change of agent $i$ is given as a function of $i$'s opinion and the opinion of an interaction partner $j$. This is sufficient to characterize an ABM with dyadic interaction in which, repeatedly, two agents with opinions $(o_i, o_j)$ are chosen at random and $f(o_i, o_j)$ is applied. Here we deal with continuous opinions in the interval $o_i \in [-1,1]$, the context in which the model categorizations have mainly been introduced. Notice that some authors refer to $f$ as an influence response function, but as this notion has been introduced in the context of discrete choice models (Lopez-Pintado and Watts 2008, Mäs 2021) governing the behavioral response of agents to the behavior in their neighborhood, we will stick to the term opinion change function (OCF) here. OCFs hence map two opinions to the induced opinion change, $[-1,1]^2 \to \mathbb{R}$, and we can depict them in the form of a contour plot as shown in Figure 1.

The simplest form of a positive influence OCF is weighted averaging:

$$\Delta o_i = \mu\,(o_j - o_i).$$

That is, agent $i$ approaches the opinion of another agent $j$ by a parameter $\mu$ times the distance between $i$ and $j$. This function is shown on the left of Figure 1. If $o_i < o_j$ (above the diagonal where $o_j = o_i$), $i$ approaches the opinion of $j$ from below. The opinion change is positive, indicating a shift to the right (red shades). If $o_j < o_i$ (below the diagonal), $i$ approaches $j$ from above, implying a negative opinion change and a shift to the left (blue shades). Hence, agents left of the diagonal will shift rightwards, and agents right of the diagonal will shift to the left.

Macroscopically, these models are well known to converge to consensus on connected networks. However, Deffuant et al. (2000) and Hegselmann and Krause (2002) introduced bounded confidence to circumvent global convergence — and many others have followed with more sophisticated notions of homophily. This class of models (models with similarity bias in Flache et al. 2017) affects the OCF essentially by setting $f = 0$ for opinion pairs that are beyond a certain distance threshold from the diagonal, as the sketch below illustrates. I will briefly comment on homophily later.

Negative influence can be seen as an extension of bounded confidence such that opinion pairs that are too distant experience a repulsive force driving their opinions further apart. Following the review by Flache et al. (2017), we rely on the OCF from Jager and Amblard (2005) as the paradigmatic case. However, the function shown in Flache et al. (2017) seems to be slightly mistaken, so we resort to the original implementation of negative influence by Jager and Amblard (2005):

$$\Delta o_i = \begin{cases} \mu\,(o_j - o_i) & \text{if } |o_i - o_j| < u \\ 0 & \text{if } u \le |o_i - o_j| \le t \\ \mu\,(o_i - o_j) & \text{if } |o_i - o_j| > t \end{cases}$$
That is, if the opinion distance $|o_i - o_j|$ is below a threshold $u$, we have positive influence as before. If the distance $|o_i - o_j|$ is larger than a second threshold $t$, there is repulsive influence such that $i$ is driven away from $j$. In between these two thresholds there is a band of no opinion change, $f(o_i, o_j) = 0$, just as for bounded confidence. This function is shown in the middle of Figure 1 ($u = 0.4$ and $t = 0.7$). In this case, we observe a left shift towards a more negative opinion (blue shades) above the diagonal and sufficiently far from it (governed by $t$). By symmetry, a right shift to a more positive opinion is observed below the diagonal when $o_i$ is sufficiently larger than $o_j$. Negative influence is at work in these regions, such that an agent $i$ on the right side of the opinion scale ($o_i > 0$) will shift towards an even more rightist position when interacting with a leftist agent $j$ with opinion $o_j < 0$ (and symmetrically on the other side).

Notice also that this implementation does not ensure that opinions remain bound to the interval $[-1,1]$: negative opinion changes occur even if $o_i$ is already at $-1$, and vice versa at the positive extreme. Typically this is resolved artificially by forcing opinions back into the interval once they exceed it, but a more elegant and psychologically motivated solution has been proposed by Lorenz et al. (2021), who introduce a polarity factor (incorporated below).

Finally, reinforcement models are characterized by the fact that agents on the same side of the opinion scale reinforce one another and become more extreme in interaction. As pointed out by Lorenz et al. (2021), the most paradigmatic case of reinforcement is simple contagion, and the OCF used here for illustration is adapted from their formulation:

$$\Delta o_i = \alpha\,\mathrm{sign}(o_j).$$

That is, agent $j$ signals whether she is in favor of ($o_j > 0$) or against ($o_j < 0$) the object of opinion, and agent $i$ adjusts his opinion by taking a step $\alpha$ in that direction. This means that a positive opinion change is observed whenever $i$ meets an agent with an opinion larger than zero: agent $i$'s opinion will shift rightwards and become more positive. Likewise, a negative opinion change and shift to the left is observed whenever $o_j$ is negative. Notice that, in reinforcement models, opinions assimilate when two agents with opposing opinions interact, so that the induced opinion changes are similar to positive influence in some regions of the space. As for negative influence, this OCF does not ensure that opinions remain in $[-1,1]$, but see Banisch and Olbrich (2019) for a closely related reinforcement learning model that remains endogenously bound to the interval.

Argument-induced opinion change

Compared to models that operate entirely on the level of opinions $o_i \in [-1,1]$ and are hence completely specified by an OCF, argument-based models are slightly more complex, and the derivation of OCFs from the model rules is not straightforward. But let us first, at least briefly, describe the model as introduced in Banisch and Shamon (2021).

In the model, agents hold $M$ pro- and $M$ counterarguments, each of which may be either zero (disbelief) or one (belief). The opinion of an agent is defined as the number of believed pro arguments minus the number of believed con arguments. For instance, if an agent believes 3 pro arguments and only one con argument, her opinion is $o_i = 2$. For the purposes of this illustration, we normalize opinions to lie between $-1$ and $1$ by dividing by $M$: $o_i \mapsto o_i / M$. In interaction, agent $j$ acts as a sender articulating an argument to a receiving agent $i$. The receiver $i$ takes over that argument with probability

$$p_\beta = \frac{1}{1 + \exp(-\beta\, o_i\, \mathrm{dir}(arg))},$$

where the function dir(arg) designates whether the new argument implies positive or negative opinion change. This probability accounts for the fact that agents are more willing to accept information that coheres with their opinion. The free parameter β models the strength of this bias.

From these rules, we can derive an OCF of the form $\Delta o_i = f(o_i, o_j)$ by considering (i) the probability that $j$ chooses an argument with a certain direction and (ii) the probability that this argument is new to $i$ (see Banisch and Shamon 2021 for the general approach):

$$\Delta o_i = \frac{1}{4M}\left( o_j - o_i + (1 - o_i o_j)\tanh\!\left(\frac{\beta o_i}{2}\right) \right).$$

Notice that this is an approximation, because the ACM is not reducible to the level of opinions. First, there are several combinations of pro and con arguments that give rise to the same opinion (e.g. an unnormalized opinion of +1 is implied by 4 pro and 3 con arguments as well as by 1 pro and 0 con arguments). Second, the probability that $j$'s argument is new to $i$ depends on the specific argument strings, and there is a tendency for these strings to become correlated over time. These correlations lead to memory effects that become visible in the long convergence times of ACMs (Mäs and Flache 2013, Banisch and Olbrich 2021, Banisch and Shamon 2021). The complete mathematical characterization of these effects is far from trivial and beyond the scope of this comment. However, they do not affect the qualitative picture presented here.

1. Argument models without bias are averaging.

With that OCF it becomes directly visible that it is incorrect to place the original ACM (without bias) within the class of reinforcement models. No bias means $\beta = 0$, in which case we obtain:

$$\Delta o_i = \frac{o_j - o_i}{4M}.$$

That is, we obtain the typical positive influence OCF with $\mu = 1/4M$, shown on the left of Figure 2.

This may appear counter-intuitive (it did in the reviews) because the ACM by Mäs and Flache (2013) generates the ideal-typical pattern of bi-polarization in which two opinion camps approach the extreme ends of the opinion scale. But this macro effect is driven by homophily and the associated changes in the social interaction structure. It is important to note that homophily does not transform an averaging OCF into a reinforcing one. When implemented as bounded confidence, it only cuts off certain regions by setting $f(o_i, o_j) = 0$. Homophily is a social mechanism that acts at another layer, and its reinforcing effect in ACMs is conditional on the social configuration of the entire population. In the models, it generates biased argument pools in a way strongly reminiscent of Sunstein's law of group polarization (2002). Given that, the main result by Mäs and Flache (2013) (“differentiation without distancing”) is all the more remarkable! But it is at least misleading to associate it with models that implement reinforcement mechanisms (Martins 2008, Banisch and Olbrich 2019, Baumann et al. 2020).

2. Argument models with moderate bias are reinforcing.

It is only when biased processing is enabled that ACMs become what are called reinforcement models. This is clearly visible on the right of Figure 2, where a bias of $\beta = 2$ has been used. If, in Figure 1, we accounted for the polarity effect that prevents opinions from exceeding the interval $[-1,1]$ (Lorenz et al. 2021), the match between the right-hand sides of Figures 1 and 2 would be even more remarkable.

This transition from averaging to reinforcement through biased processing shows that the characterization of models in terms of induced opinion changes (OCFs) may be very useful and enables model comparison. Namely, at the macro scale, ACMs with moderate bias behave precisely like other reinforcement models. In a dense group, this leads to what is called group polarization in psychology: the whole group collectively shifts to an extreme opinion at one side of the spectrum. On networks with communities, these radicalization processes may take different directions in different parts of the network and produce collective-level bi-polarization (Banisch and Olbrich 2019).

3. Argument models with strong bias may appear as negative influence.

Finally, when the $\beta$ parameter becomes larger, the ACM leaves the regime of reinforcement models and features patterns that we would associate with negative influence. This is shown in the middle of Figure 2. Under strong biased processing, a leftist agent $i$ with an opinion of (say) $o_i = -0.75$ will shift further to the left when encountering a rightist agent $j$ with an opinion of (say) $o_j = +0.5$. Within the existing classes of models, such a pattern is only possible under negative influence. ACMs with biased processing offer a psychologically compelling alternative, and it is an important empirical question whether observed negative influence effects (Bail et al. 2018) are actually due to repulsive forces or due to cognitive biases in information reception.

The reader will notice that, when looking at the entire OCF on the space spanned by $(o_i, o_j) \in [-1,1]^2$, there are qualitative differences between the ACM and the OCF defined in Jager and Amblard (2005). The two mechanisms are different and imply different response functions (OCFs). But for some specific opinion pairs the two functions are hardly discernible, as shown in the next figure. The blue solid curve shows the OCF of the argument model for $\beta = 5$ and an agent $i$ interacting with a neutral agent $j$, i.e. $f(o_i, 0)$. The ACM with biased processing is aligned with the experimental design and entails a ceiling effect, so that maximally positive (negative) agents cannot further increase (decrease) their opinion. To enable a fair comparison, we introduce the polarity effect used in Lorenz et al. (2021) into the negative influence OCF, ensuring that opinions remain within $[-1,1]$. That is, for the dashed red curve the factor $(1 - o_i^2)$ (cf. Eq. 6 in Lorenz et al. 2021) is multiplied with the function from Jager and Amblard (2005), using $u = 0.2$ and $t = 0.4$. In this specific case, the shapes of the two OCFs are extremely similar. An experimental test would hardly distinguish the two.

Macroscopically, strong biased processing leads to collective bi-polarization even in the absence of homophily (Banisch and Shamon 2021). This insight has been particularly puzzling, even mind-boggling, to some of the referees. But the reason is precisely the fact that ACMs with biased processing may produce negative influence opinion change phenomena. This indicates, among other things, that one should be very careful about drawing collective-level conclusions, such as a depolarizing effect of filter bubbles, from empirical signatures of negative influence (Bail et al. 2018). While that argumentation seems at least puzzling on the grounds of “classical” negative influence models (Mäs and Bischofberger 2015, Keijzer and Mäs 2022), it could be clearly rejected if the empirical negative influence effects are attributed to the cognitive mechanism of biased processing. In ACMs, homophily generally enhances polarization tendencies (Banisch and Shamon 2021).

What to take from here?

Opinion dynamics is at a challenging stage! We have problems with empirical validation (Sobkowicz 2009, Flache et al. 2017) but seem not to sufficiently acknowledge those who advance the field in that direction (Chattoe-Brown 2022, Keijzer 2022, Carpentras 2023). It is greatly thanks to the RofASSS forum that these deficits have become visible. Against that background, this comment is written as a critical one, because developing models with a tight connection to empirical data does not always fit the core model classes derived from research with a theoretical focus.

The prolonged review process for Banisch and Shamon (2021) — strongly reminiscent of the patterns described by Carpentras (2023) — revealed that there is a certain preference in the community for models building on “opinions” as the smallest, atomic analytical unit. This is very problematic for opinion models that take cognitive mechanisms and complexity into due account. Moreover, we barely see “opinions” in empirical measurements; rather, we observe argumentative statements and associations articulated on the web and elsewhere. In my view, we have to acknowledge that opinion dynamics is a field that cannot isolate itself from psychology and cognitive science, because intra-individual mechanisms of opinion change are at the core of all our models. And just as new phenomena may emerge as we go from individuals to groups or populations, surprises may happen when a cognitive layer of beliefs, arguments, and their associations lies underneath. We can treat these emergent effects as mere artifacts of expendable cognitive detail, or we can truly embrace the richness of opinion dynamics as a field spanning multiple levels from cognition to macro social phenomena.

On the other hand, the analysis of the OCF “emerging” from argument exchange also points back to the atomic layer of opinions as a useful reference for model comparisons and synthesis. Specific patterns of opinion updates emerge in any opinion dynamics model however complicated its rules and their implementation might be. For understanding macro effects, more complicated psychological mechanisms may be truly relevant only in so far as they imply qualitatively different OCFs. The functional form of OCFs may serve as an anchor of reference for “model translations” allowing us to better understand the role of cognitive complexity in opinion dynamics models.

What this research comment — clearly overstating in its title — also aims to show is that modeling grounded in psychology and cognitive science does not automatically mean we leave behind the principles of parsimony. The ACM with biased processing has only a single effective parameter ($\beta$) but is rich enough to span three very different classes of models. It is averaging for $\beta = 0$, it behaves like a reinforcement model for moderate bias ($\beta = 2$), and it may look like negative influence for larger values of $\beta$. For me, this provides part of an explanation for the misunderstandings we experienced in the review process for Banisch and Shamon (2021). It is simply inappropriate to discuss ACMs with biased processing within the categories of “classical” models of assimilation, repulsion, and reinforcement. Still, the review process has been insightful, and I am very grateful that traditional journals afford such productive spaces of scientific discourse. My main take-home message from this whole enterprise is that the current language requires caution not to conflate opinion change phenomena with opinion change mechanisms.

Acknowledgements

I am grateful to the Sociology and Computational Social Science group at KIT  — Michael Mäs, Fabio Sartori, and Andreas Reitenbach — for their feedback on a preliminary version of this commentary. I also thank Dino Carpentras for his preliminary reading.

This comment would not have been written without the three anonymous referees at Sociological Methods and Research.

References

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4), 2. http://jasss.soc.surrey.ac.uk/20/4/2.html. DOI: 10.18564/jasss.3521

Lorenz, J., Neumann, M., & Schröder, T. (2021). Individual attitude change and societal dynamics: Computational experiments with psychological theories. Psychological Review, 128(4), 623. https://psycnet.apa.org/doi/10.1037/rev0000291

Keijzer, M. A., & Mäs, M. (2022). The complex link between filter bubbles and opinion polarization. Data Science, 5(2), 139-166. DOI:10.3233/DS-220054

French Jr, J. R. (1956). A formal theory of social power. Psychological Review, 63(3), 181. DOI: 10.1037/h0046123

Friedkin, N. E., & Johnsen, E. C. (2011). Social influence network theory: A sociological examination of small group dynamics (Vol. 33). Cambridge University Press.

Jager, W., & Amblard, F. (2005). Uniformity, bipolarization and pluriformity captured as generic stylized behavior with an agent-based simulation model of attitude change. Computational & Mathematical Organization Theory, 10, 295-303. https://link.springer.com/article/10.1007/s10588-005-6282-2

Flache, A., & Macy, M. W. (2011). Small Worlds and Cultural Polarization. Journal of Mathematical Sociology, 35, 146-176. https://doi.org/10.1080/0022250X.2010.532261

Martins, A. C. (2008). Continuous opinions and discrete actions in opinion dynamics problems. International Journal of Modern Physics C, 19(04), 617-624. https://doi.org/10.1142/S0129183108012339

Banisch, S., & Olbrich, E. (2019). Opinion polarization by learning from social feedback. The Journal of Mathematical Sociology, 43(2), 76-103. https://doi.org/10.1080/0022250X.2018.1517761

Baumann, F., Lorenz-Spreen, P., Sokolov, I. M., & Starnini, M. (2020). Modeling echo chambers and polarization dynamics in social networks. Physical Review Letters, 124(4), 048301. https://doi.org/10.1103/PhysRevLett.124.048301

Carpentras, D. (2023). Why we are failing at connecting opinion dynamics to the empirical world. Review of Artificial Societies and Social Simulation, 8th March 2023. https://rofasss.org/2023/03/08/od-emprics/

Banisch, S., & Shamon, H. (2021). Biased Processing and Opinion Polarisation: Experimental Refinement of Argument Communication Theory in the Context of the Energy Debate. Available at SSRN 3895117. The most recent version is available as an arXiv preprint arXiv:2212.10117.

Carpentras, D. (2020) Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/

Mäs, M., & Flache, A. (2013). Differentiation without distancing. Explaining bi-polarization of opinions without negative influence. PloS One, 8(11), e74516. https://doi.org/10.1371/journal.pone.0074516

Feliciani, T., Flache, A., & Mäs, M. (2021). Persuasion without polarization? Modelling persuasive argument communication in teams with strong faultlines. Computational and Mathematical Organization Theory, 27, 61-92. https://link.springer.com/article/10.1007/s10588-020-09315-8

Banisch, S., & Olbrich, E. (2021). An Argument Communication Model of Polarization and Ideological Alignment. Journal of Artificial Societies and Social Simulation, 24(1), 1. https://www.jasss.org/24/1/1.html. DOI: 10.18564/jasss.4434

Mäs, M. (2021). Interactions. In Research Handbook on Analytical Sociology (pp. 204-219). Edward Elgar Publishing.

Lopez-Pintado, D., & Watts, D. J. (2008). Social influence, binary decisions and collective dynamics. Rationality and Society, 20(4), 399-443. https://doi.org/10.1177/1043463108096787

Deffuant, G., Neau, D., Amblard, F., & Weisbuch, G. (2000). Mixing beliefs among interacting agents. Advances in Complex Systems, 3(01n04), 87-98.

Hegselmann, R., & Krause, U. (2002). Opinion Dynamics and Bounded Confidence Models, Analysis and Simulation. Journal of Artificial Societies and Social Simulation, 5(3),2. https://jasss.soc.surrey.ac.uk/5/3/2.html

Sunstein, C. R. (2002). The Law of Group Polarization. The Journal of Political Philosophy, 10(2), 175-195. https://dx.doi.org/10.2139/ssrn.199668

Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. F., … & Volfovsky, A. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216-9221. https://doi.org/10.1073/pnas.1804840115

Mäs, M., & Bischofberger, L. (2015). Will the personalization of online social networks foster opinion polarization? Available at SSRN 2553436. https://dx.doi.org/10.2139/ssrn.2553436

Sobkowicz, P. (2009). Modelling opinion formation with physics tools: Call for closer link with reality. Journal of Artificial Societies and Social Simulation, 12(1), 11. https://www.jasss.org/12/1/11.html

Chattoe-Brown, E. (2022). If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1 Feb 2022. https://rofasss.org/2022/02/01/citing-od-models/

Keijzer, M. (2022). If you want to be cited, calibrate your agent-based model: a reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation.  9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown


Banisch, S. (2023) “One mechanism to rule them all!” A critical comment on an emerging categorization in opinion dynamics. Review of Artificial Societies and Social Simulation, 26 Apr 2023. https://rofasss.org/2023/04/26/onemechanism


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Why we are failing at connecting opinion dynamics to the empirical world

By Dino Carpentras

ETH Zürich – Department of Humanities, Social and Political Sciences (GESS)

The big mystery

Opinion dynamics (OD) is a field dedicated to studying the dynamic evolution of opinions, and it is currently facing some extremely cryptic mysteries. Since 2009 there have been multiple calls for OD models to be strongly grounded in empirical data (Castellano et al., 2009; Valori et al., 2012; Flache et al., 2017; Dong et al., 2018), yet the number of articles moving in this direction is still extremely limited. This is especially puzzling when compared with the increase in the number of publications in this field (see Fig. 1). Another surprising issue, which extends beyond OD, is that validated models are not cited as often as we would expect them to be (Chattoe-Brown, 2022; Keijzer, 2022).

Some may argue that this could be explained by a general lack of people interested in the empirical side of opinion dynamics. However, the world seems in desperate need of empirically grounded OD models that could help us shape policies on topics such as vaccination and climate change. Thus, it is very surprising that almost nobody is interested in meeting such a big and pressing demand.

In this short piece, I will share my experience both as a writer and as a reviewer of empirical OD papers, as well as the information I have gathered from discussions with other researchers in similar roles. This will help us understand much better what is going on in the world of empirical OD and, more generally, in the empirical parts of agent-based modelling (ABM) related to psychological phenomena.

Figure 1. Publications containing the term “opinion dynamics” in abstract or title (2,527 in total). Obtained from dimensions.ai.

Theoretical versus empirical OD

The main issue I have noticed with works in empirical OD is that these papers do not conform to the standard framework of ABM papers. Indeed, in “classical” ABM we usually try to address research questions like:

  1. Can we develop a toy model to show how variables X and Y are linked?
  2. Can we explain some macroscopic phenomenon as the result of agents’ interaction?
  3. What happens to the outputs of a popular model if we add a new variable?

However, empirical papers do not fit into this framework. Indeed, empirical ABM papers ask questions such as:

  1. How accurate are the predictions made by a certain model when compared with data?
  2. How close is the micro-dynamic to the experimental data?
  3. How can we refine previous models to improve their predicting ability?

Unfortunately, many reviewers do not view the latter questions as genuine research inquiries and end up pushing authors to modify their papers to fit the first set of questions.

For instance, my empirical works often receive the critique that “the research question is not clear”, even though the question was explicitly stated in the main text, the abstract, and even the title (see, for example, “Deriving An Opinion Dynamics Model From Experimental Data”, Carpentras et al. 2022). Similarly, a reviewer once acknowledged that the experiment presented in the paper was an interesting addition, but requested that I demonstrate why it was useful. Notice that, also in this case, the paper was about developing a model from the dynamical behavior observed in an experiment; the experiment was therefore not just an add-on but the core of the paper. I have also reviewed empirical OD papers in which the authors were asked, by other reviewers, to showcase how their model informs us about the world in a novel way.

As we will see in a moment, this approach does not just make authors' lives harder; it also generates a cascade of consequences for the entire field of opinion dynamics. But to better understand our world, let us first move to a fictitious scenario.

A quick tale of natural selection of researcher

Let us now imagine a hypothetical world where people have almost no knowledge of the principles of physics. However, to keep the thought experiment simple, let us also suppose they have already developed the peer-review process. Of course, this fictitious scenario is far from realistic, but it should still help us understand what is going on with empirical OD.

In this world, a scientist named Alice writes a paper suggesting that there is an upward force when objects enter water. She also shows that many objects can float on water, therefore “validating” her model. The community is excited about this new paper which took Alice 6 months to write.

Now, consider another scientist named Bob. Bob, inspired by Alice’s paper, in 6 months conducts a series of experiments demonstrating that when an object is submerged in water, it experiences an upward force that is proportional to its submerged volume. This pushes knowledge forward as Bob does not just claim that this force exists, but he shows how this force has some clear quantitative relationship to the volume of the object.

However, when reviewers read Bob’s work, they are unimpressed. They question the novelty of his research and fail to see the specific research question he is attempting to address. After all, Alice already showed that this force exists, so what is new in this paper? One of the reviewers suggests that Bob should show how his study may impact their understanding of the world.

As a result, Bob spends an additional six months to demonstrate that he could technically design a floating object made out of metal (i.e. a ship). He also describes the advantages for society if such an object was invented. Unfortunately, one of the reviewers is extremely skeptical as metal is known to be extremely heavy and should not float in water, and requests additional proof.

After multiple revisions, Bob’s work is eventually published. However, the publication process takes significantly longer than Alice’s work, and the final version of the paper addresses a variety of points, including empirical validation, the feasibility of constructing a metal boat, and evidence to support this claim. Consequently, the paper becomes densely technical, making it challenging for most people to read and understand.

In the end, Bob is left with a single, hardly readable (and therefore hardly citable) paper, while Alice has meanwhile published many other, easier-to-read papers with a much bigger impact.

Solving the mystery of empirical opinion dynamics

The previous sections helped us understand the following points: (1) validation and empirical grounding are often not seen as legitimate research goals by many members of the ABM community; (2) this leads to a bigger struggle when trying to publish this kind of research; (3) reviewers often try to push papers towards the more classical research questions, possibly resulting in a monster-paper that tries to address multiple points all at once; and (4) this also means lower readability and thus less impact.

So, to sum it up: empirical OD gives you the privilege of working much more to obtain way less. This, combined with the “natural selection” of “publish or perish”, explains the scarcity of publications in this field, as authors need either to adapt to more standard ABM formulas or to “perish”. I also personally know an ex-researcher who tried to publish empirical OD until he got fed up and left the field.

Some clarifications

Let me make clear that this is a bit of a simplification and that it is, of course, definitely possible to publish empirical work in opinion dynamics without “perishing”. However, choosing this route instead of the traditional ABM approach strongly increases the difficulty. It is a little like running while carrying extra weight: it is still possible that you will win the race, but the weight strongly decreases the probability of this happening.

I also want to say that while here I am offering an explanation of the puzzles I presented, I do not claim that this is the only possible explanation. Indeed, I am sure that what I am offering here is only part of the full story.

Finally, I want to clarify that I do not believe anyone in the system has bad intentions. Indeed, I think reviewers are in good faith when suggesting empirically-oriented papers to take a more classical approach. However, even with good intentions, we are creating a lot of useless obstacles for an entire research field.

Trying to solve the problem

To address this issue, I have previously suggested dividing ABM researchers into theoretically and empirically oriented (Carpentras, 2020). The division of research into two streams could help us develop better standards both for toy models and for empirical ABMs.

To give you a practical example, my empirical ABM works usually receive long and detailed comments about the model properties and almost no comments on the nature of the experiment or the data analysis. Am I that good at these last two steps? Or do reviewers in ABM focus very little on the empirical side of empirical ABMs? While the first explanation would be flattering, I am afraid the reality is better depicted by the second option.

With this in mind, together with other members of the community, we have created a special interest group for Experimental ABM (see http://www.essa.eu.org/sig/sig-experimental-abm/). However, for this to be successful, we really need people to recognize the distinction between these two fields. We need to acknowledge that empirically-related research questions are still valid and not push papers towards the more classical approach.

I really believe empirical OD will rise, but how this will happen is still to be decided. Will it be at the cost of many researchers facing a bigger struggle, or will we develop a more fertile environment? Or will some researchers create an entirely new niche outside of the ABM community? The choice is up to us!

References

Carpentras, D., Maher, P. J., O’Reilly, C., & Quayle, M. (2022). Deriving An Opinion Dynamics Model From Experimental Data. Journal of Artificial Societies & Social Simulation, 25(4). https://www.jasss.org/25/4/4.html

Carpentras, D. (2020) Challenges and opportunities in expanding ABM to other fields: the example of psychology. Review of Artificial Societies and Social Simulation, 20th December 2021. https://rofasss.org/2021/12/20/challenges/

Castellano, C., Fortunato, S., & Loreto, V. (2009). Statistical physics of social dynamics. Reviews of Modern Physics, 81(2), 591. DOI: 10.1103/RevModPhys.81.591

Chattoe-Brown, E. (2022). If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1 Feb 2022. https://rofasss.org/2022/02/01/citing-od-models/

Dong, Y., Zhan, M., Kou, G., Ding, Z., & Liang, H. (2018). A survey on the fusion process in opinion dynamics. Information Fusion, 43, 57-65. DOI: 10.1016/j.inffus.2017.11.009

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4). https://www.jasss.org/20/4/2.html

Keijzer, M. (2022). If you want to be cited, calibrate your agent-based model: a reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation.  9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown

Valori, L., Picciolo, F., Allansdottir, A., & Garlaschelli, D. (2012). Reconciling long-term cultural diversity and short-term collective social behavior. Proceedings of the National Academy of Sciences, 109(4), 1068-1073. DOI: 10.1073/pnas.1109514109


Carpentras, D. (2023) Why we are failing at connecting opinion dynamics to the empirical world. Review of Artificial Societies and Social Simulation, 8 Mar 2023. https://rofasss.org/2023/03/08/od-emprics


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

If you want to be cited, calibrate your agent-based model: A Reply to Chattoe-Brown

By Marijn A. Keijzer

This is a reply to a previous comment, (Chattoe-Brown 2022).

The social simulation literature has called on its proponents to enhance the quality and realism of their contributions through systematic validation and calibration (Flache et al., 2017). Model validation typically refers to assessments of how well the predictions of their agent-based models (ABMs) map onto empirically observed patterns or relationships. Calibration, on the other hand, is the process of enhancing the realism of the model by parametrizing it based on empirical data (Boero & Squazzoni, 2005). We would expect that presenting a validated or calibrated model serves as a signal of model quality, and would thus be a desirable characteristic of a paper describing an ABM.

In a recent contribution to RofASSS, Edmund Chattoe-Brown provocatively argued that model validation does not bear fruit for researchers interested in boosting their citations. In a sample of articles from JASSS published on opinion dynamics he observed that “the sample clearly divides into non-validated research with more citations and validated research with fewer” (Chattoe-Brown, 2022). Well-aware of the bias and limitations of the sample at hand, Chattoe-Brown calls on refutation of his hypothesis. An analysis of the corpus of articles in Web of Science, presented here, could serve that goal.

To test whether there exists an effect of model calibration and/or validation on the citation counts of papers, I compare citation counts across a larger number of original research articles on agent-based models published in the literature. I extracted 11,807 entries from Web of Science by searching for items that contained the phrases “agent-based model”, “agent-based simulation” or “agent-based computational model” in their abstract.[1] I then labeled all items that mention “validate” in the abstract as validated ABMs and those that mention “calibrate” as calibrated ABMs. This measure is rather crude, of course, as abstracts containing phrases like “we calibrated our model” and “others should calibrate our model” are both labeled as calibrated models. However, if mentioning that future research should calibrate or validate the model is not related to citation counts (which I would argue it indeed is not), then this inaccuracy does not introduce bias.

The shares of entries that mention calibration or validation are somewhat small. Overall, just 5.62% of entries mention validation, 3.21% report a calibrated model and 0.65% fall in both categories. The large sample size, however, will still enable the execution of proper statistical analysis and hypothesis testing.

How are mentions of calibration and validation in the abstract related to citation counts at face value? Bivariate analyses show only minor differences, as revealed in Figure 1. In fact, the distribution of citations for validated and non-validated ABMs (panel A) is remarkably similar. Wilcoxon tests with continuity correction—the nonparametric version of the simple t test—corroborate their similarity (W = 3,749,512, p = 0.555). The differences in citations between calibrated and non-calibrated models appear, albeit still small, more pronounced. Calibrated ABMs are cited slightly more often (panel B), as also supported by a bivariate test (W = 1,910,772, p < 0.001).

Figure 1. Distributions of the number of citations of all entries in the dataset for validated (panel A) and calibrated (panel B) ABMs, and their averages with standard errors over years (panels C and D)

Age of the paper might be a more important determinant of citation counts, as panels C and D of Figure 1 suggest. Clearly, the age of a paper should matter here, because older papers have had much more opportunity to be cited. In particular, papers younger than 10 years seem not to have matured enough for their citation rates to catch up with those of older articles. When comparing the citation counts of purely theoretical models with calibrated and validated ones, this covariate should not be missed, because the latter two are typically much younger. In other words, a positive relationship between model calibration/validation and citation counts could be hidden in the bivariate analysis, as model calibration and validation are recent trends in ABM research.

I run a Poisson regression on the number of citations as explained by whether the model is validated and calibrated (simultaneously) and whether it is both. The age of the paper is taken into account, as well as the number of references the paper cites itself (controlling for reciprocity and literature embeddedness, one might say). Finally, the fields in which the papers were published, as registered by Web of Science, were added to account for potential differences between fields that explain both citation counts and conventions about model calibration and validation.

Table 1 presents the results from the four models with just the main effects of validation and calibration (model 1), the interaction of validation and calibration (model 2) and the full model with control variables (model 3).

Table 1. Poisson regression on the number of citations

                                       # Citations
                        (1)            (2)            (3)
Validated               -0.217***      -0.298***      -0.094***
                        (0.012)        (0.014)        (0.014)
Calibrated               0.171***       0.064***       0.076***
                        (0.014)        (0.016)        (0.016)
Validated x Calibrated                  0.575***       0.244***
                                       (0.034)        (0.034)
Age                                                    0.154***
                                                      (0.0005)
Cited references                                       0.013***
                                                      (0.0001)
Field included          No             No             Yes
Constant                 2.553***       2.556***       0.337**
                        (0.003)        (0.003)        (0.164)
Observations            11,807         11,807         11,807
AIC                     451,560        451,291        301,639

Note: *p<0.1; **p<0.05; ***p<0.01

The results from the analyses clearly suggest a negative effect of model validation and a positive effect of model calibration on the likelihood of being cited. The hypothesis that was so “badly in need of refutation” (Chattoe-Brown, 2022) will remain unrefuted for now. The effect turns positive, however, when the abstract mentions calibration as well. In both the controlled (model 3) and uncontrolled (model 2) analyses, combining the effects of validation and calibration yields a positive overall coefficient.[2]

The controls in model 3 substantially affect the estimates of the three main factors of interest, while themselves remaining in the expected directions. The age of a paper indeed helps its citation count, and so does the number of papers the item cites itself. The fields, furthermore, also take away somewhat from the main effects, but not to a problematic degree. In an additional analysis, I looked at the relationship between the fields and whether they are more likely to publish calibrated or validated models, and found no substantial relationships. Citation counts do differ between fields, however. The papers in our sample are cited more often in, for example, hematology, emergency medicine and thermodynamics. The ABMs in the sample coming from toxicology, dermatology and religion are on the unlucky side of the equation, receiving fewer citations on average. Finally, I also looked at papers published in JASSS specifically, given the interest of Chattoe-Brown and the nature of this outlet. Surprisingly, the same analyses run on the subsample of these papers (N=376) showed a negative relationship between citation counts and model calibration/validation. Does the JASSS readership reveal its taste for artificial societies?

In sum, I find support for the hypothesis of Chattoe-Brown (2022) on the negative relationship between model validation and citation counts for papers presenting ABMs. If you want to be cited, you should not validate your ABM. Calibrated ABMs, on the other hand, are more likely to receive citations. What is more, ABMs that were both calibrated and validated are the most successful papers in the sample. All conclusions were drawn controlling for the age of the paper, the number of references the paper cites itself, and (citation conventions in) the field in which it was published.

While the patterns explored in this and Chattoe-Brown’s recent contribution are interesting, or even puzzling, they should not distract from the goal of moving towards realistic agent-based simulations of social systems. In my opinion, models that combine rigorous theory with strong empirical foundations are instrumental to the creation of meaningful and purposeful agent-based models. Perhaps the results presented here should just be taken as another sign that citation counts are a weak signal of academic merit at best.

Data, code and supplementary analyses

All data and code used for this analysis, as well as the results from the supplementary analyses described in the text, are available here: https://osf.io/x9r7j/

Notes

[1] Note that the hyphen between “agent” and “based” does not affect the retrieved corpus. Both contributions that mention “agent based” and “agent-based” were retrieved.

[2] A small caveat to the analysis of the interaction effect is that the marginal improvement of model 2 upon model 1 is rather small (AIC difference of 269). This is likely (partially) due to the small number of papers that mention both calibration and validation (N=77).

Acknowledgements

Marijn Keijzer acknowledges IAST funding from the French National Research Agency (ANR) under the Investments for the Future (Investissements d’Avenir) program, grant ANR-17-EURE-0010.

References

Boero, R., & Squazzoni, F. (2005). Does empirical embeddedness matter? Methodological issues on agent-based models for analytical social science. Journal of Artificial Societies and Social Simulation, 8(4), 1–31. https://www.jasss.org/8/4/6.html

Chattoe-Brown, E. (2022) If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1st Feb 2022. https://rofasss.org/2022/02/01/citing-od-models

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4). https://doi.org/10.18564/jasss.3521


Keijzer, M. (2022) If you want to be cited, calibrate your agent-based model: Reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation, 9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation

By Edmund Chattoe-Brown

As part of a previous research project, I collected a sample of the Opinion Dynamics (hereafter OD) models published in JASSS that were most highly cited in JASSS. The idea was to understand what styles of OD research were most influential in the journal. In the top 50 on 19.10.21 there were eight such articles. Five were self-contained modelling exercises (Hegselmann and Krause 2002, 58 citations; Deffuant et al. 2002, 35 citations; Salzarulo 2006, 13 citations; Deffuant 2006, 13 citations; and Urbig et al. 2008, 9 citations), two were overviews of OD modelling (Flache et al. 2017, 13 citations and Sobkowicz 2009, 10 citations) and one included an OD example in an article mainly discussing the merits of cellular automata modelling (Hegselmann and Flache 1998, 12 citations). In order to get into the top 50 on that date, an article had to achieve at least 7 citations.

In parallel, I have been trying to identify Agent-Based Models that are validated (that undergo direct comparison of real and equivalent simulated data). Based on an earlier bibliography (Chattoe-Brown 2020), which I extended to the end of 2021 for JASSS and for articles described as validated in the highly cited articles listed above, I managed to construct a small and unsystematic sample of validated OD models. (Part of the problem with a systematic sample is that validated models are not readily searchable as a distinct category and there are too many OD models overall to make reading them all feasible. Also, I suspect, validated models simply remain rare, in line with the larger scale findings of Dutton and Starbuck (1971, p. 130, table 1) and, discouragingly and much more recently, of Angus and Hassani-Mahmooei (2015, section 4.5, figure 9).)

Obviously, since part of the sample was selected by total number of citations, one cannot make a comparison on that basis, so instead I have used the best possible alternative (given the limitations of the sample) and compared articles on citations per year. The problem here is that attempting validated modelling is relatively new, while older articles inevitably accumulate citations however slowly. But what I was trying to discover was whether new validated models could be cited at a much higher annual rate without reaching the top 50 (or whether, conversely, older articles could have high enough total citations to get into the top 50 without a particularly impressive annual citation rate). One would hope that, ultimately, validated models would tend to receive more citations than those that were not validated (but see the rather disconcerting related findings of Serra-Garcia and Gneezy 2021). Table 1 shows the results sorted by citations per year.

Article                        Status         JASSS Citations[1]  Years[2]  Citations Per Year
Bernardes et al. 2002          Validated       1                  20        0.05
Bernardes et al. 2001          Validated       2                  21        0.096
Fortunato and Castellano 2007  Validated       2                  15        0.13
Caruso and Castorina 2005      Validated       4                  17        0.24
Chattoe-Brown 2014             Validated       2                   8        0.25
Brousmiche et al. 2016         Validated       2                   6        0.33
Hegselmann and Flache 1998     Non-Validated  12                  24        0.5
Urbig et al. 2008              Non-Validated   9                  14        0.64
Sobkowicz 2009                 Non-Validated  10                  13        0.77
Deffuant 2006                  Non-Validated  13                  16        0.81
Salzarulo 2006                 Non-Validated  13                  16        0.81
Duggins 2017                   Validated       5                   5        1
Deffuant et al. 2002           Non-Validated  35                  20        1.75
Flache et al. 2017             Non-Validated  13                   5        2.6
Hegselmann and Krause 2002     Non-Validated  58                  20        2.9

Table 1. Annual Citation Rates for OD Articles Highly Cited in JASSS (Systematic Sample) and Validated OD Articles in or Cited in JASSS (Unsystematic Sample)

With the notable (and potentially encouraging) exception of Duggins (2017), the most recent validated OD model I have been able to discover in JASSS, the sample clearly divides into non-validated research with more citations and validated research with fewer. The position of Duggins (2017) might suggest greater recent interest in validated OD models. Unfortunately, however, qualitative analysis of the citations suggests that these are not cited as validated models per se (and thus as a potential improvement over non-validated models) but merely as part of general classes of OD model (like those involving social networks or repulsion – moving away from highly discrepant opinions). This tendency to cite validated models without acknowledging that they are validated (and what the implications of that might be) is widespread in the articles I looked at.

Obviously, there is plenty wrong with this analysis. Even looking at citations per annum we are arguably still partially sampling on the dependent variable (articles selected for being widely cited prove to be widely cited!) and the sample of validated OD models is unsystematic (though in fairness the challenges of producing a systematic sample are significant.[3]) But the aim here is to make a distinctive use of RoFASSS as a rapid mode of permanent publication and to think differently about science. If I tried to publish this in a peer reviewed journal, the amount of labour required to satisfy reviewers about the research design would probably be prohibitive (even if it were possible). As a result, the case to answer about this apparent (and perhaps undesirable) pattern in data might never see the light of day.

But by publishing quickly in RoFASSS without the filter of peer review I actively want my hypothesis to be rejected or replaced by research based on a better design (and such research may be motivated precisely by my presenting this interesting pattern with all its imperfections). When it comes to scientific progress, the chance to be clearly wrong now could be more useful than the opportunity to be vaguely right at some unknown point in the future.

Acknowledgements

This analysis was funded by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1) funded by ESRC via ORA Round 5 (PI: Professor Bruce Edmonds, Centre for Policy Modelling, Manchester Metropolitan University: https://gtr.ukri.org/projects?ref=ES%2FS015159%2F1).

Notes

[1] Note that the validated OD models had their citations counted manually while the high total citation articles had them counted automatically. This may introduce some comparison error but there is no reason to think that either count will be terribly inaccurate.

[2] Including the year of publication and the current year (2021).

[3] Note, however, that there are some checks and balances on sample quality. Highly successful validated OD models would have shown up independently in the top 50. There is thus an upper bound to the impact of the articles I might have missed in manually constructing my “version 1” bibliography. The unsystematic review of 47 articles by Sobkowicz (2009) also checks independently on the absence of validated OD models in JASSS to that date and confirms the rarity of such articles generally. Only four of the articles that he surveys are significantly empirical.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16, <http://jasss.soc.surrey.ac.uk/18/4/16.html>. doi:10.18564/jasss.2952

Bernardes, A. T., Costa, U. M. S., Araujo, A. D. and Stauffer, D. (2001) ‘Damage Spreading, Coarsening Dynamics and Distribution of Political Votes in Sznajd Model on Square Lattice’, International Journal of Modern Physics C: Computational Physics and Physical Computation, 12(2), February, pp. 159-168. doi:10.1142/S0129183101001584

Bernardes, A. T., Stauffer, D. and Kertész, J. (2002) ‘Election Results and the Sznajd Model on Barabasi Network’, The European Physical Journal B: Condensed Matter and Complex Systems, 25(1), January, pp. 123-127. doi:10.1140/e10051-002-0013-y

Brousmiche, Kei-Leo, Kant, Jean-Daniel, Sabouret, Nicolas and Prenot-Guinard, François (2016) ‘From Beliefs to Attitudes: Polias, A Model of Attitude Dynamics Based on Cognitive Modelling and Field Data’, Journal of Artificial Societies and Social Simulation, 19(4), October, article 2, <https://www.jasss.org/19/4/2.html>. doi:10.18564/jasss.3161

Caruso, Filippo and Castorina, Paolo (2005) ‘Opinion Dynamics and Decision of Vote in Bipolar Political Systems’, arXiv > Physics > Physics and Society, 26 March, version 2. doi:10.1142/S0129183105008059

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1), February, article 16, <https://www.socresonline.org.uk/19/1/16.html>. doi:10.5153/sro.3315

Chattoe-Brown Edmund (2020) ‘A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation: Version 1’, CPM Report CPM-20-216, 12 June, <http://cfpm.org/discussionpapers/256>

Deffuant, Guillaume (2006) ‘Comparing Extremism Propagation Patterns in Continuous Opinion Models’, Journal of Artificial Societies and Social Simulation, 9(3), June, article 8, <https://www.jasss.org/9/3/8.html>.

Deffuant, Guillaume, Amblard, Frédéric, Weisbuch, Gérard and Faure, Thierry (2002) ‘How Can Extremism Prevail? A Study Based on the Relative Agreement Interaction Model’, Journal of Artificial Societies and Social Simulation, 5(4), October, article 1, <https://www.jasss.org/5/4/1.html>.

Duggins, Peter (2017) ‘A Psychologically-Motivated Model of Opinion Change with Applications to American Politics’, Journal of Artificial Societies and Social Simulation, 20(1), January, article 13, <http://jasss.soc.surrey.ac.uk/20/1/13.html>. doi:10.18564/jasss.3316

Dutton, John M. and Starbuck, William H. (1971) ‘Computer Simulation Models of Human Behavior: A History of an Intellectual Technology’, IEEE Transactions on Systems, Man, and Cybernetics, SMC-1(2), April, pp. 128-171. doi:10.1109/TSMC.1971.4308269

Flache, Andreas, Mäs, Michael, Feliciani, Thomas, Chattoe-Brown, Edmund, Deffuant, Guillaume, Huet, Sylvie and Lorenz, Jan (2017) ‘Models of Social Influence: Towards the Next Frontiers’, Journal of Artificial Societies and Social Simulation, 20(4), October, article 2, <http://jasss.soc.surrey.ac.uk/20/4/2.html>. doi:10.18564/jasss.3521

Fortunato, Santo and Castellano, Claudio (2007) ‘Scaling and Universality in Proportional Elections’, Physical Review Letters, 99(13), 28 September, article 138701. doi:10.1103/PhysRevLett.99.138701

Hegselmann, Rainer and Flache, Andreas (1998) ‘Understanding Complex Social Dynamics: A Plea For Cellular Automata Based Modelling’, Journal of Artificial Societies and Social Simulation, 1(3), June, article 1, <https://www.jasss.org/1/3/1.html>.

Hegselmann, Rainer and Krause, Ulrich (2002) ‘Opinion Dynamics and Bounded Confidence: Models, Analysis, and Simulation’, Journal of Artificial Societies and Social Simulation, 5(3), June, article 2, <http://jasss.soc.surrey.ac.uk/5/3/2.html>.

Salzarulo, Laurent (2006) ‘A Continuous Opinion Dynamics Model Based on the Principle of Meta-Contrast’, Journal of Artificial Societies and Social Simulation, 9(1), January, article 13, <http://jasss.soc.surrey.ac.uk/9/1/13.html>.

Serra-Garcia, Marta and Gneezy, Uri (2021) ‘Nonreplicable Publications are Cited More Than Replicable Ones’, Science Advances, 7(21), 21 May, article eabd1705. doi:10.1126/sciadv.abd1705

Sobkowicz, Pawel (2009) ‘Modelling Opinion Formation with Physics Tools: Call for Closer Link with Reality’, Journal of Artificial Societies and Social Simulation, 12(1), January, article 11, <http://jasss.soc.surrey.ac.uk/12/1/11.html>.

Urbig, Diemo, Lorenz, Jan and Herzberg, Heiko (2008) ‘Opinion Dynamics: The Effect of the Number of Peers Met at Once’, Journal of Artificial Societies and Social Simulation, 11(2), March, article 4, <http://jasss.soc.surrey.ac.uk/11/2/4.html>.


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974)

By Edmund Chattoe-Brown

As this is a new venture, we need to establish conventions. Since JASSS has been running since 1998 (twenty years!), it is reasonable to argue that something un-cited in JASSS throughout that period has effectively been forgotten by the ABM community. This contribution by Grémy is actually a single chapter in a book otherwise by Boudon (a bibliographical oddity that may have contributed to its neglect; Grémy also appears to have published mostly in French, which may also have had an effect; an English summary of his contribution to simulation might be another useful item for RofASSS). Boudon gets 6 hits on the JASSS search engine (as of 31.05.18), none of which mention simulation, and Gremy gets no hits (as does Grémy: unfortunately it is hard to tell how online search engines “cope with” accents and thus whether this is a “real” result).

Since this book is still readily available as a mass-market paperback, I will not reprise the argument of the simulation here (and its limitations relative to existing ABM methodology could be a future RofASSS contribution). Nonetheless, even approximately empirical modelling in the mid-seventies is worthy of note, and the chapter was early in making other important points (for example, that simulation can avoid “technical assumptions” made for solubility rather than realism).

The point of this contribution is to draw attention to an argument that I have only heard twice (and only found once in print), namely that we should look at the form of real data as an initial justification for using ABM at all (please correct me if there are earlier or better examples). Grémy (1974, p. 210) makes the point that initial incongruities between the attitudes that people hold (altruistic versus selfish) and their career choices (counsellor versus corporate raider) can be resolved in either direction as time passes, as well as remaining unresolved. (He knows this because Boudon analysed data collected by Rosenberg from US university students at two points in time.) As such, the data cannot readily be explained by some sort of “statistical trend” (that people become more selfish as they get older or more altruistic as they become more educated). He thus hypothesises (reasonably, it seems to me) that the data requires a model of some sort of dynamic interaction process, which Grémy then simulates, paying some attention to the survey results both in constraining the model and in analysing its behaviour.

This seems to me an important methodological practice to rescue from neglect. (It is widely recognised anecdotally that people tend to use the research methods they know and like rather than the ones that are suitable.) Elsewhere (Chattoe-Brown 2014), inspired by this argument, I have shown how even casually accessed attitude change data looks nothing like the output of the (very popular) Zaller-Deffuant model of opinion change (very roughly, 228 hits in JASSS for Deffuant, 8 for Zaller and 9 for Zaller-Deffuant, though hyphens sometimes produce unreliable results for online search engines too).

The attitude of the ABM community to data seems to be rather uncomfortable. Perhaps support in theory and neglect in practice would sum it up (Angus and Hassani-Mahmooei 2015, Table 5 in section 4.5). But if our models can’t even “pass first base” with existing real data (let alone be calibrated and validated), should we be too surprised if what seems plausible to us does not seem plausible to social scientists in substantive domains (and thus diminishes their interest in ABM as a “real method”)? Even if others in the ABM community disagree with my emphasis on data (and I know that they do), I think this is a matter that should be properly debated rather than just left floating about in coffee rooms (and this is what we intend RofASSS to facilitate). As W. C. Fields is reputed to have said (though actually the phrase appears to have been common currency), we would wish to avoid ABM being just “Another good story ruined by an eyewitness”.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4):16.

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1):16.

Grémy, Jean-Paul (1974) ‘Simulation Techniques’, in Boudon, Raymond, The Logic of Sociological Explanation (Harmondsworth: Penguin), chapter 11, pp. 209-227.


Chattoe-Brown, E. (2018) A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974). Review of Artificial Societies and Social Simulation, 1st June 2018. https://rofasss.org/2018/06/01/ecb/