
Is The Journal of Artificial Societies and Social Simulation Parochial? What Might That Mean? Why Might It Matter?

By Edmund Chattoe-Brown

Introduction

The Journal of Artificial Societies and Social Simulation (hereafter JASSS) retains a distinctive position amongst journals publishing articles on social simulation and Agent-Based Modelling. Many journals have published a few Agent-Based Models, and some have published quite a few, but it is hard to name any other journal that predominantly does this and has consistently done so over two decades. Using Web of Science on 25.07.22, there are 5540 hits containing the search term “agent-based model” anywhere in their text. JASSS does indeed have the most of any single journal with 268 hits (5% of the total to the nearest integer). The basic search returns about 200 distinct journals and about half of these have 10 hits or fewer. Since this search is arranged by hit count, the unlisted journals have even fewer hits than those listed, i.e. fewer than 7 per journal. This supports the claim that the great majority of journals have very limited engagement with Agent-Based Modelling. Note that the point here is to evidence tendencies effectively and not to claim that this specific search term tells us the precise relative frequency of articles on the subject of Agent-Based Modelling in different journals.

This being so, it seems reasonable – and desirable for other practical reasons like being entirely open access, online and readily searchable – to use JASSS as a sample – though clearly not necessarily a representative sample – of what may be happening in Agent-Based Modelling more generally. This is the case study approach (Yin 2009) where smaller samples may be practically unavoidable to discuss richer or more complex phenomena like the actual structures of arguments rather than something quantitative like, say, the number of sources cited by each article.

This piece is motivated by the scepticism that some reviewers have displayed about such a case study approach focused on JASSS and the conclusions drawn from it. It is actually quite strange to have the editors and reviewers of a journal argue against its ability to tell us anything useful about wider Agent-Based Modelling research even as a starting point (particularly since this approach has been used in articles previously published in the journal, see, for example, Meyer et al. 2009 and Hauke et al. 2017). Of course, it is a given that different journals have unique editorial policies, distinct reviewer pools and so on. This may mean, for example, that journals only irregularly publishing Agent-Based Models are actually less typical, because it is more arbitrary who reviews for them and there may therefore be less reviewing skill and consensus about the value of the articles involved. Anecdotally, I have found this to be true in medical journals, where excellent articles rub shoulders with much more problematic ones in a small overall pool. The point of my argument is not to claim that JASSS can really stand in for ABM research as a whole – which it plainly cannot – but that, if the case study approach is to be accepted at all, JASSS is one of the few journals that qualifies for it on empirically justifiable grounds. Conversely, given the potentially distinctive character of journals and the wide spread of Agent-Based Modelling, attempts at representative sampling may be very challenging in resource terms.

Method and Results

Again, using Web of Science on 04.07.22, I searched for the most highly cited articles containing the string “opinion dynamics”. I am well aware that this will not capture all articles that actually have opinion dynamics as their subject matter but this is not the intention. The intention is to describe a reproducible and measurable procedure correlated with the importance of articles so my results can be checked, criticised and extended. Comparing results based on other search terms would be part of that process. Then I took the first ten distinct journals that could be identified from this set of articles in order of citation count. The idea here was to see what journals had published the most important articles in the field overall – at least as identified by this particular search term – and then follow up their coverage of opinion dynamics generally. In addition, for each journal, I accessed the top 50 most cited articles and then checked how many articles containing the string “opinion dynamics” featured in that top 50. The idea here was to assess the extent to which opinion dynamics articles were important to the impact of a particular journal. Table 1 shows the results of this analysis.

Journal Title | “opinion dynamics” Articles in Top 50 Most Cited | Citations of Most Highly Cited “opinion dynamics” Article | Number of Articles Containing the String “opinion dynamics”
Reviews of Modern Physics | 0 | 2380 | 1
JASSS | 6 | 1616 | 64
International Journal of Modern Physics C | 4 | 376 | 72
Dynamic Games and Applications | 1 | 338 | 5
Physical Review Letters | 0 | 325 | 5
Global Challenges | 1 | 272 | 1
IEEE Transactions on Automatic Control | 0 | 269 | 38
SIAM Review | 0 | 258 | 2
Central European Journal of Operations Research | 1 | 241 | 1
Physica A: Statistical Mechanics and Its Applications | 0 | 231 | 143

Table 1. The Coverage, Commitment and Importance of Different Journals in Regard to “opinion dynamics”: Top Ten by Citation Count of Most Influential Article.
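Since the procedure above is intended to be reproducible, it may help to sketch the three measures in code. The following Python sketch assumes a hypothetical flat export of search results; the record fields ("journal", "citations", "mentions_od") are illustrative and not Web of Science's actual schema.

```python
from collections import defaultdict

def journal_metrics(records, top_n=50):
    """For each journal, return (commitment, importance, coverage):
    commitment - "opinion dynamics" articles among the journal's top_n most cited;
    importance - citations of its most cited "opinion dynamics" article;
    coverage   - total number of "opinion dynamics" articles."""
    by_journal = defaultdict(list)
    for rec in records:
        by_journal[rec["journal"]].append(rec)
    metrics = {}
    for journal, recs in by_journal.items():
        recs.sort(key=lambda r: r["citations"], reverse=True)
        od = [r for r in recs if r["mentions_od"]]
        top = recs[:top_n]
        metrics[journal] = (
            sum(1 for r in top if r["mentions_od"]),       # commitment
            max((r["citations"] for r in od), default=0),  # importance
            len(od),                                       # coverage
        )
    return metrics
```

Applied to an exported hit list, the tuple for each journal would correspond to one row of Table 1.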

This list attempts to provide two somewhat separate assessments of a journal with regard to “opinion dynamics”. The first is whether it has a substantial body of articles on the topic: Coverage. The second is whether, by the citation levels of the journal generally, “opinion dynamics” models are important to it: Commitment. These journals have been selected on a third dimension, their ability to contribute at least one very influential article to the literature as a whole: Importance.

The resulting patterns are interesting in several ways. Firstly, JASSS appears unique in this sample in being a clearly social science journal rather than a physical science journal or one dealing with instrumental problems like operations research or automatic control. A related question is how many “opinion dynamics” models in a physics journal will have been reviewed by social scientists, or at least by modellers with a social science orientation. This is part of a wider question about whether, for example, physics journals are mainly interested in these models as formal systems rather than as having likely application to real societies. Secondly, 3 journals out of 10 have only a single “opinion dynamics” article – and a further journal has only 2 – which are nonetheless extremely highly cited relative to such articles as a whole. It is unclear whether this “only one but what a one” pattern has any wider significance. It should also be noted that the most highly cited article in JASSS is four times more highly cited than the next most cited. Only 4 of these 10 journals could really be said to have a usable sample of such articles for case study analysis. Thirdly, only 2 journals out of 10 have a significant number of articles sufficiently important to appear in their top 50 most cited, and 5 journals have no “opinion dynamics” articles in their top 50 most cited at all. This makes the point that a journal can have good coverage of the topic, and contain at least one highly cited article, without “opinion dynamics” necessarily being a commitment of the journal.

Thus it seems that to be a journal contributing at least one influential article to the field as a whole, to have several articles that are amongst the most cited by that journal and to have a non-trivial number of articles overall is unusual. Only one other journal in the top 10 meets all three criteria (International Journal of Modern Physics C). This result is corroborated in Table 2, which carries out the same analysis for all additional journals containing at least one highly cited “opinion dynamics” article (with an arbitrary cut-off of at least 100 citations for that article). There prove to be fourteen such journals in addition to the ten above.

Journal Title | “opinion dynamics” Articles in Top 50 Most Cited | Citations of Most Highly Cited “opinion dynamics” Article | Number of Articles Containing the String “opinion dynamics”
Mathematics of Operations Research | 1 | 215 | 2
Information Sciences | 0 | 186 | 14
Physica D: Nonlinear Phenomena | 0 | 182 | 4
Journal of Complex Networks | 1 | 177 | 5
Annual Reviews in Control | 2 | 165 | 4
Information Fusion | 0 | 154 | 11
IEEE Transactions on Control of Network Systems | 3 | 151 | 12
Automatica | 0 | 141 | 32
Public Opinion Quarterly | 0 | 132 | 5
Physical Review E | 0 | 129 | 74
SIAM Journal on Control and Optimization | 0 | 127 | 13
Europhysics Letters | 0 | 116 | 3
Knowledge-Based Systems | 0 | 112 | 5
Scientific Reports | 0 | 111 | 26

Table 2. The Coverage, Commitment and Importance of Different Journals in Regard to “opinion dynamics”: All Remaining Distinct Journals Whose Most Highly Cited “opinion dynamics” Article Receives at Least 100 Citations.

Table 2 confirms the dominance of physical science journals, and those solving instrumental problems, over those evidently dealing with the social sciences (a few terms like complex networks are ambiguous in this regard, however). Further, it confirms the scarcity of journals that simultaneously contribute at least one influential article to the wider field, have a sensibly sized sample of articles on this topic – so that provisional but nonetheless empirical hypotheses might be derived from a case study – and have “opinion dynamics” articles in their top 50 most cited articles as a sign of the importance of the topic to the journal and its readers. To some extent, however, the latter confirmation is an unavoidable artefact of the sampling strategy. As the most cited article becomes less highly cited, the chance it will appear in the top 50 most cited for a particular journal will almost certainly fall unless the journal is very new or generally not highly cited.

As a third independent check, I again used Web of Science to identify all journals which had – somewhat arbitrarily – at least 30 articles on “opinion dynamics”, giving some sense of their contribution. Only two more journals (see Table 3) not already occurring in the two tables above were identified. Generally, this analysis considers only journal articles and not conference proceedings or book chapter serials, whose peer review status is less clear and comparable.

Journal Title | “opinion dynamics” Articles in Top 50 Most Cited | Citations of Most Highly Cited “opinion dynamics” Article | Number of Articles Containing the String “opinion dynamics”
Advances in Complex Systems | 5 | 54 | 42
Plos One | 0 | 53 | 32

Table 3. The Coverage, Commitment and Importance of Different Journals: All Journals with at Least 30 “opinion dynamics” hits not already listed in Tables 1 and 2.

This cross check shows that while the additional journals do have samples of articles large enough to form the basis for a case study, they either have not yet contributed a really influential article to the wider field (their most cited articles have less than half the citations of those in the journals qualifying for Tables 1 and 2), or do not have a high commitment to opinion dynamics – in terms of impact within the journal and among its readers – or both.

Before concluding this analysis, it is worth briefly reflecting on what these three criteria jointly tell us – though other criteria could also be used in further research. By sampling on highly cited articles we focus on journals that have managed to go beyond their core readership and influence the field as a whole. There is a danger that journals that have never done this are merely “talking to themselves” and may therefore form a less effective basis for a case study speaking to the field as a whole. By attending to the number of articles in the top 50 for the journal, we get a sense of whether the topic is central (or only peripheral) to that journal and its readership and, again, journals where the topic is central stand a chance of being better case studies than those where it is peripheral. The criterion of having enough articles is simply a practical one for conducting a meaningful case study. Researchers using different methods may disagree about how many instances you need to draw useful conclusions but there is general agreement that it is more than one!

Analysis and Conclusions

The present article was motivated by an attempt to evaluate the claim that JASSS may be parochial and therefore not constitute a suitable basis for provisional hypotheses generated by case study analysis of its articles. Although the argument presented here is clearly rough and ready – and could be improved on by subsequent researchers – it does not appear to support this claim. JASSS actually seems to be one of very few journals – arguably the only social science one – that simultaneously has made at least one really influential contribution to the wider field of opinion dynamics, has a large enough number of articles on the topic for plausible generalisation and has quite a few such articles in its top 50, which shows the importance of the topic to the journal and its wider readership. Unless one wishes to reject case study analysis altogether, there are – in fact – very few other journals on which it can effectively be done for this topic.

But actually, my main conclusion is a wider reflection on peer reviewing, sampling and scientific progress based on reviewer resistance to the case study approach. There are 1386 articles with the search term “opinion dynamics” in Web of Science as of 25.07.22. It is clearly not realistic for one article – or even one book – to analyse all that content, particularly qualitatively. This being so, we have to consider what is practical and adequate to generate hypotheses suitable for publication and further development of research along these lines. Case studies of single journals are not the only strategy but do have a recognised academic tradition in methodology (Brown 2008). We could sample randomly from the population of articles but I have never yet seen such a qualitative analysis based on sampling and it is not clear whether it would be any better received by potential reviewers. (In particular, with many journals each having only a few examples of Agent-Based Models, realistically low sampling rates would leave many journals unrepresented altogether, which would be a problem if they had distinctive approaches.) Most journals – including JASSS – have word limits and this restricts how much you can report. Qualitative analysis is more drawn-out than quantitative analysis, which limits this research style further in terms of practical sample sizes. Both reading whole articles for analysis and writing up the resulting conclusions take more resources of time and word count. As long as one does not claim that a qualitative analysis from JASSS can stand for all Agent-Based Modelling – but is merely a properly grounded hypothesis for further investigation – and shows one's working properly to support that further investigation, it isn't really clear why that shouldn't be sufficient for publication. Particularly as I have now shown that JASSS isn't notably parochial along several potentially relevant dimensions.
If a reviewer merely conjectures that your results won’t generalise, isn’t the burden of proof then on them to do the corresponding analysis and publish it? Otherwise the danger is that we are setting conjecture against actual evidence – however imperfect – and this runs the risk of slowing scientific progress by favouring research compatible with traditionally approved perspectives in publication. It might be useful to revisit the everyday idea of burden of proof in assessing the arguments of reviewers. What does it take in terms of evidence and argument (rather than simply power) for a comment by a reviewer to scientifically require an answer? It is a commonplace that a disproved hypothesis is more valuable to science than a mere conjecture or something that cannot be proven one way or another. One reason for this is that scientific procedure illustrates methodological possibility as well as generating actual results. A sample from JASSS may not stand for all research but it shows how a conclusion might ultimately be reached for all research if the resources were available and the administrative constraints of academic publishing could be overcome.

As I have argued previously (Chattoe-Brown 2022), and has now been pleasingly illustrated (Keijzer 2022), this situation may create an important and distinctive role for RofASSS. It may be valuable to get hypotheses, particularly ones that potentially go against the prevailing wisdom, “out there” so they can subsequently be tested more rigorously rather than having to wait until the framer of the hypothesis can meet what may be a counsel of perfection from peer reviewers. Another issue with reviewing is a tendency to say what will not do rather than what will do. This rather puts the author at the mercy of reviewers during the revision process. RofASSS can also be used to hive off “contextual” analyses – like this one regarding what it might mean for a journal to be parochial – so that they can be developed in outline for the general benefit of the Agent-Based Modelling community, rather than having to add length to specific articles depending on the tastes of particular reviewers.

Finally, as should be obvious, I have only suggested that JASSS is not parochial in regard to articles involving the string “opinion dynamics”. However, I have also illustrated how this kind of analysis could be done systematically for different topics to justify the claim that a particular journal can serve as a reasonable basis for a case study.

Acknowledgements

This analysis was supported by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1), funded by the ESRC via ORA Round 5.

References

Brown, Patricia Anne (2008) ‘A Review of the Literature on Case Study Research’, Canadian Journal for New Scholars in Education/Revue Canadienne des Jeunes Chercheures et Chercheurs en Éducation, 1(1), July, pp. 1-13, https://journalhosting.ucalgary.ca/index.php/cjnse/article/view/30395.

Chattoe-Brown, E. (2022) ‘If You Want to Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly in Need of Refutation’, Review of Artificial Societies and Social Simulation, 1st Feb 2022. https://rofasss.org/2022/02/01/citing-od-models

Hauke, Jonas, Lorscheid, Iris and Meyer, Matthias (2017) ‘Recent Development of Social Simulation as Reflected in JASSS Between 2008 and 2014: A Citation and Co-Citation Analysis’, Journal of Artificial Societies and Social Simulation, 20(1), 5. https://www.jasss.org/20/1/5.html. doi:10.18564/jasss.3238

Keijzer, M. (2022) ‘If You Want to be Cited, Calibrate Your Agent-Based Model: Reply to Chattoe-Brown’, Review of Artificial Societies and Social Simulation, 9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown

Meyer, Matthias, Lorscheid, Iris and Troitzsch, Klaus G. (2009) ‘The Development of Social Simulation as Reflected in the First Ten Years of JASSS: A Citation and Co-Citation Analysis’, Journal of Artificial Societies and Social Simulation, 12(4), 12. https://www.jasss.org/12/4/12.html.

Yin, R. K. (2009) Case Study Research: Design and Methods, fourth edition (Thousand Oaks, CA: Sage).


Chattoe-Brown, E. (2022) Is The Journal of Artificial Societies and Social Simulation Parochial? What Might That Mean? Why Might It Matter? Review of Artificial Societies and Social Simulation, 10th Sept 2022. https://rofasss.org/2022/09/10/is-the-journal-of-artificial-societies-and-social-simulation-parochial-what-might-that-mean-why-might-it-matter/



If you want to be cited, calibrate your agent-based model: A Reply to Chattoe-Brown

By Marijn A. Keijzer

This is a reply to a previous comment (Chattoe-Brown 2022).

The social simulation literature has called on its proponents to enhance the quality and realism of their contributions through systematic validation and calibration (Flache et al., 2017). Model validation typically refers to assessing how well the predictions of an agent-based model (ABM) map onto empirically observed patterns or relationships. Calibration, on the other hand, is the process of enhancing the realism of the model by parametrizing it based on empirical data (Boero & Squazzoni, 2005). We would expect that presenting a validated or calibrated model serves as a signal of model quality, and would thus be a desirable characteristic of a paper describing an ABM.

In a recent contribution to RofASSS, Edmund Chattoe-Brown provocatively argued that model validation does not bear fruit for researchers interested in boosting their citations. In a sample of opinion dynamics articles published in JASSS he observed that “the sample clearly divides into non-validated research with more citations and validated research with fewer” (Chattoe-Brown, 2022). Well aware of the bias and limitations of the sample at hand, Chattoe-Brown calls for refutation of his hypothesis. An analysis of the corpus of articles in Web of Science, presented here, could serve that goal.

To test whether there exists an effect of model calibration and/or validation on the citation counts of papers, I compare citation counts of a larger number of original research articles on agent-based models published in the literature. I extracted 11,807 entries from Web of Science by searching for items that contained the phrases “agent-based model”, “agent-based simulation” or “agent-based computational model” in their abstracts.[1] I then labeled all items that mention “validate” in the abstract as validated ABMs and those that mention “calibrate” as calibrated ABMs. This measure is rather crude, of course, as descriptions containing phrases like “we calibrated our model” and “others should calibrate our model” are both labeled as calibrated models. However, if mentioning that future research should calibrate or validate the model is not related to citation counts (which I would argue it indeed is not), then this inaccuracy does not introduce bias.
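As a minimal sketch, the labelling rule just described might look as follows in Python (the function name is mine; note that plain substring matching catches inflected forms such as “validated” but not, for example, “validation”):

```python
def label_abstract(abstract):
    """Crude labelling rule: an abstract mentioning "validate" marks a
    validated ABM, one mentioning "calibrate" a calibrated ABM. "We
    calibrated our model" and "others should calibrate our model" are
    deliberately both labelled as calibrated."""
    text = abstract.lower()
    return {
        "validated": "validate" in text,
        "calibrated": "calibrate" in text,
    }
```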

The shares of entries that mention calibration or validation are somewhat small. Overall, just 5.62% of entries mention validation, 3.21% report a calibrated model and 0.65% fall in both categories. The large sample size, however, will still enable the execution of proper statistical analysis and hypothesis testing.

How are mentions of calibration and validation in the abstract related to citation counts at face value? Bivariate analyses show only minor differences, as revealed in Figure 1. In fact, the distribution of citations for validated and non-validated ABMs (panel A) is remarkably similar. Wilcoxon tests with continuity correction—the nonparametric version of the simple t test—corroborate their similarity (W = 3,749,512, p = 0.555). The differences in citations between calibrated and non-calibrated models appear, albeit still small, more pronounced. Calibrated ABMs are cited slightly more often (panel B), as also supported by a bivariate test (W = 1,910,772, p < 0.001).
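For readers wanting to reproduce the bivariate comparison, here is a standard-library sketch of the rank-sum test mentioned above. In practice one would use R's wilcox.test or scipy.stats.mannwhitneyu; this version uses the normal approximation with continuity correction and, for brevity, omits the variance correction for ties.

```python
from statistics import NormalDist

def mann_whitney_u(xs, ys):
    """Two-sided Mann-Whitney U (Wilcoxon rank-sum) test using the normal
    approximation with continuity correction; ties get average ranks."""
    n1, n2 = len(xs), len(ys)
    pooled = sorted(list(xs) + list(ys))
    ranks = {}  # value -> average rank (1-based)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    u1 = sum(ranks[v] for v in xs) - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5  # no tie correction
    z = max(0.0, abs(u1 - mu) - 0.5) / sigma       # 0.5 = continuity correction
    return u1, 2 * (1 - NormalDist().cdf(z))
```

Comparing the citation distributions of validated and non-validated entries in this way would give the kind of test statistic and p-value reported above.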


Figure 1. Distributions of number of citations of all the entries in the dataset for validated (panel A) and calibrated (panel B) ABMs and their averages with standard errors over years (panels C and D)

Age of the paper might be a more important determinant of citation counts, as panels C and D of Figure 1 suggest. Clearly, the age of a paper should matter here, because older papers have had much more opportunity to get cited. In particular, papers younger than 10 years seem not to have matured enough for their citation rates to catch up with older articles. When comparing the citation counts of purely theoretical models with calibrated and validated versions, this covariate should not be missed, because the latter two are typically much younger. In other words, a positive relationship between model calibration/validation and citation counts could be hidden in the bivariate analysis, as model calibration and validation are recent trends in ABM research.

I run a Poisson regression on the number of citations, explained by whether the model is validated, whether it is calibrated, and the interaction of the two. The age of the paper is taken into account, as well as the number of references the paper cites itself (controlling for reciprocity and literature embeddedness, one might say). Finally, the fields in which the papers were published, as registered by Web of Science, have been added to account for potential differences between fields that explain both citation counts and conventions about model calibration and validation.
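To make the model class concrete: a Poisson regression with log link can be fitted by iteratively reweighted least squares (Newton-Raphson). The sketch below uses only the Python standard library and illustrative data; the original analysis presumably used a statistical package, and the variable layout here is an assumption, not the actual code behind Table 1.

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def poisson_fit(X, y, iters=25):
    """Fit log E[y] = X @ beta by iteratively reweighted least squares."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        mu = [math.exp(sum(b * x for b, x in zip(beta, row))) for row in X]
        # score vector X^T (y - mu) and Fisher information X^T diag(mu) X
        score = [sum(X[i][j] * (y[i] - mu[i]) for i in range(len(y)))
                 for j in range(p)]
        info = [[sum(X[i][j] * mu[i] * X[i][k] for i in range(len(y)))
                 for k in range(p)] for j in range(p)]
        beta = [b + s for b, s in zip(beta, solve(info, score))]
    return beta
```

For the actual analysis, each row of X would hold an intercept, the validated and calibrated dummies, their interaction, age, cited references and field dummies.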

Table 1 presents the results from the three models: the main effects of validation and calibration (model 1), the interaction of validation and calibration (model 2) and the full model with control variables (model 3).

Table 1. Poisson regression on the number of citations

Dependent variable: number of citations

| | (1) | (2) | (3) |
| Validated | -0.217*** (0.012) | -0.298*** (0.014) | -0.094*** (0.014) |
| Calibrated | 0.171*** (0.014) | 0.064*** (0.016) | 0.076*** (0.016) |
| Validated × Calibrated | | 0.575*** (0.034) | 0.244*** (0.034) |
| Age | | | 0.154*** (0.0005) |
| Cited references | | | 0.013*** (0.0001) |
| Field included | No | No | Yes |
| Constant | 2.553*** (0.003) | 2.556*** (0.003) | 0.337** (0.164) |
| Observations | 11,807 | 11,807 | 11,807 |
| AIC | 451,560 | 451,291 | 301,639 |

Note: *p<0.1; **p<0.05; ***p<0.01. Standard errors in parentheses.

The results from the analyses clearly suggest a negative effect of model validation and a positive effect of model calibration on the likelihood of being cited. The hypothesis that was so “badly in need of refutation” (Chattoe-Brown, 2022) will remain unrefuted for now. The effect does turn positive, however, when the abstract makes mention of calibration as well. In both the controlled (model 3) and uncontrolled (model 2) analyses, combining the effects of validation and calibration yields a positive coefficient overall.[2]

The controls in model 3 substantially affect the estimates for the three main factors of interest, while remaining in the expected directions themselves. The age of a paper indeed helps its citation count, and so does the number of papers the item cites itself. The fields, furthermore, take away from the main effects somewhat too, but not to a problematic degree. In an additional analysis, I looked at the relationship between the fields and whether they are more likely to publish calibrated or validated models and found no substantial relationships. Citation counts do differ between fields, however. The papers in our sample are more often cited in, for example, hematology, emergency medicine and thermodynamics. The ABMs in the sample coming from toxicology, dermatology and religion are on the unlucky side of the equation, receiving fewer citations on average. Finally, I also looked at papers published in JASSS specifically, given the interest of Chattoe-Brown and the nature of this outlet. Surprisingly, the same analyses run on the subsample of these papers (N=376) showed a negative relationship between citation counts and model calibration/validation. Does the JASSS readership reveal its taste for artificial societies?

In sum, I find support for the hypothesis of Chattoe-Brown (2022) on the negative relationship between model validation and citation counts for papers presenting ABMs. If you want to be cited, you should not validate your ABM. Calibrated ABMs, on the other hand, are more likely to receive citations. What is more, ABMs that were both calibrated and validated are the most successful papers in the sample. All conclusions were drawn controlling for the age of the paper, the number of papers the paper cited itself, and (citation conventions in) the field in which it was published.

While the patterns explored in this and Chattoe-Brown’s recent contribution are interesting, or even puzzling, they should not distract from the goal of moving towards realistic agent-based simulations of social systems. In my opinion, models that combine rigorous theory with strong empirical foundations are instrumental to the creation of meaningful and purposeful agent-based models. Perhaps the results presented here should just be taken as another sign that citation counts are a weak signal of academic merit at best.

Data, code and supplementary analyses

All data and code used for this analysis, as well as the results from the supplementary analyses described in the text, are available here: https://osf.io/x9r7j/

Notes

[1] Note that the hyphen between “agent” and “based” does not affect the retrieved corpus. Both contributions that mention “agent based” and “agent-based” were retrieved.

[2] A small caveat to the analysis of the interaction effect is that the marginal improvement of model 2 upon model 1 is rather small (AIC difference of 269). This is likely (partially) due to the small number of papers that mention both calibration and validation (N=77).

Acknowledgements

Marijn Keijzer acknowledges IAST funding from the French National Research Agency (ANR) under the Investments for the Future (Investissements d’Avenir) program, grant ANR-17-EURE-0010.

References

Boero, R., & Squazzoni, F. (2005). Does empirical embeddedness matter? Methodological issues on agent-based models for analytical social science. Journal of Artificial Societies and Social Simulation, 8(4), 1–31. https://www.jasss.org/8/4/6.html

Chattoe-Brown, E. (2022) If You Want To Be Cited, Don’t Validate Your Agent-Based Model: A Tentative Hypothesis Badly In Need of Refutation. Review of Artificial Societies and Social Simulation, 1st Feb 2022. https://rofasss.org/2022/02/01/citing-od-models

Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4). https://doi.org/10.18564/jasss.3521


Keijzer, M. (2022) If you want to be cited, calibrate your agent-based model: Reply to Chattoe-Brown. Review of Artificial Societies and Social Simulation, 9th Mar 2022. https://rofasss.org/2022/03/09/Keijzer-reply-to-Chattoe-Brown



Does It Take Two (And A Creaky Search Engine) To Make An Outstation? Hunting Highly Cited Opinion Dynamics Articles in the Journal of Artificial Societies and Social Simulation (JASSS)

By Edmund Chattoe-Brown

In an important article, Squazzoni and Casnici (2013) raise the issue of how social simulation (as manifested in the Journal of Artificial Societies and Social Simulation – hereafter JASSS – the journal that has probably published the most of this kind of research for longest) cites and is cited in the wider scientific community. They discuss this in terms of social simulation being a potential “outstation” of social science (but better integrated into physical science and computing). This short note considers the same argument in reverse. As an important site of social simulation research, is it the case that JASSS is effectively representing research done more widely across the sciences?

The method used to investigate this was extremely simple (and could thus easily be extended and replicated). On 28.08.21, using the search term “opinion dynamics” in “all fields”, all sources from Web of Science (www.webofknowledge.com, hereafter WOS) that were flagged as “highly cited” were selected as a sample. For each article (only articles turned out to be highly cited), the title was searched in JASSS and the number of hits recorded. Common sense was applied in this search process to maximise the chances of success. So if a title had two sub-clauses, these were searched jointly as quotations (to avoid the “hits” being very sensitive to the reproduction of punctuation linking clauses). In addition, the title of the journal in which the article appeared was searched to give a wider sense of how well the relevant journal is known in JASSS.

However, now we come to the issue of the creaky search engine (as well as other limitations of quick and dirty searches). Obviously, searching for the exact title will not find variants of that title with spelling mistakes or attempts to standardise spelling (i.e. changing behavior to behaviour). Further, it turns out that the Google search engine (which JASSS uses) does not deliver the consistency that is often assumed of it (http://jdebp.uk/FGA/google-result-counts-are-a-meaningless-metric.html). For example, when I searched for “SIAM Review” I mostly got 77 hits, rather often 37 hits and very rarely 0 or 1 hits. (PDFs are available for three of these outcomes from the author but the fourth could not be reproduced to be recorded in the time available.) This result occurred even when one search took place seconds after another, so it is not, for example, a result of substantive changes to the content of JASSS. To deal with this problem I tried to confirm the presence of a particular article by searching jointly for all its co-authors. Mostly this approach gave a similar result (where it did not, this is noted in the table below). In addition, wherever there were a relatively large number of hits for a specific search, some of these were usually not the ones intended. (For example, none of the hits on the term “global challenges” actually turned out to be for the journal Global Challenges.) Finally, JASSS often gives an oddly inconsistent number of hits for a specific article: it may appear as PDF and HTML as well as in multiple indices, or may occur just once. (This discouraged attempts to go from hits to the specific number of unique articles citing these WOS sources. As it turns out, this additional detail would have added little to the headline result.)
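The inconsistency described above is easy to check empirically by repeating an identical query and tallying the hit counts it reports. The sketch below is hypothetical Python: `count_hits` merely simulates the behaviour observed for the “SIAM Review” query (a real version would fetch and parse the JASSS search results page), and the weights are illustrative assumptions, not measured frequencies.

```python
import random
from collections import Counter

def count_hits(query):
    # Hypothetical stand-in for the Google-powered JASSS site search.
    # Simulates the inconsistency observed for "SIAM Review":
    # mostly 77 hits, rather often 37, very rarely 0 or 1.
    return random.choices([77, 37, 0, 1], weights=[70, 25, 4, 1])[0]

def hit_count_distribution(query, repeats=100):
    # Repeat the identical query and tally the reported hit counts.
    # A consistent search engine would yield a single value every time.
    return Counter(count_hits(query) for _ in range(repeats))
```

Any query whose distribution contains more than one distinct count is being answered inconsistently, which is why the raw hit numbers reported here should be read as indicative rather than exact.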

The term “opinion dynamics” was chosen somewhat arbitrarily (for reasons connected with other research) and it is not claimed that this term is even close to a definitive way of capturing any models connected with opinion/attitude change. Nonetheless, it is clear that the number of hits and the type of articles reported on WOS (which is curated and quality controlled) are sufficient (and sufficiently relevant) for this to be a serviceable search term to identify a solid field of research in JASSS (and elsewhere). I shall return to this issue.

The results, shown in the table below, are striking on several counts. (All these sources are fully cited in the references at the end of this article.) Most noticeably, JASSS is barely citing a significant number of articles that are very widely cited elsewhere. Because these are highly cited in WOS this cannot be because they are too new or too inaccessible. The second point is the huge discrepancy in citation for the one article on the WOS list that appears in JASSS itself (Flache et al. 2017). Thirdly, although some of these articles appear in journals that JASSS otherwise does not cite (like Global Challenges and Dynamic Games and Applications) others appear in journals that are known to JASSS and generally cited (like SIAM Review).

Reference | WOS Citations | Article Title Hits in JASSS | Journal Title Hits in JASSS
Acemoğlu and Ozdaglar (2011) | 301 | 0 (1 based on joint authors) | 2
Motsch and Tadmor (2014) | 214 | 0 | 77
Van Der Linden et al. (2017) | 191 | 0 | 6 (but none for the journal)
Acemoğlu et al. (2013) | 186 | 1 | 2 (but 1 article)
Proskurnikov et al. (2016) | 165 | 0 | 9
Dong et al. (2017) | 147 | 0 | 48 (but rather few for the journal)
Jia et al. (2015) | 118 | 0 | 77
Dong et al. (2018) | 117 | 0 (1 based on joint authors) | 48 (but rather few for the journal)
Flache et al. (2017) | 86 | 58 (17 based on joint authors) | N/A
Ureña et al. (2019) | 72 | 0 | 6
Bu et al. (2020) | 56 | 0 | 5
Zhang et al. (2020) | 55 | 0 | 33 (but only some of these are for the journal)
Xiong et al. (2020) | 28 | 0 | 1
Carrillo et al. (2020) | 13 | 0 | 0

One possible interpretation of this result is simply that none of the most highly cited articles in WOS featuring the term “opinion dynamics” happen to be more than incidentally relevant to the scientific interests of JASSS. On consideration, however, this seems a rather improbable coincidence. Firstly, these articles were chosen exactly because they are highly cited so we would have to explain how they could be perceived as so useful generally but specifically not in JASSS. Secondly, the same term (“opinion dynamics”) consistently generates 254 hits in JASSS, suggesting that the problem isn’t a lack of overlap in terminology or research interests.

This situation, however, creates a problem for more conclusive explanation. The state of affairs here is not that these articles are being cited and then rejected on scientific grounds given the interests of JASSS (thus providing arguments I could examine). It is that they are barely being cited at all. Unfortunately, it is almost impossible to establish why something is not happening. Perhaps JASSS authors are not aware of these articles to begin with. Perhaps they are aware but do not see the wider scientific value of critiquing them, or of arguing for their irrelevance, in print.

But, given that the problem is non-citation, my concern can be made more persuasive (perhaps as persuasive as it can be given the problems of convincingly explaining an absence) by investigating the articles themselves. (My thanks are due to Bruce Edmonds for encouraging me to strengthen the argument in this way.) There are definitely some recurring patterns in this sample. Firstly, a significant proportion of the articles are highly mathematical and, therefore (as Agent-Based Modelling often criticises), rely on extreme simplifying assumptions and toy examples. Even here, however, it is not self-evident that such articles should not be cited in JASSS merely because they are mathematical. JASSS has itself published relatively mathematical articles and, if an article contains a mathematical model that could be “agentised” (thus relaxing its extreme assumptions), which is no less empirical than similar models in JASSS (or has particularly interesting behaviours), then it is hard to see why this should not be discussed by at least a few JASSS authors. A clear example of this is provided by Acemoğlu et al. (2013), which argues that existing opinion dynamics models fail to produce the ongoing fluctuations of opinion observed in real data (see, for example, Figures 1-3 in Chattoe-Brown 2014, which also raises concerns about the face validity of popular social simulations of opinion dynamics). In fact, the assumptions of this model could easily be questioned (and real data involves turning points and not just fluctuations) but the point is that JASSS articles are not citing it and rejecting it based on argument but simply not citing it. A model capable of generating ongoing opinion fluctuations (however imperfect) is simply too important to the current state of opinion dynamics research in social simulation not to be considered at all.
Another (though less conclusive) example is Motsch and Tadmor (2014), which presents a model suggesting (counter-intuitively) that interaction based on heterophily can better achieve consensus than interaction based on homophily. Of course one can reject such an assumption on empirical grounds, but JASSS is not currently doing that (and in fact the term heterophily is unknown in the journal except for the title of a cited article).

Secondly, there are also a number of articles which, while not providing important results, seem no less plausible or novel than typical OD articles that are published in JASSS. For example, Jia et al. (2015) add self-appraisal and social power to a standard OD model. Between debates, agents amend the efficacy they believe that they and others have in terms of swaying the outcome and take that into account going forward. Proskurnikov et al. (2016) present the results of a model in which agents can have negative ties with each other (as well as the more usual positive ones) and thus consider the coevolution of positive/negative sentiments and influence (describing what they call hostile camps, i.e. groups with positive ties to each other and negative ties to other groups). This is distinct from the common repulsive effect in OD models, where agents do not like the opinions of others (rather than disliking the others themselves).

Finally, both Dong et al. (2017) and Zhang et al. (2020) reach for the idea (through modelling) that experts and leaders in OD models may not just be randomly scattered through the population as types but may exist because of formal organisations or accidents of social structure: This particular agent is either deliberately appointed to have more influence or happens to have it because of their network position.

On a completely different tack, two articles (Dong et al. 2018 and Acemoglu and Ozdaglar 2011) are literature reviews or syntheses on relevant topics and it is hard to see how such broad ranging articles could have so little value to OD research in JASSS.

It will be admitted that some of the articles in the sample are hard to evaluate with certainty. Mathematical approaches often seem to be more interested in generating mathematics than in justifying its likely value. This is particularly problematic when combined with a suggestion that the product of the research may be instrumental algorithms (designed to get things done) rather than descriptive ones (designed to understand social behaviour). An example of this is several articles which talk about achieving consensus without really explaining whether this is a technical goal (for example in a neural network) or a social phenomenon and, if the latter, whether this places constraints on what is legitimate: you can reach consensus by debate but not by shooting dissenters!

But as well as specific ideas in specific models, this sample of articles also suggests a different emphasis from that currently found within JASSS OD research. For example, there is much more interest in deliberately achieving consensus (and the corresponding hazards of manipulation or misinformation impeding that). Reading these articles collectively gives a sense that JASSS OD models are very much liberal democratic: agents honestly express their views (or at most are somewhat reticent to protect themselves). They decently expect the will of the people to prevail. They do not lie strategically to sway the influential, spread rumours to discredit the opinions of opponents or flood the debate with bots. Again, this darker vision is no more right a priori than the liberal democratic one, but JASSS should at least be engaging with articles modelling (or providing data on – see Van Der Linden et al. 2017) such phenomena in an OD context. (Although misinformation is mentioned in some OD articles in JASSS it does not seem to be modelled. There also seems to be another surprising glitch in the search engine, which considers the term “fake news” to be a hit for misinformation!) This also puts a new slant on an ongoing challenge in OD research, identifying a plausible relationship between fact and opinion. Is misinformation a different field of research (on the grounds that opinions can never be factually wrong) or is it possible for the misinformed to develop mis-opinions (those that they would change if what they knew changed)? Is it really the case that Brexiteers, for example, are completely indifferent to the economic consequences which will reveal themselves, or did they simply have mistaken beliefs about how high those costs might turn out to be, which will cause them to regret their decision at some later stage?

Thus, to sum up, while some of the articles in the sample can be dismissed as either irrelevant to JASSS or having a potential relevance that is hard to establish, the majority cannot reasonably be regarded in this way (and a few are clearly important to the existing state of OD research). While we cannot explain why these articles are not in fact cited, we can thus call into question one possible (Panglossian) explanation for the observed pattern (that they are not cited because they have nothing to contribute).

Apart from the striking nature of the result and its obvious implication (if social simulators want to be cited more widely they need to make sure they are also citing the work of others appropriately) this study has two wider (related) implications for practice.

Firstly, systematic literature reviewing (see, for example, Hansen et al. 2019 – not published in JASSS) needs to be better enforced in social simulation: “systematic literature review” gets just 7 hits in JASSS. It is not enough to cite just what you happen to have read or models that resemble your own; you need to be citing what the community might otherwise not be aware of or what challenges your own model assumptions. (Although, in my judgement, key assumptions of Acemoğlu et al. 2013 are implausible, I don’t think that I could justify non-subjectively that they are any more implausible than those of the Zaller-Deffuant model – Malarz et al. 2011 – given the huge awareness discrepancy which the two models manifest in social simulation.)

Secondly, we need to rethink the nature of literature reviewing as part of progressive research. I have used “opinion dynamics” here not because it is the perfect term to identify all models of opinion and attitude change but because it throws up enough hits to show that this term is widely used in social simulation. Because I have clearly stated my search term, others can critique it and extend my analysis using other relevant terms like “opinion change” or “consensus formation”. A literature review that is just a bunch of arbitrary stuff cannot be critiqued or improved systematically (rather than nit-picked for specific omissions – as reviewers often do – and even then the critique can’t tell what should have been included if there are no clearly stated search criteria). It should not be possible for JASSS (and the social simulation community it represents) simply to disregard an article as potentially important in its implications for OD as Acemoğlu et al. (2013). Even if this article turned out to be completely wrong-headed, we need to have enough awareness of it to be able to say why before setting it aside. (Interestingly, the one citation it does receive in JASSS can be summarised as “there are some other models broadly like this” with no detailed discussion at all – and thus no clear statement of how the model presented in the citing article adds to previous models – but uninformative citation is a separate problem.)
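The replicability point can be made concrete with a small sketch. Everything here is hypothetical: `search_hits` is a stub standing in for a journal's full-text search (here it just scans a toy corpus), and the term list is the one stated above. The protocol is simply to declare the terms, record a hit count per term, and publish both, so that critics can extend the term list rather than guess at what was searched.

```python
def search_hits(term):
    # Hypothetical wrapper around a journal's full-text search; a real
    # version would query the site and parse the reported hit count.
    # Here a toy corpus of article titles stands in for the journal.
    corpus = [
        "a bounded confidence model of opinion dynamics",
        "opinion change in deliberative mini-publics",
        "consensus formation on adaptive networks",
    ]
    return sum(term in doc for doc in corpus)

# Declaring the search terms up front makes the review protocol explicit,
# so others can critique or extend the term list itself.
SEARCH_TERMS = ["opinion dynamics", "opinion change", "consensus formation"]

def review_protocol(terms):
    # Map each declared term to its hit count: the minimal record needed
    # for a replicable (rather than ad hoc) literature search.
    return {term: search_hits(term) for term in terms}
```

Because the term list is explicit, a critic can re-run the same protocol with additional terms (say “attitude change”) and directly compare coverage, instead of nit-picking specific omissions.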

Acknowledgements

This article is part of “Towards Realistic Computational Models of Social Influence Dynamics”, a project funded through ESRC (ES/S015159/1) by ORA Round 5.

References

Acemoğlu, Daron and Ozdaglar, Asuman (2011) ‘Opinion Dynamics and Learning in Social Networks’, Dynamic Games and Applications, 1(1), March, pp. 3-49. doi:10.1007/s13235-010-0004-1

Acemoğlu, Daron, Como, Giacomo, Fagnani, Fabio and Ozdaglar, Asuman (2013) ‘Opinion Fluctuations and Disagreement in Social Networks’, Mathematics of Operations Research, 38(1), February, pp. 1-27. doi:10.1287/moor.1120.0570

Bu, Zhan, Li, Hui-Jia, Zhang, Chengcui, Cao, Jie, Li, Aihua and Shi, Yong (2020) ‘Graph K-Means Based on Leader Identification, Dynamic Game, and Opinion Dynamics’, IEEE Transactions on Knowledge and Data Engineering, 32(7), July, pp. 1348-1361. doi:10.1109/TKDE.2019.2903712

Carrillo, J. A., Gvalani, R. S., Pavliotis, G. A. and Schlichting, A. (2020) ‘Long-Time Behaviour and Phase Transitions for the Mckean–Vlasov Equation on the Torus’, Archive for Rational Mechanics and Analysis, 235(1), January, pp. 635-690. doi:10.1007/s00205-019-01430-4

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1), February, article 16, <http://www.socresonline.org.uk/19/1/16.html>. doi:10.5153/sro.3315

Dong, Yucheng, Ding, Zhaogang, Martínez, Luis and Herrera, Francisco (2017) ‘Managing Consensus Based on Leadership in Opinion Dynamics’, Information Sciences, 397-398, August, pp. 187-205. doi:10.1016/j.ins.2017.02.052

Dong, Yucheng, Zhan, Min, Kou, Gang, Ding, Zhaogang and Liang, Haiming (2018) ‘A Survey on the Fusion Process in Opinion Dynamics’, Information Fusion, 43, September, pp. 57-65. doi:10.1016/j.inffus.2017.11.009

Flache, Andreas, Mäs, Michael, Feliciani, Thomas, Chattoe-Brown, Edmund, Deffuant, Guillaume, Huet, Sylvie and Lorenz, Jan (2017) ‘Models of Social Influence: Towards the Next Frontiers’, Journal of Artificial Societies and Social Simulation, 20(4), October, article 2, <http://jasss.soc.surrey.ac.uk/20/4/2.html>. doi:10.18564/jasss.3521

Hansen, Paula, Liu, Xin and Morrison, Gregory M. (2019) ‘Agent-Based Modelling and Socio-Technical Energy Transitions: A Systematic Literature Review’, Energy Research and Social Science, 49, March, pp. 41-52. doi:10.1016/j.erss.2018.10.021

Jia, Peng, MirTabatabaei, Anahita, Friedkin, Noah E. and Bullo, Francesco (2015) ‘Opinion Dynamics and the Evolution of Social Power in Influence Networks’, SIAM Review, 57(3), pp. 367-397. doi:10.1137/130913250

Malarz, Krzysztof, Gronek, Piotr and Kulakowski, Krzysztof (2011) ‘Zaller-Deffuant Model of Mass Opinion’, Journal of Artificial Societies and Social Simulation, 14(1), 2, <https://www.jasss.org/14/1/2.html>. doi:10.18564/jasss.1719

Motsch, Sebastien and Tadmor, Eitan (2014) ‘Heterophilious Dynamics Enhances Consensus’, SIAM Review, 56(4), pp. 577-621. doi:10.1137/120901866

Proskurnikov, Anton V., Matveev, Alexey S. and Cao, Ming (2016) ‘Opinion Dynamics in Social Networks With Hostile Camps: Consensus vs. Polarization’, IEEE Transactions on Automatic Control, 61(6), June, pp. 1524-1536. doi:10.1109/TAC.2015.2471655

Squazzoni, Flaminio and Casnici, Niccolò (2013) ‘Is Social Simulation a Social Science Outstation? A Bibliometric Analysis of the Impact of JASSS’, Journal of Artificial Societies and Social Simulation, 16(1), 10, <http://jasss.soc.surrey.ac.uk/16/1/10.html>. doi:10.18564/jasss.2192

Ureña, Raquel, Chiclana, Francisco, Melançon, Guy and Herrera-Viedma, Enrique (2019) ‘A Social Network Based Approach for Consensus Achievement in Multiperson Decision Making’, Information Fusion, 47, May, pp. 72-87. doi:10.1016/j.inffus.2018.07.006

Van Der Linden, Sander, Leiserowitz, Anthony, Rosenthal, Seth and Maibach, Edward (2017) ‘Inoculating the Public against Misinformation about Climate Change’, Global Challenges, 1(2), 27 February, article 1600008. doi:10.1002/gch2.201600008

Xiong, Fei, Wang, Ximeng, Pan, Shirui, Yang, Hong, Wang, Haishuai and Zhang, Chengqi (2020) ‘Social Recommendation With Evolutionary Opinion Dynamics’, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50(10), October, pp. 3804-3816. doi:10.1109/TSMC.2018.2854000

Zhang, Zhen, Gao, Yuan and Li, Zhuolin (2020) ‘Consensus Reaching for Social Network Group Decision Making by Considering Leadership and Bounded Confidence’, Knowledge-Based Systems, 204, 27 September, article 106240. doi:10.1016/j.knosys.2020.106240


Chattoe-Brown, E. (2021) Does It Take Two (And A Creaky Search Engine) To Make An Outstation? Hunting Highly Cited Opinion Dynamics Articles in the Journal of Artificial Societies and Social Simulation (JASSS). Review of Artificial Societies and Social Simulation, 19th August 2021. https://rofasss.org/2021/08/19/outstation/