
Does It Take Two (And A Creaky Search Engine) To Make An Outstation? Hunting Highly Cited Opinion Dynamics Articles in the Journal of Artificial Societies and Social Simulation (JASSS)

By Edmund Chattoe-Brown

In an important article, Squazzoni and Casnici (2013) raise the issue of how social simulation (as manifested in the Journal of Artificial Societies and Social Simulation – hereafter JASSS – the journal that has probably published the most of this kind of research for the longest time) cites and is cited in the wider scientific community. They discuss this in terms of social simulation being a potential “outstation” of social science (but better integrated into physical science and computing). This short note considers the same argument in reverse. As an important site of social simulation research, is it the case that JASSS is effectively representing research done more widely across the sciences?

The method used to investigate this was extremely simple (and could thus easily be extended and replicated). On 28.08.21, using the search term “opinion dynamics” in “all fields”, all sources from Web of Science (www.webofknowledge.com, hereafter WOS) that were flagged as “highly cited” were selected as a sample. For each article (only articles turned out to be highly cited), the title was searched in JASSS and the number of hits recorded. Common sense was applied in this search process to maximise the chances of success. So if a title had two sub-clauses, these were searched jointly as quotations (to avoid the “hits” being very sensitive to the reproduction of punctuation linking clauses). In addition, the title of the journal in which the article appeared was searched to give a wider sense of how well the relevant journal is known in JASSS.

However, now we come to the issue of the creaky search engine (as well as other limitations of quick and dirty searches). Obviously searching for the exact title will not find variants of that title with spelling mistakes or attempts to standardise spelling (i.e. changing behavior to behaviour). Further, it turns out that the Google search engine (which JASSS uses) does not promise the consistency that often seems to be assumed for it (http://jdebp.uk/FGA/google-result-counts-are-a-meaningless-metric.html). For example, when I searched for “SIAM Review” I mostly got 77 hits, rather often 37 hits and very rarely 0 or 1 hits. (PDFs are available for three of these outcomes from the author but the fourth could not be reproduced to be recorded in the time available.) These discrepancies occurred even when one search took place seconds after another, so they are not, for example, the result of substantive changes to the content of JASSS. To deal with this problem I tried to confirm the presence of a particular article by searching jointly for all its co-authors. Mostly this approach gave a similar result (but where it does not it is noted in the table below). In addition, wherever there was a relatively large number of hits for a specific search, some of these were usually not the ones intended. (For example, none of the hits on the term “global challenges” actually turned out to be for the journal Global Challenges.) Finally, JASSS often gives an oddly inconsistent number of hits for a specific article: it may appear as PDF and HTML as well as in multiple indices, or may occur just once. (This discouraged attempts to go from hits to the specific number of unique articles citing these WOS sources. As it turns out, this additional detail would have added little to the headline result.)
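For anyone wanting to replicate or extend the search, the protocol (including the defensive repetition that the unstable hit counts force on you) can be summarised in a few lines of Python. This is a minimal sketch only: jasss_hits is a hypothetical stand-in that fakes the instability reported above, since the Google-backed JASSS search offers no stable public API that could be called directly.

import random
import statistics

def jasss_hits(query):
    """Hypothetical stand-in for the Google-backed JASSS site search.
    A real replication would submit the query and parse the reported
    hit count; here we just fake the observed instability."""
    if query == '"SIAM Review"':
        return random.choices([77, 37, 1, 0], weights=[75, 20, 3, 2])[0]
    return 0

def stable_count(query, repeats=10):
    """Repeat the same search and report the modal hit count along
    with its range, since single counts proved unreliable."""
    counts = [jasss_hits(query) for _ in range(repeats)]
    return statistics.mode(counts), min(counts), max(counts)

for term in ['"SIAM Review"',
             '"Opinion Dynamics and Learning in Social Networks"']:
    mode, lo, hi = stable_count(term)
    print(f"{term}: modal hits {mode} (range {lo}-{hi})")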

The term “opinion dynamics” was chosen somewhat arbitrarily (for reasons connected with other research) and it is not claimed that this term is even close to a definitive way of capturing any models connected with opinion/attitude change. Nonetheless, it is clear that the number of hits and the type of articles reported on WOS (which is curated and quality controlled) are sufficient (and sufficiently relevant) for this to be a serviceable search term to identify a solid field of research in JASSS (and elsewhere). I shall return to this issue.

The results, shown in the table below, are striking on several counts. (All these sources are fully cited in the references at the end of this article.) Most noticeably, JASSS is barely citing a significant number of articles that are very widely cited elsewhere. Because these are highly cited in WOS this cannot be because they are too new or too inaccessible. The second point is the huge discrepancy in citation for the one article on the WOS list that appears in JASSS itself (Flache et al. 2017). Thirdly, although some of these articles appear in journals that JASSS otherwise does not cite (like Global Challenges and Dynamic Games and Applications), others appear in journals that are known to JASSS and generally cited (like SIAM Review).

Reference | WOS Citations | Article Title Hits in JASSS | Journal Title Hits in JASSS
Acemoglu and Ozdaglar (2011) | 301 | 0 (1 based on joint authors) | 2
Motsch and Tadmor (2014) | 214 | 0 | 77
Van Der Linden et al. (2017) | 191 | 0 | 6 (but none for the journal)
Acemoğlu et al. (2013) | 186 | 1 | 2 (but 1 article)
Proskurnikov et al. (2016) | 165 | 0 | 9
Dong et al. (2017) | 147 | 0 | 48 (but rather few for the journal)
Jia et al. (2015) | 118 | 0 | 77
Dong et al. (2018) | 117 | 0 (1 based on joint authors) | 48 (but rather few for the journal)
Flache et al. (2017) | 86 | 58 (17 based on joint authors) | N/A
Urena et al. (2019) | 72 | 0 | 6
Bu et al. (2020) | 56 | 0 | 5
Zhang et al. (2020) | 55 | 0 | 33 (but only some of these are for the journal)
Xiong et al. (2020) | 28 | 0 | 1
Carrillo et al. (2020) | 13 | 0 | 0

One possible interpretation of this result is simply that none of the most highly cited articles in WOS featuring the term “opinion dynamics” happen to be more than incidentally relevant to the scientific interests of JASSS. On consideration, however, this seems a rather improbable coincidence. Firstly, these articles were chosen exactly because they are highly cited so we would have to explain how they could be perceived as so useful generally but specifically not in JASSS. Secondly, the same term (“opinion dynamics”) consistently generates 254 hits in JASSS, suggesting that the problem isn’t a lack of overlap in terminology or research interests.

This situation, however, creates a problem for more conclusive explanation. The state of affairs here is not that these articles are being cited and then rejected on scientific grounds given the interests of JASSS (thus providing arguments I could examine). It is that they are barely being cited at all. Unfortunately, it is almost impossible to establish why something is not happening. Perhaps JASSS authors are not aware of these articles to begin with. Perhaps they are aware but do not see the wider scientific value of critiquing them, or of explaining their irrelevance in print.

But, given that the problem is non-citation, my concern can be made more persuasive (perhaps as persuasive as it can be given the problems of convincingly explaining an absence) by investigating the articles themselves. (My thanks are due to Bruce Edmonds for encouraging me to strengthen the argument in this way.) There are definitely some recurring patterns in this sample. Firstly, a significant proportion of the articles are highly mathematical and therefore (as Agent-Based Modelling often criticises) rely on extreme simplifying assumptions and toy examples. Even here, however, it is not self-evident that such articles should not be cited in JASSS merely because they are mathematical. JASSS has itself published relatively mathematical articles and, if an article contains a mathematical model that could be “agentised” (thus relaxing its extreme assumptions) which is no less empirical than similar models in JASSS (or has particularly interesting behaviours), then it is hard to see why this should not be discussed by at least a few JASSS authors. A clear example of this is provided by Acemoğlu et al. (2013), which argues that existing opinion dynamics models fail to produce the ongoing fluctuations of opinion observed in real data (see, for example, Figures 1-3 in Chattoe-Brown 2014, which also raises concerns about the face validity of popular social simulations of opinion dynamics). In fact, the assumptions of this model could easily be questioned (and real data involves turning points and not just fluctuations) but the point is that JASSS articles are not citing it and rejecting it based on argument but simply not citing it. A model capable of generating ongoing opinion fluctuations (however imperfect) is simply too important to the current state of opinion dynamics research in social simulation not to be considered at all. Another (though less conclusive) example is Motsch and Tadmor (2014), which presents a model suggesting (counter-intuitively) that interaction based on heterophily can better achieve consensus than interaction based on homophily. Of course one can reject such an assumption on empirical grounds but JASSS is not currently doing that (and in fact the term heterophily is unknown in the journal except in the title of a cited article).
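To make the heterophily point concrete, here is a toy Python/numpy sketch of the kind of influence weighting at stake. It is an illustration of the mechanism, not Motsch and Tadmor’s actual specification: agents repeatedly average over others with weights given by an influence function of opinion distance, and the only difference between the two runs is whether that function falls (homophily) or rises (heterophily) with distance within its support. Whether and how fast consensus emerges depends on the (arbitrary) parameters chosen.

import numpy as np

def step(x, phi, sigma=0.5):
    """One round of weighted averaging: each agent moves towards the
    phi-weighted mean of all opinions, weights row-normalised."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise opinion distances
    w = phi(d) + 1e-12                    # tiny floor avoids zero rows
    w /= w.sum(axis=1, keepdims=True)
    return x + sigma * (w @ x - x)

def homophily(d):                         # influence falls with distance
    return np.where(d < 0.2, 1.0 - d, 0.0)

def heterophily(d):                       # influence rises with distance
    return np.where(d < 0.2, 0.05 + d, 0.0)

rng = np.random.default_rng(1)
x0 = rng.uniform(0, 1, 100)

for name, phi in [("homophily", homophily), ("heterophily", heterophily)]:
    x = x0.copy()
    for _ in range(500):
        x = step(x, phi)
    print(f"{name}: opinion spread after 500 steps = {x.max() - x.min():.3f}")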

Secondly, there are also a number of articles which, while not providing important results, seem no less plausible or novel than typical OD articles that are published in JASSS. For example, Jia et al. (2015) add self-appraisal and social power to a standard OD model. Between debates, agents amend the efficacy they believe that they and others have in terms of swaying the outcome and take that into account going forward. Proskurnikov et al. (2016) present the results of a model in which agents can have negative ties with each other (as well as the more usual positive ones) and thus consider the coevolution of positive/negative sentiments and influence (describing what they call hostile camps, i.e. groups with positive ties to each other and negative ties to other groups). This is distinct from the common repulsive effect in OD models, where agents do not like the opinions of others (rather than disliking the others themselves).
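Again for concreteness, the hostile-camps mechanism can be illustrated with an Altafini-style signed averaging sketch (an illustration of the general idea, not Proskurnikov et al.’s model): with two internally positive, mutually negative camps, repeated averaging drives each camp towards a consensus of opposite sign.

import numpy as np

rng = np.random.default_rng(0)
camp = np.array([1] * 5 + [-1] * 5)          # two camps of five agents
signs = np.outer(camp, camp)                 # +1 within, -1 across camps
W = signs * rng.uniform(0.5, 1.0, (10, 10))  # signed influence weights
W /= np.abs(W).sum(axis=1, keepdims=True)    # rows sum to 1 in modulus

x = rng.uniform(-1, 1, 10)                   # initial opinions
for _ in range(100):
    x = W @ x                                # signed DeGroot-style averaging

print(np.round(x, 3))  # two opposed consensus values: "hostile camps"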

Finally, both Dong et al. (2017) and Zhang et al. (2020) reach for the idea (through modelling) that experts and leaders in OD models may not just be randomly scattered through the population as types but may exist because of formal organisations or accidents of social structure: This particular agent is either deliberately appointed to have more influence or happens to have it because of their network position.

On a completely different tack, two articles (Dong et al. 2018 and Acemoglu and Ozdaglar 2011) are literature reviews or syntheses of relevant topics, and it is hard to see how such broad-ranging articles could have so little value to OD research in JASSS.

It will be admitted that some of the articles in the sample are hard to evaluate with certainty. Mathematical approaches often seem to be more interested in generating mathematics than in justifying its likely value. This is particularly problematic when combined with a suggestion that the product of the research may be instrumental algorithms (designed to get things done) rather than descriptive ones (designed to understand social behaviour). An example of this is several articles which talk about achieving consensus without really explaining whether this is a technical goal (for example in a neural network) or a social phenomenon and, if the latter, whether this places constraints on what is legitimate: you can reach consensus by debate but not by shooting dissenters!

But as well as specific ideas in specific models, this sample of articles also suggests a different emphasis from that currently found within JASSS OD research. For example, there is much more interest in deliberately achieving consensus (and the corresponding hazards of manipulation or misinformation impeding that). Reading these articles collectively gives a sense that JASSS OD models are very much liberal democratic: agents honestly express their views (or at most are somewhat reticent to protect themselves). They decently expect the will of the people to prevail. They do not lie strategically to sway the influential, spread rumours to discredit the opinions of opponents or flood the debate with bots. Again, this darker vision is no more right a priori than the liberal democratic one but JASSS should at least be engaging with articles modelling (or providing data on – see Van Der Linden et al. 2017) such phenomena in an OD context. (Although misinformation is mentioned in some OD articles in JASSS, it does not seem to be modelled. There also seems to be another surprising glitch in the search engine, which considers the term “fake news” to be a hit for misinformation!) This also puts a new slant on an ongoing challenge in OD research, identifying a plausible relationship between fact and opinion. Is misinformation a different field of research (on the grounds that opinions can never be factually wrong) or is it possible for the misinformed to develop mis-opinions? (Those that they would change if what they knew changed.) Is it really the case that Brexiteers, for example, are completely indifferent to the economic consequences which will reveal themselves, or did they simply have mistaken beliefs about how high those costs might turn out to be which will cause them to regret their decision at some later stage?

Thus to sum up, while some of the articles in the sample can be dismissed as either irrelevant to JASSS or having a potential relevance that is hard to establish, the majority cannot reasonably be regarded in this way (and a few are clearly important to the existing state of OD research). While we cannot explain why these articles are not in fact cited, we can thus call into question one possible (Panglossian) explanation for the observed pattern (that they are not cited because they have nothing to contribute).

Apart from the striking nature of the result and its obvious implication (if social simulators want to be cited more widely, they need to make sure they are also citing the work of others appropriately), this study has two wider (related) implications for practice.

Firstly, systematic literature reviewing (see, for example, Hansen et al. 2019 – not published in JASSS) needs to be better enforced in social simulation: “systematic literature review” gets just 7 hits in JASSS. It is not enough to cite just what you happen to have read or models that resemble your own; you need to be citing what the community might otherwise not be aware of or what challenges your own model assumptions. (Although, in my judgement, key assumptions of Acemoğlu et al. 2013 are implausible, I don’t think that I could justify non-subjectively that they are any more implausible than those of the Zaller-Deffuant model – Malarz et al. 2011 – given the huge awareness discrepancy which the two models manifest in social simulation.)

Secondly, we need to rethink the nature of literature reviewing as part of progressive research. I have used “opinion dynamics” here not because it is the perfect term to identify all models of opinion and attitude change but because it throws up enough hits to show that this term is widely used in social simulation. Because I have clearly stated my search term, others can critique it and extend my analysis using other relevant terms like “opinion change” or “consensus formation”. A literature review that is just a bunch of arbitrary stuff cannot be critiqued or improved systematically (rather than nit-picked for specific omissions – as reviewers often do – and even then the critique cannot tell what should have been included if there are no clearly stated search criteria). It should not be possible for JASSS (and the social simulation community it represents) simply to disregard articles whose implications for OD are as potentially important as those of Acemoğlu et al. (2013). Even if this article turned out to be completely wrong-headed, we need to have enough awareness of it to be able to say why before setting it aside. (Interestingly, the one citation it does receive in JASSS can be summarised as “there are some other models broadly like this” with no detailed discussion at all – and thus no clear statement of how the model presented in the citing article adds to previous models – but uninformative citation is a separate problem.)

References

Acemoğlu, Daron and Ozdaglar, Asuman (2011) ‘Opinion Dynamics and Learning in Social Networks’, Dynamic Games and Applications, 1(1), March, pp. 3-49. doi:10.1007/s13235-010-0004-1

Acemoğlu, Daron, Como, Giacomo, Fagnani, Fabio and Ozdaglar, Asuman (2013) ‘Opinion Fluctuations and Disagreement in Social Networks’, Mathematics of Operations Research, 38(1), February, pp. 1-27. doi:10.1287/moor.1120.0570

Bu, Zhan, Li, Hui-Jia, Zhang, Chengcui, Cao, Jie, Li, Aihua and Shi, Yong (2020) ‘Graph K-Means Based on Leader Identification, Dynamic Game, and Opinion Dynamics’, IEEE Transactions on Knowledge and Data Engineering, 32(7), July, pp. 1348-1361. doi:10.1109/TKDE.2019.2903712

Carrillo, J. A., Gvalani, R. S., Pavliotis, G. A. and Schlichting, A. (2020) ‘Long-Time Behaviour and Phase Transitions for the Mckean–Vlasov Equation on the Torus’, Archive for Rational Mechanics and Analysis, 235(1), January, pp. 635-690. doi:10.1007/s00205-019-01430-4

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1), February, article 16, <http://www.socresonline.org.uk/19/1/16.html>. doi:10.5153/sro.3315

Dong, Yucheng, Ding, Zhaogang, Martínez, Luis and Herrera, Francisco (2017) ‘Managing Consensus Based on Leadership in Opinion Dynamics’, Information Sciences, 397-398, August, pp. 187-205. doi:10.1016/j.ins.2017.02.052

Dong, Yucheng, Zhan, Min, Kou, Gang, Ding, Zhaogang and Liang, Haiming (2018) ‘A Survey on the Fusion Process in Opinion Dynamics’, Information Fusion, 43, September, pp. 57-65. doi:10.1016/j.inffus.2017.11.009

Flache, Andreas, Mäs, Michael, Feliciani, Thomas, Chattoe-Brown, Edmund, Deffuant, Guillaume, Huet, Sylvie and Lorenz, Jan (2017) ‘Models of Social Influence: Towards the Next Frontiers’, Journal of Artificial Societies and Social Simulation, 20(4), October, article 2, <http://jasss.soc.surrey.ac.uk/20/4/2.html>. doi:10.18564/jasss.3521

Hansen, Paula, Liu, Xin and Morrison, Gregory M. (2019) ‘Agent-Based Modelling and Socio-Technical Energy Transitions: A Systematic Literature Review’, Energy Research and Social Science, 49, March, pp. 41-52. doi:10.1016/j.erss.2018.10.021

Jia, Peng, MirTabatabaei, Anahita, Friedkin, Noah E. and Bullo, Francesco (2015) ‘Opinion Dynamics and the Evolution of Social Power in Influence Networks’, SIAM Review, 57(3), pp. 367-397. doi:10.1137/130913250

Malarz, Krzysztof, Gronek, Piotr and Kulakowski, Krzysztof (2011) ‘Zaller-Deffuant Model of Mass Opinion’, Journal of Artificial Societies and Social Simulation, 14(1), article 2, <https://www.jasss.org/14/1/2.html>. doi:10.18564/jasss.1719

Motsch, Sebastien and Tadmor, Eitan (2014) ‘Heterophilious Dynamics Enhances Consensus’, SIAM Review, 56(4), pp. 577-621. doi:10.1137/120901866

Proskurnikov, Anton V., Matveev, Alexey S. and Cao, Ming (2016) ‘Opinion Dynamics in Social Networks With Hostile Camps: Consensus vs. Polarization’, IEEE Transactions on Automatic Control, 61(6), June, pp. 1524-1536. doi:10.1109/TAC.2015.2471655

Squazzoni, Flaminio and Casnici, Niccolò (2013) ‘Is Social Simulation a Social Science Outstation? A Bibliometric Analysis of the Impact of JASSS’, Journal of Artificial Societies and Social Simulation, 16(1), article 10, <http://jasss.soc.surrey.ac.uk/16/1/10.html>. doi:10.18564/jasss.2192

Ureña, Raquel, Chiclana, Francisco, Melançon, Guy and Herrera-Viedma, Enrique (2019) ‘A Social Network Based Approach for Consensus Achievement in Multiperson Decision Making’, Information Fusion, 47, May, pp. 72-87. doi:10.1016/j.inffus.2018.07.006

Van Der Linden, Sander, Leiserowitz, Anthony, Rosenthal, Seth and Maibach, Edward (2017) ‘Inoculating the Public against Misinformation about Climate Change’, Global Challenges, 1(2), 27 February, article 1600008. doi:10.1002/gch2.201600008

Xiong, Fei, Wang, Ximeng, Pan, Shirui, Yang, Hong, Wang, Haishuai and Zhang, Chengqi (2020) ‘Social Recommendation With Evolutionary Opinion Dynamics’, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50(10), October, pp. 3804-3816. doi:10.1109/TSMC.2018.2854000

Zhang, Zhen, Gao, Yuan and Li, Zhuolin (2020) ‘Consensus Reaching for Social Network Group Decision Making by Considering Leadership and Bounded Confidence’, Knowledge-Based Systems, 204, 27 September, article 106240. doi:10.1016/j.knosys.2020.106240


Chattoe-Brown, E. (2021) Does It Take Two (And A Creaky Search Engine) To Make An Outstation? Hunting Highly Cited Opinion Dynamics Articles in the Journal of Artificial Societies and Social Simulation (JASSS). Review of Artificial Societies and Social Simulation, 19th August 2021. https://rofasss.org/2021/08/19/outstation/


 

The role of population scale in compartmental models of COVID-19 transmission

By Christopher J. Watts 1,*, Nigel Gilbert 2, Duncan Robertson 3,4, Laurence T. Droy 5, Daniel Ladley 6 and Edmund Chattoe-Brown 5

*Corresponding author. 1: 2 Manor Farm Cottages, Waresley, Sandy, SG19 3BZ, UK; 2: Centre for Research in Social Simulation (CRESS), University of Surrey, Guildford GU2 7XH, UK; 3: School of Business and Economics, Loughborough University, Loughborough, UK; 4: St Catherine’s College, University of Oxford, Oxford, UK; 5: School of Media, Communication and Sociology, University of Leicester, UK; 6: University of Leicester School of Business, University of Leicester, Leicester, LE1 7RH, UK

(A contribution to the: JASSS-Covid19-Thread)

Compartmental models of COVID-19 transmission have been used to inform policy, including the decision to temporarily reduce social contacts among the general population (“lockdown”). One such model is a Susceptible-Exposed-Infectious-Removed (SEIR) model developed by a team at the London School of Hygiene and Tropical Medicine (hereafter, “the LSHTM model”, Davies et al., 2020a). This was used to evaluate the impact of several proposed interventions on the numbers of cases, deaths, and intensive care unit (ICU) hospital beds required in the UK. We wish here to draw attention to behaviour common to this and other compartmental models of diffusion, namely their sensitivity to the size of the population simulated and the number of seed infections within that population. This sensitivity may compromise any policy advice given.

We therefore describe below the essential details of the LSHTM model, our experiments on its sensitivity, and why they matter to its use in policy making.

The LSHTM model

Compartmental models of disease transmission divide members of a population according to their disease states, including at a minimum people who are “susceptible” to a disease, and those who are “infectious”. Susceptible individuals make social contact with others within the same population at given rates, with no preference for the other’s disease state, spatial location, or social networks (the “universal mixing” assumption). Social contacts result in infections with a chance proportional to the fraction of the population who are currently infectious. Perhaps to reduce the implausibility of the universal mixing assumption, the LSHTM model is run for each of 186 county-level administrative units (“counties”, having an average size of 357,000 people), instead of a single run covering the whole UK population (66.4 million). Each county receives the same seed infection schedule: two new infections per day for 28 days. The 186 county time series are then summed to form a time series for the UK. There are no social contacts between counties, and the 186 county-level runs are independent of each other. Outputs from the model include total and peak cases and deaths, ICU and non-ICU hospital bed occupancy, and the time to peak cases, all reported for the UK as a whole.
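The structure just described (independent, identically seeded deterministic runs per county, summed into a national series) is easy to capture in outline. The sketch below is not the LSHTM code, which is far richer; its parameter values are illustrative assumptions only. It nonetheless reproduces the feature at issue: identically seeded counties of different sizes peak on different days, and the summed series inherits the timing of whichever counties dominate it.

import numpy as np

def seir_cases(N, beta=0.6, a=0.25, g=0.2,
               seed_per_day=2, seed_days=28, days=365):
    """Deterministic daily-step SEIR run for one county of size N,
    seeded like the LSHTM runs; returns the daily new-case series.
    (Seeds add exposures without depleting S, a simplification.)"""
    S, E, I = float(N), 0.0, 0.0
    cases = []
    for t in range(days):
        seed = seed_per_day if t < seed_days else 0.0
        new_E = beta * S * I / N + seed   # new exposures plus seeding
        new_I = a * E                     # E -> I flow = daily new cases
        S -= beta * S * I / N
        E += new_E - new_I
        I += new_I - g * I
        cases.append(new_I)
    return np.array(cases)

counties = [2_242, 357_000, 2_900_000]    # Isles of Scilly to West Midlands
runs = [seir_cases(N) for N in counties]
for N, r in zip(counties, runs):
    print(f"N = {N:>9,}: cases peak on day {int(r.argmax())}")

national = sum(runs)                      # summing counties, as LSHTM do
print(f"summed series peaks on day {int(national.argmax())}")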

Interventions are modelled as 12-week reductions in contact rates, and, in the first experiment, scheduled to commence 6 weeks prior to the peak in UK cases with no intervention. Further experiments shift the start of the intervention, and trigger the intervention upon reaching a given number of ICU beds, rather than a specific time.

Studying sensitivity to population size

The 186 counties vary in their population sizes, from the Isles of Scilly (2,242 people) to the West Midlands (2.9 million). We investigated whether the variation in population size led to differences in model behaviour. The LSHTM model files were cloned from https://github.com/cmmid/covid-UK, while the data analysis was performed using our own scripts posted at https://github.com/innovative-simulator/PopScaleCompartmentModels.

[Figure: time to peak cases (weeks) plotted against population size on a log scale, with one series per seeding schedule; peak week rises linearly with the logarithm of population, and changing the seeding shifts the line uniformly.]

The figure above shows the results of running the LSHTM model with populations of various sizes, each point being an average of 10 repetitions. The time, in weeks, to the peak in cases forms a linear trend with the base-10 logarithm of population. A linear regression line fitted to these points gives Peak Week = 2.70 log10(Population) − 2.80, with R² = 0.999.
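To gauge the size of this effect, substitute the county sizes quoted above into the fitted line: an average county of 357,000 people peaks at 2.70 × log10(357,000) − 2.80 ≈ 12.2 weeks, the Isles of Scilly (2,242 people) at about 6.2 weeks, and the West Midlands (2.9 million) at about 14.6 weeks. Identically seeded counties can thus peak more than eight weeks apart.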

To help understand this relationship, we then compared the seeding used by the LSHTM team, i.e. 2 infectious persons per day for 28 days, to two forms of reduced seeding: 1 per day for 28 days, and 2 per day for 14 days. Halving the seeding is similar in effect, but not identical, to doubling the population size.

Deterministic versions of other compartmental models of transmission (SIR, SEIR, SI) confirmed the relation between population size and time of occurrence to be a common feature of such models. See the R and Excel files at: https://github.com/innovative-simulator/PopScaleCompartmentModels .

For the simplest, the SI model, the stock of infectious people is described by the logistic function:

I(t) = N / (1 + exp(−uC(t − t*)))

Here N is the population size, u the susceptibility, and C the contact rate. If I(0) = s, the number of seed infections, then it can be shown that the peak in new infections (at which I(t*) = N/2) occurs at time

t* = ln(N/s − 1) / (uC)

Hence, for N/s >> 1, the time to peak cases, t*, correlates well with log10(N/s).
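This closed-form result is easy to check numerically. In the sketch below, the values of u and C and the lumping of all 56 seeds at t = 0 are illustrative assumptions; the Euler-integrated peak matches t* = ln(N/s − 1)/(uC) closely. The sketch also makes the seeding comparison above exact for this simple model: because t* depends only on the ratio N/s, halving the seeds and doubling the population give identical peak times here, consistent with the “similar but not identical” effect in the richer LSHTM model.

import math

def si_peak_time(N, s, u=0.3, C=1.0, dt=0.001):
    """Euler-integrate dI/dt = u*C*I*(N - I)/N from I(0) = s and
    return the time at which the rate of new infections peaks."""
    I, t, best_t, best_rate = float(s), 0.0, 0.0, 0.0
    while I < 0.999 * N:
        rate = u * C * I * (N - I) / N
        if rate > best_rate:
            best_rate, best_t = rate, t
        I += rate * dt
        t += dt
    return best_t

for N in (2_242, 357_000, 2_900_000):
    s = 56   # two seed infections a day for 28 days, lumped at t = 0
    print(N, round(si_peak_time(N, s), 2),
          round(math.log(N / s - 1) / (0.3 * 1.0), 2))

# t* depends only on the ratio N/s, so halving the seeds and doubling
# the population give identical peak times in this simple model:
print(round(si_peak_time(357_000, 28), 2),
      round(si_peak_time(714_000, 56), 2))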

As well as peak cases, analogous sensitivity was found for the timing of peaks in infections and hospital admissions, and for reaching critical levels, such as the hospital bed capacity as a proportion of the population. In contrast, the heights of peaks, and totals of cases, deaths and beds were constant percentages of population when population size was varied.

Why the unit of population matters

Davies et al. (2020a) make forecasts of both the level of peak cases and the timing of their occurrence. Despite showing that two counties can vary in their results (Davies et al., 2020a, p. 6), and mentioning in the supplementary material some effects of changing the seeding schedule (Davies et al., 2020b, p. 5), they do not mention any sensitivity to population size. But, as we have shown here, given the same number and timing of seed infections, the county with the smallest population will peak in cases earlier than the one with the largest. This sensitivity to population size affects the arguments of Davies et al. in several ways.

Firstly, Davies et al. produce their forecasts for the UK by summing county-level time series. But counties with out-of-sync peaks will sum to produce a shorter, flatter peak for the UK than would have been achieved by synchronous county peaks. Thus the forecasts of peak cases for the UK are being systematically biased down.

Secondly, timing is important for the effectiveness of the interventions. As Davies et al. note in relation to their experiment on shifting the start time of the intervention, an intervention can be too early or too late. It is too early if, when it ends after 12 weeks, the majority of the population is still susceptible to any remaining infectious cases, and a serious epidemic can still occur. At the other extreme, an intervention can be too late if it starts when most of the epidemic has already occurred.

A timing problem also threatens if the intervention is triggered by the occupancy of ICU beds reaching some critical level. This level will be reached for the UK or average county later than for a small county. Thus the problem extends beyond the timing of peaks to affect other aspects of a policy supported by the model.

Our results imply that an intervention timed optimally for a UK-level, or average county-level, cases peak, as well as an intervention triggered by a UK-level beds occupancy threshold, may be less effective for counties with far-from-average sizes.

There are multiple ways of resolving these issues, including re-scaling seed infections in line with size of population unit, simulating the UK directly rather than as a sum of counties, and rejecting compartmental models in favour of network- or agent-based models. A discussion of the respective pros and cons of these alternatives requires a longer paper. For now, we note that compartmental models remain quick and cheap to design, fit, and study. The issues with Davies et al. (2020a) we have drawn attention to here highlight (1) the importance of adequate sensitivity testing, (2) the need for care when choosing at which scale to model and how to seed an infection, and (3) the problems that can stem from uniform national policy interventions, rather than ones targeted at a more local level.

References

Davies, N. G., Kucharski, A. J., Eggo, R. M., Gimma, A., Edmunds, W. J., Jombart, T., . . . Liu, Y. (2020a). Effects of non-pharmaceutical interventions on COVID-19 cases, deaths, and demand for hospital services in the UK: a modelling study. The Lancet Public Health, 5(7), e375-e385. doi:10.1016/S2468-2667(20)30133-X

Davies, N. G., Kucharski, A. J., Eggo, R. M., Gimma, A., Edmunds, W. J., Jombart, T., . . . Liu, Y. (2020b). Supplement to Davies et al. (2020a). https://www.thelancet.com/cms/10.1016/S2468-2667(20)30133-X/attachment/cee85e76-cffb-42e5-97b6-06a7e1e2379a/mmc1.pdf


Watts, C.J., Gilbert, N., Robertson, D., Droy, L.T., Ladley, D and Chattoe-Brown, E. (2020) The role of population scale in compartmental models of COVID-19 transmission. Review of Artificial Societies and Social Simulation, 14th August 2020. https://rofasss.org/2020/08/14/role-population-scale/


 

A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation

By Edmund Chattoe-Brown

The Motivation

Research that confronts models with data is still sufficiently rare that it is hard to get a representative sense of how it is done and how convincing the results are simply by “background reading”. One way to advance good quality empirical modelling is therefore simply to make it more visible in quantity. With this in mind I have constructed (building on the work of Angus and Hassani-Mahmooei 2015) the first version of a bibliography listing all ABM attempting empirical validation in JASSS between 1998 and 2019 (along with a few other examples) – which generates 68 items in all. Each entry gives a full reference and also describes what comparisons are made and where in the article they occur. In addition, the document contains a provisional bibliography of articles giving advice or technical support to validation and lists three survey articles that categorise large samples of simulations by their relationships to data (which served as actual or potential sources for the bibliography).

With thanks to Bruce Edmonds, this first version of the bibliography has been made available as a Centre for Policy Modelling Discussion Paper, CPM-20-216, which can be downloaded from http://cfpm.org/discussionpapers/256.

The Argument

It may seem quite surprising to focus only on validation initially but there is an argument (Chattoe-Brown 2019) which says that this is a more fundamental challenge to the quality of a model than calibration. A model that cannot track real data well, even when its parameters are tuned to do so, is clearly a fundamentally inadequate model. Only once some measure of validation has been achieved can we decide how “convincing” it is (comparing independent empirical calibration with parameter tuning, for example). Arguably, without validation, we cannot really be sure whether a model tells us anything about the real world at all (no matter how plausible any narrative about its assumptions may appear). This can be seen as a consequence of the arguments about complexity routinely made by ABM practitioners, as the plausibility of the assumptions does not map intuitively onto the plausibility of the outputs.

The Uses

Although these are covered in the preface to the bibliography in greater detail, such a sample has a number of scientific uses which I hope will form the basis for further research.

  • To identify (and justify) good and bad practice, thus promoting good practice.
  • To identify (and then perhaps fill) gaps in the set of technical tools needed to support validation (for example involving particular sorts of data).
  • To test the feasibility and value of general advice offered on validation to date and refine it in the face of practical challenges faced by analysis of real cases.
  • To allow new models to demonstrably outperform the levels of validation achieved by existing models (thus creating the possibility for progressive empirical research in ABM).
  • To support agreement about the effective use of the term validation and to distinguish it from related concepts (like verification) and potentially unhelpful (for example ambiguous or rhetorically loaded) uses.

The Plan

Because of the labour involved and the diversity of fields in which ABM have now been used over several decades, an effective bibliography of this kind cannot be the work of a single author (or even a team of authors). My plan is thus to solicit (fully credited) contributions and regularly release new versions of the bibliography – with new co-authors as appropriate. (This publishing model is intended to maintain the quality and suitability for citation of the resulting document relative to the anarchy that sometimes arises in genuine communal authorship!) All of the following contributions will be gratefully accepted for the next revision (on which I am already working myself in any event).

  • References to new surveys or literature reviews that categorise significant samples of ABM research by their relationship to data.
  • References for proposed new entries to the bibliography in as much detail as possible.
  • Proposals to delete incorrectly categorised entries. (There are a small number of cases where I have found it very difficult to establish exactly what the authors did in the name of validation, partly as a result of confusing or ambiguous terminology.)
  • Proposed revisions to incorrect or “unfair” descriptions of existing entries (ideally by the authors of those pieces).
  • Offers of collaboration for a proposed companion bibliography on calibration. Ultimately this will lead to a (likely very small) sample of calibrated and validated ABM (which are often surprisingly little cited given their importance to the credibility of the ABM “project” – see, for example, Chattoe-Brown 2018a, 2018b).

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16. <http://jasss.soc.surrey.ac.uk/18/4/16.html> doi:10.18564/jasss.2952

Chattoe-Brown, Edmund (2018a) ‘Query: What is the Earliest Example of a Social Science Simulation (that is Nonetheless Arguably an ABM) and Shows Real and Simulated Data in the Same Figure or Table?’ Review of Artificial Societies and Social Simulation, 11 June. https://rofasss.org/2018/06/11/ecb/

Chattoe-Brown, Edmund (2018b) ‘A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974)’, Review of Artificial Societies and Social Simulation, 1 June. https://rofasss.org/2018/06/01/ecb/

Chattoe-Brown, Edmund (2019) ‘Agent Based Models’, in Atkinson, Paul, Delamont, Sara, Cernat, Alexandru, Sakshaug, Joseph W. and Williams, Richard A. (eds.) SAGE Research Methods Foundations. doi:10.4135/9781526421036836969


Chattoe-Brown, E. (2020) A Bibliography of ABM Research Explicitly Comparing Real and Simulated Data for Validation. Review of Artificial Societies and Social Simulation, 12th June 2020. https://rofasss.org/2020/06/12/abm-validation-bib/


 

The Policy Context of Covid19 Agent-Based Modelling

By Edmund Chattoe-Brown

(A contribution to the: JASSS-Covid19-Thread)

In the recent discussions about the role of ABM and COVID, there seems to be an emphasis on the purely technical dimensions of modelling. This obviously involves us “playing to our strengths” but unfortunately it may reduce the effectiveness of the policy contributions we could potentially make. Here are three contextual aspects of policy, offered as a contrast or corrective.

What is “Good” Policy?

Obviously from a modelling perspective good policy involves achieving stated goals. So a model that suggests a lower death rate (or less taxing of critical care facilities) under one intervention rather than another is a potential argument for that intervention. (Though of course how forceful the argument is depends on the quality of the model.) But the problem is that policy is predominantly a political and not a technical process (related arguments are made by Edmonds 2020). The actual goals by which a policy is evaluated may not be limited to the obvious technical ones (even if that is what we hear most about in the public sphere) and, most problematically, there may be goals which policy makers are unwilling to disclose. Since we do not know what these goals are, we cannot tell whether their ends are legitimate (having to negotiate privately with the powerful to achieve anything) or less so (getting re-elected as an end in itself).

Of course, by its nature (being based on both power and secrecy), this problem may be unfixable but even awareness of it may change our modelling perspective in useful ways. Firstly, when academic advice is accused of irrelevance, the academics can only ever be partly to blame. You can only design good policy to the extent that the policy maker is willing to tell you the full evaluation function (to the extent that they know it, of course). Obviously, if policy is being measured by things you can’t know about, your advice is at risk of being of limited value. Secondly, with this in mind, we may be able to gain some insight into the hidden agenda of policy by looking at what kind of suggestions tend to be accepted and rejected. Thirdly, once we recognise that there may be “unknown unknowns” we can start to conjecture intelligently about what these could be and take some account of them in our modelling strategies. For example, how many epidemic models consider the financial costs of interventions even approximately? Is the idea that we can and will afford whatever it takes to reduce deaths a blind spot of the “medical model”?

When and How to Intervene

There used to be an (actually rather odd) saying: “You can’t get a baby in a month by making nine women pregnant”. There has been a huge upsurge in interest regarding modelling and its relationship to policy since the start of the COVID crisis (of which this theme is just one example) but realising the value of this interest currently faces significant practical problems. Data collection is even harder than usual (as is scholarship in general), there is a limit to how fast good research can ever be done, peer review takes time and so on. The question here is whether any amount of rushing around at the present moment will compensate for neglected activities when scholarship was easier and had more time (an argument also supported by Bithell 2018). The classic example is the muttering in the ABM community about the Ferguson model being many thousands of lines of undocumented C code. Now we are in a crisis, even making the model available was a big ask, let alone making it easier to read so that people might “heckle” it. But what stopped it being available, documented, externally validated and so on before COVID? What do we need to do so that next time there is a pandemic crisis, which there surely will be, “we” (the modelling community very broadly defined) are able to offer the government a “ready” model that has the best features of various modelling techniques, evidence of unfudgeable quality against data, relevant policy scenarios and so on? (Specifically, how will ABM make sure it deserves to play a fit part in this effort?) Apart from the models themselves, what infrastructures, modelling practices, publishing requirements and so on do we need to set up and get working well while we have the time? In practice, given the challenges of making effective contributions right now (and the proliferation of research that has been made available without time for peer review may be actively harmful), this perspective may be the most important thing we can realistically carry into the “post lockdown” world.

What Happens Afterwards?

ABM has taken such a long time to “get to” policy based on data that looking further than the giving of such advice simply seems to have been beyond us. But since policy is what actually happens, we have a serious problem with counterfactuals. If the government decides to “flatten the curve” rather than seek “herd immunity” then we know how the policy implemented relates to the model “findings” (for good or ill) but not how the policy that was not implemented does. Perhaps the outturn of the policy that looked worse in the model would actually have been better had it been implemented?

Unfortunately (this is not a typo), we are about to have an unprecedentedly large social data set of comparative experiments in the nature and timing of epidemiological interventions, but ABM needs to be ready and willing to engage with this data. I think that ABM probably has a unique contribution to make in “endogenising” the effects of policy implementation and compliance (rather than seeing these, from a “model fitting” perspective, as structural changes to parameter values) but to make this work, we need to show much more interest in data than we have to date.

In 1971, Dutton and Starbuck, in a worryingly neglected article (cited only once in JASSS since 1998 and even then not in respect of model empirics) reported that 81% of the models they surveyed up to 1969 could not achieve even qualitative measurement in both calibration and validation (with only 4% achieving quantitative measurement in both). As a very rough comparison (but still the best available), Angus and Hassani-Mahmooei (2015) showed that just 13% of articles in JASSS published between 2010 and 2012 displayed “results elements” both from the simulation and using empirical material (but the reader cannot tell whether these are qualitative or quantitative elements or whether their joint presence involves comparison as ABM methodology would indicate). It would be hard to make the case that the situation in respect to ABM and data has therefore improved significantly in 4 decades and it is at least possible that it has got worse!

For the purposes of policy making (in the light of the comments above), what matters of course is not whether the ABM community believes that models without data continue to make a useful contribution but whether policy makers do.

References

Angus, S. D. and Hassani-Mahmooei, B. (2015) “Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012, Journal of Artificial Societies and Social Simulation, 18(4), 16. doi:10.18564/jasss.2952

Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb

Dutton, John M. and Starbuck, William H. (1971) Computer Simulation Models of Human Behavior: A History of an Intellectual Technology. IEEE Transactions on Systems, Man, and Cybernetics, SMC-1(2), 128–171. doi:10.1109/tsmc.1971.4308269

Edmonds, B. (2020) What more is needed for truly democratically accountable modelling? Review of Artificial Societies and Social Simulation, 2nd May 2020. https://rofasss.org/2020/05/02/democratically-accountable-modelling/


Chattoe-Brown, E. (2020) The Policy Context of Covid19 Agent-Based Modelling. Review of Artificial Societies and Social Simulation, 4th May 2020. https://rofasss.org/2020/05/04/policy-context/


 

Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM

By Sebastian Achter, Melania Borit, Edmund Chattoe-Brown, Christiane Palaretti and Peer-Olaf Siebers

The initiative presented below arose from a Lorentz Center workshop on Integrating Qualitative and Quantitative Evidence using Social Simulation (8-12 April 2019, Leiden, the Netherlands). At the beginning of this workshop, the attendees divided themselves into teams aiming to work on specific challenges within the broad domain of the workshop topic. Our team took up the challenge of looking at “Rigour, Transparency, and Reuse”. The aim that emerged from our initial discussions was to create a framework for augmenting rigour and transparency (RAT) of data use in ABM when designing, analysing, and publishing such models.

One element of the framework that the group worked on was a roadmap of the modelling process in ABM, with particular reference to the use of different kinds of data. This roadmap was used to generate the second element of the framework: a protocol consisting of a set of questions which, if answered by the modeller, would ensure that the published model was as rigorous and transparent in terms of data use as it needs to be in order for the reader to understand and reproduce it.

The group (which had diverse modelling approaches and spanned a number of disciplines) recognised the challenges of this approach and much of the week was spent examining cases and defining terms so that the approach did not assume one particular kind of theory, one particular aim of modelling, and so on. To this end, we intend that the framework should be thoroughly tested against real research to ensure its general applicability and ease of use.

The team was also very keen not to “reinvent the wheel”, but to try to develop the RAT approach (in connection with data use) to augment and “join up” existing protocols or documentation standards for specific parts of the modelling process. For example, the ODD protocol (Grimm et al. 2010) and its variants are generally accepted as the established way of documenting ABM but do not request rigorous documentation/justification of the data used for the modelling process.

The plan to move forward with the development of the framework is organised around three journal articles and associated dissemination activities:

  • A literature review of best (data use) documentation and practice in other disciplines and research methods (e.g. PRISMA – Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
  • A literature review of available documentation tools in ABM (e.g. ODD and its variants, DOE, the “Info” pane of NetLogo, EABSS)
  • An initial statement of the goals of RAT, the roadmap, the protocol and the process of testing these resources for usability and effectiveness
  • A presentation, poster, and round table at SSC 2019 (Mainz)

We would appreciate suggestions for items that should be included in the literature reviews, “beta testers” and critical readers for the roadmap and protocol (from as many disciplines and modelling approaches as possible), reactions (whether positive or negative) to the initiative itself (including joining it!) and participation in the various activities we plan at Mainz. If you are interested in any of these roles, please email Melania Borit (melania.borit@uit.no).

References

Grimm, V., Berger, U., DeAngelis, D. L., Polhill, J. G., Giske, J. and Railsback, S. F. (2010) ‘The ODD Protocol: A Review and First Update’, Ecological Modelling, 221(23):2760–2768. doi:10.1016/j.ecolmodel.2010.08.019


Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. and Siebers, P.-O.(2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/


 

Query: What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table?

By Edmund Chattoe-Brown

On one level this is a straightforward request. The earliest convincing example I have found is Hägerstrand (1965, p. 381), an article that seems to be undeservedly neglected because it is also the earliest example of a simulation I have been able to identify that demonstrates independent calibration and validation (Gilbert and Troitzsch 2005, p. 17).[1]

However, my attempts to find the earliest examples are motivated by two more substantive issues (which may help to focus the search for earlier candidates). Firstly, what is the value of a canon (and giving due intellectual credit) for the success of ABM? The Schelling model is widely known and taught but it is not calibrated and validated. If a calibrated and validated model already existed in 1965, should it not be more widely cited? If we mostly cite a non-empirical model, might we give the impression that this is all that ABM can do? Also, failing to cite an article means that it cannot form the basis for debate. Is the Hägerstrand model in some sense “better” or “more important” than the Schelling model? This is a discussion we cannot have without awareness of the Hägerstrand model in the first place.

The second (and related) point regards the progress made by ABM and how those outside the community might judge it. Looking at ABM research now, the great majority of models appear to be non-empirical (Angus and Hassani-Mahmooei 2015, Table 5 in section 4.5). Without citations of articles like Hägerstrand (and even Clarkson and Meltzer), the non-expert reader of ABM might be led to conclude that it is too early (or too difficult) to produce such calibrated and validated models. But if this was done 50 years ago, and is not being much publicised, might we be using up our credibility as a “new” field still finding its feet? If there are reasons for not doing, or not wanting to do, what Hägerstrand managed, let us be obliged to be clear what they are and not simply hide behind widespread neglect of such examples.[2]

Notes

  1. I have excluded an even earlier example of considerable interest (Clarkson and Meltzer 1960, which also includes an attempt at calibration and validation but has never been cited in JASSS) for two reasons. Firstly, it deals with the modelling of a single agent and therefore involves no interaction. Secondly, it appears that the validation may effectively be using the “same” data as the calibration, in that protocols elicited from an investment officer regarding portfolio selection are then tested against choices made by that same investment officer.
  2. And, of course, this is a vicious circle because in our increasingly pressurised academic world, people only tend to read and cite what is already cited.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16, <http://jasss.soc.surrey.ac.uk/18/4/16.html>. doi:10.18564/jasss.2952

Clarkson, Geoffrey P. and Meltzer, Allan H. (1960) ‘Portfolio Selection: A Heuristic Approach’, The Journal of Finance, 15(4), December, pp. 465-480.

Gilbert, Nigel and Troitzsch, Klaus G. (2005) Simulation for the Social Scientist, 2nd edition (Buckingham: Open University Press).

Hägerstrand, Torsten (1965) ‘A Monte Carlo Approach to Diffusion’, Archives Européennes de Sociologie, 6(1), May, Special Issue on Simulation in Sociology, pp. 43-67.


Chattoe-Brown, E. (2018) What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table? Review of Artificial Societies and Social Simulation, 11th June 2018. https://rofasss.org/2018/06/11/ecb/


 

A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974)

By Edmund Chattoe-Brown

Since this is a new venture, we need to establish conventions. Since JASSS has been running since 1998 (twenty years!) it is reasonable to argue that something un-cited in JASSS throughout that period has effectively been forgotten by the ABM community. This contribution by Grémy is actually a single chapter in a book otherwise by Boudon (a bibliographical oddity that may have contributed to its neglect; Grémy also appears to have published mostly in French, which may also have had an effect. An English summary of his contribution to simulation might be another useful item for RofASSS). Boudon gets 6 hits on the JASSS search engine (as of 31.05.18), none of which mention simulation, and Gremy gets no hits (as does Grémy: unfortunately it is hard to tell how online search engines “cope with” accents and thus whether this is a “real” result).

Since this book is still readily available as a mass-market paperback, I will not reprise the argument of the simulation here (and its limitations relative to existing ABM methodology could be a future RofASSS contribution). Nonetheless, even approximately empirical modelling in the mid-seventies is worthy of note and the article is early in saying other important things (for example about simulation being able to avoid “technical assumptions” – made for solubility rather than realism).

The point of this contribution is to draw attention to an argument that I have only heard twice (and only found once in print) namely that we should look at the form of real data as an initial justification for using ABM at all (please correct me if there are earlier or better examples). Grémy (1974, p. 210) makes the point that initial incongruities between the attitudes that people hold (altruistic versus selfish) and their career choices (counsellor versus corporate raider) can be resolved in either direction as time passes (he knows this because Boudon analysed some data collected by Rosenberg at two points from US university students) as well as remaining unresolved and, as such, cannot readily be explained by some sort of “statistical trend” (that people become more selfish as they get older or more altruistic as they become more educated). He thus hypothesises (reasonably it seems to me) that the data requires a model of some sort of dynamic interaction process that Grémy then simulates, paying some attention to their survey results both in constraining the model and analysing its behaviour.

This seems to me an important methodological practice to rescue from neglect. (It is widely recognised anecdotally that people tend to use the research methods they know and like rather than the ones that are suitable.) Elsewhere (Chattoe-Brown 2014), inspired by this argument, I have shown how even casually accessed attitude change data really looks nothing like the output of the (very popular) Zaller-Deffuant model of opinion change (very roughly, 228 hits in JASSS for Deffuant, 8 for Zaller and 9 for Zaller-Deffuant, though hyphens sometimes produce unreliable results for online search engines too). The attitude of the ABM community to data seems to be rather uncomfortable. Perhaps support in theory and neglect in practice would sum it up (Angus and Hassani-Mahmooei 2015, Table 5 in section 4.5). But if our models can’t even “pass first base” with existing real data (let alone be calibrated and validated) should we be too surprised if what seems plausible to us does not seem plausible to social scientists in substantive domains (and thus diminishes their interest in ABM as a “real method”)? Even if others in the ABM community disagree with my emphasis on data (and I know that they do) I think this is a matter that should be properly debated rather than just left floating about in coffee rooms (as such, this is what we intend RofASSS to facilitate). As W. C. Fields is reputed to have said (though actually the phrase appears to have been common currency), we would wish to avoid ABM being just “Another good story ruined by an eyewitness”.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4):16.

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1):16.

Gremy, Jean-Paul (1974) ‘Simulation Techniques’, in Boudon, Raymond, The Logic of Sociological Explanation (Harmondsworth: Penguin), chapter 11:209-227.


Chattoe-Brown, E. (2018) A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974). Review of Artificial Societies and Social Simulation, 1st June 2018. https://rofasss.org/2018/06/01/ecb/