
RofASSS to encourage reproduction reports and reviews of old papers & books

Reproducing simulation models is essential for verifying and critiquing them. This involves considerably more work than one might expect (Axtell et al. 1996) and can reveal surprising flaws, even in the simplest of models (e.g. Edmonds & Hales 2003). Such reproduction is especially vital if the model outcomes are likely to affect people’s lives (Chattoe-Brown et al. 2021).

Whilst substantial pieces of work – where there is extensive analysis or extension – can be submitted to JASSS/CMOT, some such reports might be much simpler and not justify a full journal paper. Thus RofASSS has decided to encourage researchers to submit reports of reproductions here – however simple or complicated.

Similarly, JASSS, CMOT, etc. do publish book reviews, but these tend to be of recent books. Although new books are of obvious interest to those at the cutting edge of research, important papers and books are often forgotten or overlooked. At RofASSS we would like to encourage reviews of any relevant book or paper, however old.

References

Axtell, R., Axelrod, R., Epstein, J. M., & Cohen, M. D. (1996). Aligning simulation models: A case study and results. Computational & Mathematical Organization Theory, 1, 123-141. DOI: 10.1007/BF01299065

Edmonds, B., & Hales, D. (2003). Replication, replication and replication: Some hard lessons from model alignment. Journal of Artificial Societies and Social Simulation, 6(4), 11. https://jasss.soc.surrey.ac.uk/6/4/11.html

Chattoe-Brown, E., Gilbert, N., Robertson, D. A., & Watts, C. (2021). Reproduction as a Means of Evaluating Policy Models: A Case Study of a COVID-19 Simulation. medRxiv 2021.01.29.21250743. DOI: 10.1101/2021.01.29.21250743

Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”

By Frank Dignum

This is a reply to the review in JASSS (Chattoe-Brown 2021) of the book (Dignum 2021).

Before responding to some of Edmund’s specific concerns, I would like to thank him for the thorough review. I am especially happy with his conclusion that the book is solid enough to make it a valuable contribution to scientific progress in modelling crises. That was the main aim of the book, and it seems that it has been achieved. I want to reiterate what we already remarked in the book: we do not claim that we have the best or only way of developing an Agent-Based Model (ABM) for crises. Nor do we claim that our simulations were without limitations. But we do think it is an extensive foundation from which others can start, whether by picking up some bits and pieces, deviating from it, or extending it in specific ways.

The concerns that Edmund expresses are certainly valid. I agree with some of them, but will nuance others. First of all, there is the concern that we seem to abandon the NetLogo implementation and move to Repast. This does not make the ABM itself any less valid! In itself it is also an important finding: it is not possible to scale such a complex model in NetLogo beyond around two thousand agents. This is not just a limitation of our particular implementation, but a more general limitation of the platform. It leads to the important challenge of getting more computer scientists involved in developing platforms for social simulation that both support the modelers adequately and provide efficient, scalable implementations.

It is completely true that the sheer size of the model and the results makes it difficult to trace the importance and validity of every factor’s effect on the results. We have tried our best to highlight the most important aspects every time, but this leaves open the question of whether we made the right selection of highlighted aspects. As an illustration, we spent two months justifying our results on the effectiveness of the track-and-trace apps. We basically concluded that we need much better integrated analysis tools in the simulation platform. NetLogo is geared towards creating one simulation scenario, running the simulation and analyzing the results based on a few parameters. This is no longer sufficient when we have a model with which we can create many scenarios and many parameters that influence a result. We used R to interpret the flood of data produced with every scenario, but R is not the most user-friendly tool, nor is it specifically designed for analyzing data from social simulations.
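As a purely illustrative sketch (not the actual ASSOCC analysis pipeline), the following Python fragment shows the kind of cross-scenario aggregation meant here: reading the output files of many scenario runs and comparing an outcome measure across parameter settings. The file pattern and the column names ("scenario", "app_adoption", "infected") are assumptions made for the example only.

```python
# Minimal sketch (not the ASSOCC tooling): aggregate per-scenario simulation
# output and compare an outcome measure across parameter settings.
# File layout and column names are hypothetical, for illustration only.
import glob
import pandas as pd

def load_runs(pattern="results/scenario_*.csv"):
    """Read every scenario output file into one table, tagging each row
    with the file it came from."""
    frames = []
    for path in glob.glob(pattern):
        df = pd.read_csv(path)
        df["source_file"] = path
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

def summarise(runs):
    """Mean, spread and run count of final infections per
    (scenario, app adoption) cell, so repeated runs can be compared."""
    return (runs.groupby(["scenario", "app_adoption"])["infected"]
                .agg(["mean", "std", "count"])
                .reset_index())

if __name__ == "__main__":
    runs = load_runs()
    print(summarise(runs))
```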

Let me jump to Edmund’s third concern and link it to the analysis of the results as well. While we tried to justify the results of our simulation on the effectiveness of the track-and-trace app, we compared our simulation with an epidemiologically based model. This is described in chapter 12 of the book. Here we encountered a difference in the assumed number of contacts per day that a person has with other persons. One can take the figures of 8 or 13 from empirical work, as quoted by Edmund as well, and use them in the model. However, the dispute is not about the number of contacts a person has per day, but about what counts as a contact! For the COVID-19 simulations, standing next to a person in a supermarket queue for five minutes can count as a contact, while such a contact is not a meaningful contact in the cited literature. Thus, we see that what we take as empirically validated numbers might not at all be the right ones for our purpose. We have tried to justify all the values of parameters and outcomes in the context for which the simulations were created. We have also done quite a few sensitivity analyses, not all of which we reported, simply to keep the volume of the book reasonable. Although we think we did a proper job in justifying all results, that does not mean that one cannot hold different opinions on the values that some parameters should have. It would be very good to check the influence of changes in these parameters on the results. This would also advance scientific insight into the usefulness of complex models like the one we made!

I really think that an ABM crisis response should be institutional. That does not mean that one institution determines the best ABM, but rather that the ABM put forward by that institution is the result of a continuous debate among scientists working on ABMs for that type of crisis. For us, one of the more important outcomes of the ASSOCC project is that we really need much better tools to support the types of simulations that are needed in a crisis situation. However, it is very difficult to develop these tools as a single group. A lot of the effort needed is not publishable and thus not valued in an academic environment. I really think that the efforts that have been put into platforms such as NetLogo and Repast are laudable. They have been made possible by some generous grants and institutional support. We argue that this continuous support is also needed in order to be well equipped for the next crisis. But we do not argue that an institution would by definition have the last word on which is the best ABM. In an ideal case it would accumulate all academic efforts, as is done with climate models, but even more restricted models would still be better than a thousand individuals all claiming to have a usable ABM while governments have to react quickly to a crisis.

Edmund’s final concern is about the empirical scale of our simulations. This is completely true! Given the scale and the details of what we can incorporate, we can only simulate some phenomena and certainly not everything around the COVID-19 crisis. We tried to be clear about this limitation. We had discussions about the Unity interface concerning this as well. It is in principle not very difficult to show people walking in the street, taking a car or a bus, etc. However, we decided to show a more abstract representation just to make clear that our model is not a complete model of a small town functioning in all its aspects. We chose very carefully which scenarios we could realistically simulate and from which we could give some insight into reality. Maybe we should also have discussed more explicitly all the scenarios that we did not run, along with the reasons why they would be difficult or unrealistic in our ABM. One never likes to discuss all the limitations of one’s labor, but it definitely can be very insightful. I have made up for this a little by submitting a paper to a special issue on predictions with ABM, in which I explain in more detail the considerations for using a particular ABM to try to predict some state of affairs. Anyone interested in learning more about this can contact me.

To conclude this response to the review, I again express my gratitude for the good and thorough work done. The concerns that were raised are all very valuable to consider. What I have tried to do in this response is to highlight that these concerns should be taken as a call to arms to put effort into social simulation platforms that give better support for creating simulations for a crisis.

References

Dignum, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Springer. DOI:10.1007/978-3-030-76397-8

Chattoe-Brown, E. (2021) A review of “Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis”. Journal of Artificial Societies and Social Simulation, 24(4). https://www.jasss.org/24/4/reviews/1.html


Dignum, F. (2021) Response to the review of Edmund Chattoe-Brown of the book “Social Simulations for a Crisis”. Review of Artificial Societies and Social Simulation, 4th Nov 2021. https://rofasss.org/2021/11/04/dignum-review-response/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974)

By Edmund Chattoe-Brown

Since this is a new venture, we need to establish conventions. JASSS has been running since 1998 (twenty years!), so it is reasonable to argue that something un-cited in JASSS throughout that period has effectively been forgotten by the ABM community. This contribution by Grémy is actually a single chapter in a book otherwise by Boudon (a bibliographical oddity that may have contributed to its neglect; Grémy also appears to have published mostly in French, which may also have had an effect; an English summary of his contribution to simulation might be another useful item for RofASSS). Boudon gets 6 hits on the JASSS search engine (as of 31.05.18), none of which mention simulation, and Gremy gets no hits (as does Grémy: unfortunately it is hard to tell how online search engines “cope with” accents and thus whether this is a “real” result).

Since this book is still readily available as a mass-market paperback, I will not reprise the argument of the simulation here (and its limitations relative to existing ABM methodology could be a future RofASSS contribution). Nonetheless, even approximately empirical modelling in the mid-seventies is worthy of note, and the chapter is early in saying other important things (for example about simulation being able to avoid “technical assumptions” – those made for solubility rather than realism).

The point of this contribution is to draw attention to an argument that I have only heard twice (and only found once in print), namely that we should look at the form of real data as an initial justification for using ABM at all (please correct me if there are earlier or better examples). Grémy (1974, p. 210) makes the point that initial incongruities between the attitudes that people hold (altruistic versus selfish) and their career choices (counsellor versus corporate raider) can be resolved in either direction as time passes (he knows this because Boudon analysed data collected by Rosenberg from US university students at two points in time), as well as remaining unresolved, and, as such, cannot readily be explained by some sort of “statistical trend” (that people become more selfish as they get older or more altruistic as they become more educated). He thus hypothesises (reasonably, it seems to me) that the data require a model of some sort of dynamic interaction process, which Grémy then simulates, paying some attention to the survey results both in constraining the model and in analysing its behaviour.

This seems to me an important methodological practice to rescue from neglect. (It is widely recognised anecdotally that people tend to use the research methods they know and like rather than the ones that are suitable.) Elsewhere (Chattoe-Brown 2014), inspired by this argument, I have shown how even casually accessed attitude change data looks nothing like the output of the (very popular) Zaller-Deffuant model of opinion change (very roughly, 228 hits in JASSS for Deffuant, 8 for Zaller and 9 for Zaller-Deffuant, though hyphens sometimes produce unreliable results for online search engines too). The attitude of the ABM community to data seems to be rather uncomfortable. Perhaps support in theory and neglect in practice would sum it up (Angus and Hassani-Mahmooei 2015, Table 5 in section 4.5). But if our models can’t even “pass first base” with existing real data (let alone be calibrated and validated), should we be too surprised if what seems plausible to us does not seem plausible to social scientists in substantive domains (and thus diminishes their interest in ABM as a “real method”)? Even if others in the ABM community disagree with my emphasis on data (and I know that they do), I think this is a matter that should be properly debated rather than just left floating about in coffee rooms (and this is exactly what we intend RofASSS to facilitate). As W. C. Fields is reputed to have said (though actually the phrase appears to have been common currency), we would wish to avoid ABM being just “Another good story ruined by an eyewitness”.
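For readers who have not met it, the following is a minimal sketch of the bounded-confidence dynamic usually associated with the “Deffuant” part of that model name (the generic textbook version of Deffuant et al. 2000), not the specific implementation compared with data in Chattoe-Brown (2014); all parameter values are illustrative.

```python
# Minimal sketch of the classic Deffuant bounded-confidence opinion model:
# a generic textbook version, offered only to show the kind of output being
# compared with empirical attitude change data in the paragraph above.
import random

def deffuant(n_agents=100, epsilon=0.2, mu=0.5, steps=50_000, seed=1):
    """Continuous opinions in [0, 1]; random pairs interact and move
    towards each other only if their opinions differ by less than epsilon."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        if abs(opinions[i] - opinions[j]) < epsilon:
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift   # i moves towards j
            opinions[j] -= shift   # j moves towards i by the same amount
    return opinions

if __name__ == "__main__":
    final = deffuant()
    # With a narrow confidence bound the population typically collapses into
    # a few opinion clusters, a very particular pattern to test against data.
    print(sorted(round(o, 2) for o in final))
```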

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4):16.

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1):16.

Gremy, Jean-Paul (1974) ‘Simulation Techniques’, in Boudon, Raymond, The Logic of Sociological Explanation (Harmondsworth: Penguin), chapter 11:209-227.


Chattoe-Brown, E. (2018) A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974). Review of Artificial Societies and Social Simulation, 1st June 2018. https://rofasss.org/2018/06/01/ecb/