
A bad assumption: a simpler model is more general

By Bruce Edmonds

If one adds extra detail to a general model it becomes more specific: it then only applies to those cases where that particular detail holds. However, the reverse is not true: simplifying a model will not make it more general; it just makes it easier to imagine that it is more general.

To see why this is, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it will be exactly true at only one point (and approximately right only in a small region around that point). It is much less general than the original, because it is true for far fewer cases.
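To make this concrete, here is a minimal worked example (the particular line is made up for illustration; it is not taken from any model in the literature):

```latex
% An accurate model and its "simplified" version:
%   y = 2x + 3   holds for every value of x;
%   y = 3        agrees with it only where 2x + 3 = 3, i.e. at x = 0.
% The simplified model's error grows with distance from that point, so it
% is approximately right (within a tolerance \varepsilon) only on a small
% interval around x = 0:
\[
  \lvert (2x + 3) - 3 \rvert \;=\; 2\lvert x \rvert \;\le\; \varepsilon
  \quad\Longleftrightarrow\quad
  \lvert x \rvert \le \varepsilon / 2 .
\]
```

The simplified model is thus "true" for far fewer cases, exactly as claimed above.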

This is not very surprising: a claim that a model has general validity is a very strong claim, and it is unlikely to be justified by armchair reflection or by merely leaving out most of the observed processes.

Only under some special conditions does simplification result in greater generality:

  1. When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when there is some averaging process over a lot of random deviations; see the sketch below)
  2. When what is simplified away happens to be constant for all the situations considered (e.g. gravitational acceleration at the Earth's surface is always 9.8 m/s² downwards)
  3. When you hugely loosen your criteria for being approximately right as you simplify (e.g. moving from a requirement that results match some concrete data to using the model as a vague analogy for what is happening)

In other cases, where you compare like with like (i.e. you do not move the goalposts, as in condition 3 above), simplification only works if you happen to know what can safely be simplified away.
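As a rough illustration of condition 1, consider the following minimal sketch (the linear process and its parameters are invented for the example; nothing here comes from a published model). Simplifying away a zero-mean random deviation barely hurts predictive accuracy, whereas simplifying away the systematic term as well does:

```python
import random

random.seed(1)

def full_process(x):
    """Toy 'target' process: a systematic trend plus a zero-mean deviation."""
    return 2 * x + 3 + random.gauss(0, 5)

xs = [x / 10 for x in range(-100, 101)]  # sample points in [-10, 10]

def mean_abs_error(model):
    """Average prediction error of a candidate model over the sampled range."""
    return sum(abs(full_process(x) - model(x)) for x in xs) / len(xs)

# Simplify away only the irrelevant random deviations (condition 1).
print(mean_abs_error(lambda x: 2 * x + 3))  # ~4: just the average noise size

# Simplify away the systematic term too, leaving a constant.
print(mean_abs_error(lambda x: 3))          # ~11: dominated by the lost trend
```

Both candidate models are simpler than the full process, but only the first simplification is harmless, because what it throws away averages out over the cases considered.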

Why people think that simplification might lead to generality is somewhat of a mystery. Maybe they assume that the universe must ultimately obey simple laws, so that simplification is the right direction of travel (but of course, even if this were true, we would not know which way to safely simplify). Maybe they are really thinking about the other direction: slowly becoming more accurate by making the model mirror its target more closely. Maybe this is just a justification for laziness, an excuse for avoiding messy, complicated models. Maybe they just associate simple models with physics. Maybe they just hope their simple model is more general.

References

Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822.

Edmonds, B. (2007) Simplicity is Not Truth-Indicative. In Gershenson, C. et al. (eds.) Philosophy and Complexity. World Scientific, 65-80.

Edmonds, B. (2017) Different Modelling Purposes. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 39-58.

Edmonds, B. and Moss, S. (2005) From KISS to KIDS – an ‘anti-simplistic’ modelling approach. In P. Davidsson et al. (Eds.): Multi Agent Based Simulation 2004. Springer, Lecture Notes in Artificial Intelligence, 3415:130–144.


Edmonds, B. (2018) A bad assumption: a simpler model is more general. Review of Artificial Societies and Social Simulation, 28th August 2018. https://rofasss.org/2018/08/28/be-2/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Continuous model development: a plea for persistent virtual worlds

By Mike Bithell

Consider the following scenario:

A policy maker has a new idea and wishes to know what might be the effect of implementing the associated policy or regulation. They ask an agent-based modeller for help. The modeller replies that the situation looks interesting: they will start a project to develop a new model from scratch, and it will take three years. The policy maker replies that they want the results tomorrow afternoon. On being informed that this is not possible (or that the model will of necessity be bad), the policy maker looks elsewhere.

Clearly this will not do. Yet it seems, at present, that every new problem leads to the development of a new “hero” ABM built from the ground up. I would like to argue that for practical policy problems we need a different approach, one in which persistent models are developed that outlast the life of individual research projects, and are continuously developed, updated and challenged against the kinds of multiple data streams that are now becoming available in the social realm.

By way of comparison, consider the case of global weather and climate models. These are large models developed over many years. They typically run to hundreds of thousands of lines of code and are difficult for any single individual to comprehend fully. Their history goes back to the early 20th century, when Richardson made the first numerical weather forecast for Europe, doing all the calculations by hand. Despite the forecast being incorrect (a better understanding of how to set up initial conditions was needed) he was not deterred: his vision of future forecasts involved a large room full of “computers” (i.e. people), each calculating the numerics for their part of the globe and pooling the results to enable forecasting in real time (Richardson 1922). With the advent of digital computing in the 1950s these models began to be developed systematically, and their skill at representing the weather and climate has undergone continuous improvement (see e.g. Lynch 2006). At the present time there are perhaps a few tens of such models that operate globally, with various strengths and weaknesses. Their development is very far from complete: the systems they represent are complex, and the models very complicated, but they gain their effectiveness through being run continually, tested and re-tested against data, with new components repeatedly improved and developed by multiple teams over the last 50 years. They are not simple to set up and run, but they persist over time and remain close to the state of the art and to the research community.

I suggest that we need something like this in agent-based modelling: a suite of communally developed models that are not abstract, but that represent substantial real systems, such as large cities, countries or regions; that are persistent and continually developed, on a code base that is largely stable; and, more importantly, that undergo continual testing and validation. At the moment this last part of the loop is not typically closed: models are developed and scenarios proposed, but the model is not then updated in the light of new evidence, and then re-used and extended; the PhD has finished, or the project has ended, and the next new problem leads to a new model. Persistent models, being repeatedly run by many, would gradually have bugs and inconsistencies discovered and corrected (although new ones would also inevitably be introduced), could be very complicated because continually tested, would be continually available for interpretation and the development of understanding, and would become steadily better documented. Accumulated sets of results would show their strengths and weaknesses for particular kinds of issues, and where more work was most urgently needed.

In this way when, say, the mayor of London wanted to know the effect of a given policy, a set of state-of-the-art models of London would already exist which could be used to test out the policy given the best available current knowledge. The city model would be embedded in a larger model or models of the UK, or even the EU, so as to be sure that boundary conditions would not be a problem, and to see what the wider unanticipated consequences might be. The output from such models might be very uncertain: “forecasts” (saying what will happen, as opposed to what kind of things might happen) would not be the goal, but the history of repeated testing and output would demonstrate what level of confidence was warranted in the types of behaviour displayed by the results; preferably this would at least be better than random guesswork. Nor would such a set of models rule out or substitute for other kinds of model: idealised, theoretical, abstract and applied case studies would still be needed to develop understanding and new ideas.

This kind of model development for policy is already taking place to an extent (see e.g. Waldrop 2018), but it is currently very limited. However, in the face of current urgent and pressing problems, such as climate change, ecosystem destruction, global financial insecurity, continuing widespread poverty and the failure to approach sustainable development goals in any meaningful way, the ad hoc make-a-new-model-every-time approach is inadequate. To build confidence in ABM as a tool that can be relied on for real-world policy we need persistent virtual worlds.

References

Lynch, P. (2006). The Emergence of Numerical Weather Prediction: Richardson’s Dream. Cambridge: Cambridge University Press.

Richardson, L. F. (1922). Weather Prediction by Numerical Process (reprinted 2007). Cambridge: Cambridge University Press.

Waldrop, M. (2018). Free Agents. Science, 360(6385), 144-147. DOI: 10.1126/science.360.6385.144


Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Why the social simulation community should tackle prediction

By Gary Polhill

(Part of the Prediction-Thread)

On 4 May 2002, Scott Moss (2002) reported in the Proceedings of the National Academy of Sciences of the United States of America that he had recently approached the e-mail discussion list of the International Institute of Forecasters to ask whether anyone had an example of a correct econometric forecast of an extreme event. None of the respondents were able to provide a satisfactory answer.

As reported by Hassan et al. (2013), on 28 April 2009, Scott Moss asked a similar question of the members of the SIMSOC mailing list: “Does anyone know of a correct, real-time, model-based policy-impact forecast?” [1] No-one responded with such an example, and Hassan et al. note that the ensuing discussion questioned why we are bothering with agent-based models (ABMs). Papers such as Epstein’s (2008) suggest this is not an uncommon conversation.

On 23 March 2018, I wrote an email [2] to the SIMSOC mailing list asking for expressions of interest in a prediction competition to be held at the Social Simulation Conference in Stockholm in 2018. I received two such expressions, and consequently announced on 10 May 2018 that the competition would go ahead. [3] By 22 May 2018, however, one of the two had pulled out because of lack of data, and I contacted the list to say the competition would be replaced with a workshop. [4]

Why the problem with prediction? As Edmonds (2017), discussing different modelling purposes, says, prediction is extremely challenging in the type of complex social system in which an agent-based model would justifiably be applied. He doesn’t go as far as stating that prediction is impossible; but with Aodha (Aodha and Edmonds 2017, p. 819) he says, in the final chapter of the same book, that modellers should “stop using the word predict” and policymakers should “stop expecting the word predict”. At a minimum, this suggests a strong aversion to prediction within the social simulation community.

Nagel (1979) gives attention to why prediction is hard in the social sciences. Not least amongst the reasons offered is the fact that social systems may adapt according to predictions made – whether those predictions are right or wrong. Nagel gives two examples of this: suicidal predictions are those in which a predicted event does not happen because steps are taken to avert the predicted event; self-fulfilling prophecies are events that occur largely because they have been predicted, but arguably would not have occurred otherwise.

The advent of empirical ABM, as hailed by Janssen and Ostrom’s (2006) editorial introduction to a special issue of Ecology and Society on the subject, naturally raises the question of using ABMs to make predictions, at least insofar as “predict” in this context means using an ABM to generate new knowledge about the empirical world that can be tested by observing it. There are various reasons why developing ABMs with the purpose of prediction is a goal worth pursuing. Three of them are:

  • Developing predictions, Edmonds (2017) notes, is an iterative process, requiring testing and adapting a model against various data. Engaging in such a process with ABMs offers vital opportunities to learn and develop methodology, not least on the collection and use of data in ABMs, but also in areas such as model design, calibration, validation and sensitivity analysis (a minimal sketch of this test-against-unseen-data discipline follows this list). We should expect, or at least be prepared for, our predictions to fail often. Then, the value is in what we learn from these failures, both about the systems we are modelling, and about the approach taken.
  • There is undeniably a demand for predictions in complex social systems. That demand will not go away just because a small group of people claim that prediction is impossible. A key question is how we want that demand to be met. Presumably at least some of the people engaged in empirical ABM have chosen an agent-based approach over simpler, more established alternatives because they believe ABMs to be sufficiently better to be worth the extra effort of their development. We don’t know whether ABMs can be better at prediction, but such knowledge would at least be useful.
  • Edmonds (2017) says that predictions should be reliable and useful. Reliability pertains both to having a reasonable comprehension of the conditions of application of the model, and to the predictions being consistently right when the conditions apply. Usefulness means that the knowledge the prediction supplies is of value with respect to its accuracy. For example, a weather forecast stating that tomorrow the mean temperature on the Earth’s surface will be between –100 and +100 Celsius is not especially useful (at least to its inhabitants). However, a more general point is that we are accustomed to predictions being phrased in particular ways because of the methods used to generate them. Attempting prediction using ABM may lead to a situation in which we develop different language around prediction, which in turn could have added benefits: (a) gaining a better understanding of what ABM offers that other approaches do not; (b) managing the expectations of those who demand predictions regarding what predictions should look like.
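By way of a concrete (if deliberately trivial) illustration of the iterative discipline in the first point above, the sketch below calibrates a one-parameter stand-in “model” on past observations only, predicts one step ahead, and scores the prediction against data the model has never seen. Everything in it (the data, the window length, the model itself) is hypothetical:

```python
def calibrate(history):
    """Fit the model to past observations only. Here the 'model' is just
    the mean of the history -- a stand-in for a real calibration step."""
    return sum(history) / len(history)

def predict(parameter):
    """Generate a one-step-ahead prediction from the calibrated parameter."""
    return parameter

def rolling_test(series, window=5):
    """Repeatedly calibrate on a window of past data, predict the next
    observation, and record the out-of-sample error. The accumulated
    errors are the public track record of predictive success or failure."""
    errors = []
    for t in range(window, len(series)):
        parameter = calibrate(series[t - window:t])   # past data only
        errors.append(abs(predict(parameter) - series[t]))
    return errors

observations = [3, 4, 3, 5, 4, 6, 5, 7, 6, 8]          # made-up data
errors = rolling_test(observations)
print(f"mean out-of-sample error: {sum(errors) / len(errors):.2f}")
```

The point is not the arithmetic but the protocol: predictions only count when scored against observations that played no part in calibration, and failures feed back into the next round of model development.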

Prediction is not the only reason to engage in a modelling exercise. However, in future if the social simulation community is asked for an example of a correct prediction of an ABM, it would be desirable to be able to point to a body of research and methodology that has been developed as a result of trying to achieve this aim, and ideally to be able to supply a number of examples of success. This would be better than a fraught conversation about the point of modelling, and consequent attempts to divert attention to all of the other reasons to build an ABM that aren’t to do with prediction. To this end, it would be good if the social simulation community embraced the challenge, and provided a supportive environment to those with the courage to take it on.

Notes

  1. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;fb704db4.0904 (Cited in Hassan et al. (2013))
  2. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;14ecabbf.1803
  3. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SIMSOC;1802c445.1805
  4. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;ffe62b05.1805

References

Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 801-822.

Edmonds, B. (2017) Different modelling purposes. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 39-58.

Epstein, J. (2008) Why model? Journal of Artificial Societies and Social Simulation 11 (4), 12. http://jasss.soc.surrey.ac.uk/11/4/12.html

Hassan, S., Arroyo, J., Galán, J. M., Antunes, L. and Pavón, J. (2013) Asking the oracle: introducing forecasting principles into agent-based modelling. Journal of Artificial Societies and Social Simulation 16 (3), 13. http://jasss.soc.surrey.ac.uk/16/3/13.html

Janssen, M. A. and Ostrom, E. (2006) Empirically based, agent-based models. Ecology and Society 11 (2), 37. http://www.ecologyandsociety.org/vol11/iss2/art37/

Moss, S. (2002) Policy analysis from first principles. Proceedings of the National Academy of Sciences of the United States of America 99 (suppl. 3), 7267-7274. http://doi.org/10.1073/pnas.092080699

Nagel, E. (1979) The Structure of Science: Problems in the Logic of Scientific Explanation. Hackett Publishing Company.


Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0) 

The “Formalist Fallacy”

By Bruce Edmonds

This is the tendency to believe theories more if they are formalised (e.g. as sets of mathematical equations or computer simulations).

This can be simply an effect of Kuhn’s “Theoretical Spectacles” (1962): because the formal model lets us see clearly how a complex mechanism might result in some particular outcomes, we project this onto the world. That is, we fit our perception of some part of the world into the conception illustrated by the model. This is the opposite of how science is supposed to work, where the model should be adjusted (or rejected) in the light of the evidence.

Another reason for more readily accepting theories expressed in terms of mathematics is that maths has status. It used to be the case that mathematical models were the only practical formal technique, which is why science became associated with maths. Thus you are much more likely to be published in many journals if your paper is expressed mathematically, regardless of whether the formalism is used to prove or calculate anything.

If an idea is expressed in informal ways then we are freer to express doubt, as we have an instinctive sense of how slippery natural-language statements can be. We know that humans are lazy and thus have a tendency to believe their own ideas unless pretty well forced to change (e.g. by evidence). It should be the case that making ideas precise makes them easier to disprove (as in Popper 1963), but this is only so if the mapping between the model and what it refers to is also precise. Otherwise one is free to imagine how a model could apply, giving the illusion of generality.

For example, Eckhart Arnold (2008) shows, in detail, how game-theoretical models based around the ‘Prisoner’s Dilemma’ (e.g. Axelrod 1984) fail to have empirical relevance. Other abstract models that have attracted many citations but do not seem to connect well to evidence include Schelling (1971), Hegselmann & Krause (2002) and Deffuant et al. (2002). Each of these is simple and formal but has interesting outcomes (for a flavour of how simple such a model can be, see the sketch below). As a result they have proved apparently irresistible to other researchers, accumulating many citations and much influence, but with no direct modelling relation to the observed world. This contrasts with modelling papers that compare simulated and real-world data (Chattoe-Brown 2018).
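The following is a minimal bounded-confidence sketch in the spirit of Hegselmann & Krause (2002); it is a paraphrase for illustration, not their exact specification. Each agent repeatedly adopts the average opinion of everyone within a confidence bound of its own, and opinions collapse into a few clusters. Note that nothing in it is tied to any observed data, which is precisely the worry raised above:

```python
import random

random.seed(0)

def update(opinions, eps):
    """One synchronous step: each agent adopts the mean opinion of all
    agents (including itself) within its confidence bound eps."""
    new = []
    for mine in opinions:
        near = [o for o in opinions if abs(o - mine) <= eps]
        new.append(sum(near) / len(near))   # len(near) >= 1: self counts
    return new

opinions = [random.random() for _ in range(50)]   # 50 opinions in [0, 1]
for _ in range(30):
    opinions = update(opinions, eps=0.2)

print(sorted({round(o, 2) for o in opinions}))    # a handful of clusters
```

A dozen lines of code generate a striking clustering result, which helps explain the seductiveness of such models quite apart from any empirical credentials.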

Do not mistake me – I think formalising ideas is very useful. It makes sharing the ideas without error or reinterpretation possible, allowing a community of researchers to critique, improve, check, and apply them (Edmonds 2000). It should also be easier to check if they actually work – for example if they do predict some unknown and measurable aspects of an observed system. It is just that formalism, of itself, does not make them more likely to be true (or the resulting models useful for anything that reliably relates to the observed world) but we are more likely to think they are, due to our tendency to project what we clearly understand.

References

Arnold, E. (2008). Explaining altruism: A simulation-based approach and its limits (Vol. 11). PhD Thesis. Walter de Gruyter. http://www.phil-fak.uni-duesseldorf.de/fileadmin/Redaktion/Institute/Philosophie/Theoretische_Philosophie/Allgemein/Hilfskraefte/Explaining_Altruism-colored_figures.pdf

Axelrod, Robert. 1984. The Evolution of Cooperation. Basic Books.

Chattoe-Brown, E. (2018) What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table? Review of Artificial Societies and Social Simulation, 11th June 2018. https://rofasss.org/2018/06/11/ecb/

Deffuant, G., Amblard, F., Weisbuch, G. and Faure, T. (2002) How can extremism prevail? A study based on the relative agreement interaction model. Journal of Artificial Societies and Social Simulation 5(4), 1. http://jasss.soc.surrey.ac.uk/5/4/1.html

Edmonds, B. (2000) The Purpose and Place of Formal Systems in the Development of Science, CPM Report 00-75, MMU, UK. http://cfpm.org/cpmrep75.html

Hegselmann, R. and Krause, U. (2002). Opinion Dynamics and Bounded Confidence Models, Analysis and Simulation. Journal of Artificial Societies and Social Simulation, 5(3), 2. http://jasss.soc.surrey.ac.uk/5/3/2.html

Kuhn, T.S. (1962) The Structure of Scientific Revolutions. University of Chicago Press.

Popper, K. (1963). Conjectures and refutations: the growth of scientific knowledge. London: Routledge.

Schelling, T. C. (1971). Dynamic models of segregation. Journal of mathematical sociology, 1(2), 143-186.


Edmonds, B. (2018) The “Formalist Fallacy”. Review of Artificial Societies and Social Simulation, 20th July 2018. https://rofasss.org/2018/07/20/be/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Query: What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table?

By Edmund Chattoe-Brown

On one level this is a straightforward request. The earliest convincing example I have found is Hägerstrand (1965, p. 381), an article that seems to be undeservedly neglected, given that it is also the earliest example of a simulation I have been able to identify that demonstrates independent calibration and validation (Gilbert and Troitzsch 2005, p. 17). [1]

However, my attempts to find the earliest examples are motivated by two more substantive issues (which may help to focus the search for earlier candidates). Firstly, what is the value of a canon (and of giving due intellectual credit) to the success of ABM? The Schelling model is widely known and taught, but it is not calibrated and validated. If a calibrated and validated model already existed in 1965, should it not be more widely cited? If we mostly cite a non-empirical model, might we give the impression that this is all that ABM can do? Also, failing to cite an article means that it cannot form the basis for debate. Is the Hägerstrand model in some sense “better” or “more important” than the Schelling model? This is a discussion we cannot have without awareness of the Hägerstrand model in the first place.

The second (and related) point regards the progress made by ABM and how those outside the community might judge it. Looking at ABM research now, the great majority of models appear to be non-empirical (Angus and Hassani-Mahmooei 2015, Table 5 in section 4.5). Without citations of articles like Hägerstrand (and even Clarkson and Meltzer), the non-expert reader of ABM might be led to conclude that it is too early (or too difficult) to produce such calibrated and validated models. But if this was done 50 years ago, and is not being much publicised, might we be using up our credibility as a “new” field still finding its feet? If there are reasons for not doing, or not wanting to do, what Hägerstrand managed, let us be obliged to be clear what they are and not simply hide behind widespread neglect of such examples. [2]

Notes

  1. I have excluded an even earlier example of considerable interest (Clarkson and Meltzer 1960, which also includes an attempt at calibration and validation but has never been cited in JASSS) for two reasons. Firstly, it deals with the modelling of a single agent and therefore involves no interaction. Secondly, it appears that the validation may effectively be using the “same” data as the calibration, in that protocols elicited from an investment officer regarding portfolio selection are then tested against choices made by that same investment officer.
  2. And, of course, this is a vicious circle because in our increasingly pressurised academic world, people only tend to read and cite what is already cited.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16.

Clarkson, Geoffrey P. and Meltzer, Allan H. (1960) ‘Portfolio Selection: A Heuristic Approach’, The Journal of Finance, 15(4), December, pp. 465-480.

Gilbert, Nigel and Troitzsch, Klaus G. (2005) Simulation for the Social Scientist, 2nd edition (Buckingham: Open University Press).

Hägerstrand, Torsten (1965) ‘A Monte Carlo Approach to Diffusion’, Archives Européennes de Sociologie, 6(1), May, Special Issue on Simulation in Sociology, pp. 43-67.


Chattoe-Brown, E. (2018) What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table? Review of Artificial Societies and Social Simulation, 11th June 2018. https://rofasss.org/2018/06/11/ecb/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974)

By Edmund Chattoe-Brown

Since this is a new venture, we need to establish conventions. JASSS has been running since 1998 (twenty years!), so it is reasonable to argue that something un-cited in JASSS throughout that period has effectively been forgotten by the ABM community. This contribution by Grémy is actually a single chapter in a book otherwise by Boudon (a bibliographical oddity that may have contributed to its neglect; Grémy also appears to have published mostly in French, which may also have had an effect; an English summary of his contribution to simulation might be another useful item for RofASSS). Boudon gets 6 hits on the JASSS search engine (as of 31.05.18), none of which mention simulation, and Gremy gets no hits (as does Grémy: unfortunately it is hard to tell how online search engines “cope with” accents and thus whether this is a “real” result).

Since this book is still readily available as a mass-market paperback, I will not reprise the argument of the simulation here (its limitations relative to existing ABM methodology could be a future RofASSS contribution). Nonetheless, even approximately empirical modelling in the mid-seventies is worthy of note, and the chapter is early in saying other important things (for example, that simulation can avoid “technical assumptions” made for solubility rather than realism).

The point of this contribution is to draw attention to an argument that I have only heard twice (and only found once in print), namely that we should look at the form of real data as an initial justification for using ABM at all (please correct me if there are earlier or better examples). Grémy (1974, p. 210) makes the point that initial incongruities between the attitudes that people hold (altruistic versus selfish) and their career choices (counsellor versus corporate raider) can be resolved in either direction as time passes, as well as remaining unresolved (he knows this because Boudon analysed some data collected by Rosenberg from US university students at two points in time). As such, the data cannot readily be explained by some sort of “statistical trend” (that people become more selfish as they get older, or more altruistic as they become more educated). He thus hypothesises (reasonably, it seems to me) that the data requires a model of some sort of dynamic interaction process, which Grémy then simulates, paying some attention to the survey results both in constraining the model and in analysing its behaviour. (A toy sketch of what such a process might look like follows below.)
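To make the shape of such an argument visible, here is a toy sketch. It is emphatically not Grémy’s actual model, which readers should consult in the original chapter; every rule and number below is invented. Agents whose attitude clashes with their career choice resolve the tension in a direction that depends on whom they happen to interact with, so congruence emerges in both directions rather than through any uniform drift:

```python
import random

random.seed(2)

N = 200
# Attitude and career are each coded 0 (selfish / corporate raider) or
# 1 (altruistic / counsellor); initial assignment is random, so roughly
# half the agents start out incongruent.
agents = [{"attitude": random.randint(0, 1),
           "career":   random.randint(0, 1)} for _ in range(N)]

def step(agents):
    """Incongruent agents meet a random partner; the clash is resolved
    towards whichever of their two traits the partner reinforces."""
    for a in agents:
        if a["attitude"] != a["career"]:
            peer = random.choice(agents)
            if peer["attitude"] == a["career"]:
                a["attitude"] = a["career"]   # attitude moves to match career
            elif peer["career"] == a["attitude"]:
                a["career"] = a["attitude"]   # career moves to match attitude

for _ in range(20):
    step(agents)

congruent = sum(a["attitude"] == a["career"] for a in agents)
altruists = sum(a["attitude"] for a in agents)
print(f"congruent agents: {congruent}/{N}; altruistic attitudes: {altruists}")
```

The qualitative feature to note is that the interaction rule, not a one-way trend, drives the outcome, which is the kind of behaviour Grémy argues the Rosenberg data requires.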

This seems to me an important methodological practice to rescue from neglect. (It is widely recognised anecdotally that people tend to use the research methods they know and like rather than the ones that are suitable.) Elsewhere (Chattoe-Brown 2014), inspired by this argument, I have shown how even casually accessed attitude change data looks nothing like the output of the (very popular) Zaller-Deffuant model of opinion change (very roughly, 228 hits in JASSS for Deffuant, 8 for Zaller and 9 for Zaller-Deffuant, though hyphens sometimes produce unreliable results for online search engines too). The attitude of the ABM community to data seems to be rather uncomfortable: perhaps support in theory and neglect in practice would sum it up (Angus and Hassani-Mahmooei 2015, Table 5 in section 4.5). But if our models cannot even “pass first base” with existing real data (let alone be calibrated and validated), should we be too surprised if what seems plausible to us does not seem plausible to social scientists in substantive domains (and thus diminishes their interest in ABM as a “real” method)? Even if others in the ABM community disagree with my emphasis on data (and I know that they do), I think this is a matter that should be properly debated rather than just left floating about in coffee rooms (and this is what we intend RofASSS to facilitate). As W. C. Fields is reputed to have said (though actually the phrase appears to have been common currency), we would wish to avoid ABM being just “Another good story ruined by an eyewitness”.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4):16.

Chattoe-Brown, Edmund (2014) ‘Using Agent Based Modelling to Integrate Data on Attitude Change’, Sociological Research Online, 19(1):16.

Gremy, Jean-Paul (1974) ‘Simulation Techniques’, in Boudon, Raymond, The Logic of Sociological Explanation (Harmondsworth: Penguin), chapter 11, pp. 209-227.


Chattoe-Brown, E. (2018) A Forgotten Contribution: Jean-Paul Grémy’s Empirically Informed Simulation of Emerging Attitude/Career Choice Congruence (1974). Review of Artificial Societies and Social Simulation, 1st June 2018. https://rofasss.org/2018/06/01/ecb/