By Mike Bithell
Consider the following scenario:
A policy maker has a new idea and wishes to know what the effect of implementing the associated policy or regulation might be. They ask an agent-based modeller for help. The modeller replies that the situation looks interesting: they will start a project to develop a new model from scratch, and it will take three years. The policy maker replies that they want the results tomorrow afternoon. On being informed that this is not possible (or that the model will of necessity be bad), the policy maker looks elsewhere.
Clearly this will not do. Yet it seems, at present, that every new problem leads to the development of a new “hero” ABM built from the ground up. I would like to argue that for practical policy problems we need a different approach, one in which persistent models are developed that outlast the life of individual research projects, and are continuously developed, updated and challenged against the kinds of multiple data streams that are now becoming available in the social realm.
By way of comparison, consider the case of global weather and climate models. These are large models developed over many years. They typically run to hundreds of thousands of lines of code, and are difficult for any single individual to comprehend fully. Their history goes back to the early 20th century, when Richardson made the first numerical weather forecast for Europe, doing all the calculations by hand. Despite the forecast being incorrect (a better understanding of how to set up initial conditions was needed), he was not deterred: his vision of future forecasts involved a large room full of “computers” (i.e. people), each calculating the numerics for their part of the globe and pooling the results to enable forecasting in real time (Richardson 1922). With the advent of digital computing in the 1950s these models began to be developed systematically, and their skill at representing the weather and climate has undergone continuous improvement (see e.g. Lynch 2006). At present there are perhaps a few tens of such models that operate globally, with various strengths and weaknesses. Their development is very far from complete: the systems they represent are complex, and the models very complicated, but they gain their effectiveness through being run continually, tested and re-tested against data, with new components repeatedly improved and developed by multiple teams over the last 50 years. They are not simple to set up and run, but they persist over time and remain close to the state of the art and to the research community.
I suggest that we need something like this in agent-based modelling: a suite of communally developed models that are not abstract, but represent substantial real systems, such as large cities, countries or regions; that are persistent and continually developed, on a code base that is largely stable; and that, more importantly, undergo continual testing and validation. At the moment this last part of the loop is not typically closed: models are developed and scenarios proposed, but the model is not then updated in the light of new evidence, re-used and extended; the PhD has finished, or the project has ended, and the next new problem leads to a new model. Persistent models, being repeatedly run by many, would gradually have bugs and inconsistencies discovered and corrected (although new ones would also inevitably be introduced), could afford to be very complicated because continually tested, would be continually available for interpretation and the development of understanding, and would become steadily better documented. Accumulated sets of results would show their strengths and weaknesses for particular kinds of issues, and where more work was most urgently needed.
In this way, when, say, the mayor of London wanted to know the effect of a given policy, a set of state-of-the-art models of London would already exist that could be used to test out the policy given the best available current knowledge. The city model would be embedded in a larger model or models of the UK, or even the EU, so as to be sure that boundary conditions would not be a problem, and to see what the wider unanticipated consequences might be. The output from such models might be very uncertain: “forecasts” (saying what will happen, as opposed to what kinds of thing might happen) would not be the goal, but the history of repeated testing and output would demonstrate what level of confidence was warranted in the types of behaviour displayed by the results: preferably this would at least be better than random guesswork. Nor would such a set of models rule out or substitute for other kinds of model: idealised, theoretical, abstract and applied case studies would still be needed to develop understanding and new ideas.
This kind of model development for policy is already taking place to an extent (see e.g. Waldrop 2018), but it is currently very limited. However, in the face of current urgent and pressing problems, such as climate change, ecosystem destruction, global financial insecurity, continuing widespread poverty and the failure to approach sustainable development goals in any meaningful way, the ad hoc make-a-new-model-every-time approach is inadequate. To build confidence in ABM as a tool that can be relied on for real-world policy, we need persistent virtual worlds.
References
Lynch, P. (2006). The Emergence of Numerical Weather Prediction: Richardson’s Dream. Cambridge: Cambridge University Press.
Richardson, L. F. (1922). Weather Prediction by Numerical Process (reprinted 2007). Cambridge: Cambridge University Press.
Waldrop, M. (2018). Free Agents. Science, 360(6385), 144-147. DOI: 10.1126/science.360.6385.144
Bithell, M. (2018). Continuous model development: a plea for persistent virtual worlds. Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb
© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)
This is a really useful idea, but I think it may highlight another challenge in ABM and, behind that, in social science itself. The idea of the “academic division of labour” seems to have become unfashionable (or simply to have vanished), but hopefully we still aspire to it. One way to build such a “quick response” model would be to pull a steadily improving “decision module” off the shelf, put in some sensible parameter values, and fire it up. Why can’t we do this? Because social science does not agree about what contribution each discipline can make (or has made) to empirical research on decision-making. Has economics discovered anything about decision-making that is so well grounded in evidence that a non-economist could not credibly reject it? So when we build an ABM, we tend to “cherry-pick” things from our own background: things we think are “neat”, or “architectures” (like BDI) that have no obvious connection to any social science evidence base. Add to that non-empirical models (so that no combination of guesses is ever really falsified) and you have the perfect recipe for a lack of progress and of division of labour. To my knowledge only a very few people (for example, the originators of the consumat approach) have ever really thought about this, and even then in quite a stylised way. So my question is: how do we arrange things institutionally and intellectually so that ABM can develop a provably “effective” decision module that people would be disposed to adopt for subsequent modelling efforts?
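To make the “off the shelf decision module” idea concrete, here is a minimal sketch in Python of what a swappable interface might look like. All names here (DecisionModule, RandomChoice, Agent) are hypothetical illustrations, not an existing library or any published architecture; the point is only that agents are written against an interface rather than against one theory, so that competing decision theories can be benchmarked against the same data.

```python
# Hypothetical sketch of a pluggable "decision module" interface.
# None of these names refer to an existing library.
import random
from abc import ABC, abstractmethod
from typing import Any, Sequence


class DecisionModule(ABC):
    """Interface any decision theory must implement to be swapped into a model."""

    @abstractmethod
    def decide(self, state: dict, options: Sequence[Any]) -> Any:
        """Return one of `options`, given the agent's current `state`."""


class RandomChoice(DecisionModule):
    """Null-model baseline: a provably 'effective' module should beat this on data."""

    def decide(self, state: dict, options: Sequence[Any]) -> Any:
        return random.choice(list(options))


class Agent:
    """The agent depends only on the interface, not on any one theory."""

    def __init__(self, state: dict, decision_module: DecisionModule):
        self.state = state
        self.decision_module = decision_module

    def act(self, options: Sequence[Any]) -> Any:
        return self.decision_module.decide(self.state, options)


# Usage: swapping theories is a one-line change, so a BDI-style or
# consumat-style module could be slotted in and tested against the same runs.
agent = Agent(state={"budget": 10.0}, decision_module=RandomChoice())
print(agent.act(["walk", "bus", "cycle"]))
```

Under this kind of arrangement, the institutional question above becomes partly a software question: modules that survive repeated empirical testing in persistent models would be the ones later modellers are disposed to adopt.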