
Antisocial simulation: using shared high-performance computing clusters to run agent-based models

By Gary Polhill

Information and Computational Sciences Department, The James Hutton Institute, Aberdeen AB15 8QH, UK.

High-performance computing (HPC) clusters are increasingly being used for agent-based modelling (ABM) studies. There are good reasons why HPC provides a significant benefit for ABM work, and why we should expect growth in HPC/ABM applications:

  1. ABMs typically feature stochasticity, which requires multiple runs with the same parameter settings and initial conditions to ascertain the scope of the model’s behaviour. The ODD protocol has stipulated the explicit specification of stochasticity since it was first conceived (Grimm et al. 2006). Some regard stochasticity as ‘inelegant’ and to be avoided in models, but asynchrony in agents’ actions can avoid artefacts (results being a ‘special case’ rather than a ‘typical case’) and introduces an extra level of complexity affecting the predictability of the system even when all data are known (Polhill et al. 2021).
  2. ABMs often have high-dimensional parameter spaces, which need to be sampled for sensitivity analyses and, in the case of empirical ABMs, for calibration and validation. The so-called ‘curse of dimensionality’ means that the problem of exploring parameter space grows exponentially with the number of parameters: sampling each of ten parameters at just five levels, for example, already implies 5^10 ≈ 9.8 million runs. While ABMs’ parameters may not all be ‘orthogonal’ (i.e. each point in parameter space does not uniquely specify model behaviour – a situation sometimes referred to as ‘equifinality’), which diminishes the ‘curse’, the exponential growth means parameter search does not need many dimensions before exhaustive exploration becomes intractable.
  3. Both the above points are exacerbated in empirical applications of ABMs, given Sun et al.’s (2016) observations about the ‘Medawar zone’ of model complicatedness in relation to that of theoretical models. In empirical applications, we may also be more interested in knowing that an undesirable outcome cannot occur, or has a very low probability of occurring, requiring more runs with the same conditions. Further, the additional complicatedness of empirical ABMs will entail more parameters, and the empirical application will place greater emphasis on searching parameter space for calibration and validation against data.

HPC clusters are shared computing resources, and it is now commonplace for research organizations and universities to have them. There can be few academic disciplines without some sort of scientific computing requirement – typical applications include particle physics, astronomy, meteorology, materials, chemistry, neuroscience, medicine and genetics. And social science. As a shared resource, an HPC cluster is subject to norms and institutions frequently observed in common-pool resource dilemmas. Users of HPC clusters are asked to request allocations of computing time, memory and long-term storage space to accommodate their needs. The requests are made in advance of the runs being executed; sometimes so far in advance that the calculations form part of the research project proposal. Hence, as a user, if you do not know, or cannot calculate, the resources you will require, you have a dilemma: ask for more than it turns out you really need and risk normative sanctions; or ask for less than it turns out you really need and impair the scientific quality of your research. Normative sanctions are in the job description of the HPC cluster administrator. This can lead to emails such as the one in Figure 1.

Can I once again remind everyone to please be sensible (and considerate) in your allocation of memory for jobs on the cluster. We now have a situation on the cluster where jobs are unable to run because large amounts of memory have been requested yet only a tiny amount is actually active - check the attached image, where light green shows allocated and dark green shows used. Over allocating resources can block the cluster for others, as well as waste a huge amount of energy as additional machines need to power up unnecessarily.

Figure 1: Example email and accompanying visualization from an HPC cluster administrator reminding users that it is antisocial to request more resources than you will use when submitting jobs.

The ‘managerialist’ turn in academia has been lamented in various articles. Kolsaker (2008), while presenting a nuanced view of the relationship between managerialist and academic modes of working, says that “managerialism represents a distinctive discourse based upon a set of values that justify the assumed right of one group to monitor and control the activities of others.” Steinþórsdóttir et al. (2019) note in the abstract to their article that their results from a case study in Iceland support arguments that managerialism discriminates against women and early-career researchers, in part because of a systemic bias towards natural sciences. Both observations are relevant in this context.

Measurement and control, as the tools of managerialist conduct, render Goodhart’s Law (the principle that when a metric becomes a target, it ceases to be a useful metric) relevant. Fire and Guestrin (2019) found that Goodhart’s Law has already rendered bibliometrics useless for comparing researchers’ performance – both within and between departments. We may therefore expect that if an HPC cluster’s administrator has the accurate prediction of computing resources as a target for their own performance assessment, or if they give it as a target for users – e.g. by prioritizing jobs submitted by users on the basis of the accuracy of their predicted resource use, or denying access to those consistently over-estimating requirements – this accuracy will become useless. To give a concrete example, programming languages such as C give the programmer direct control over memory allocation. Hence, were access to an HPC cluster conditional on the accurate prediction of memory requirements, a savvy C programmer could pass the (excessive) memory allowance in the batch job submission as a command-line argument to their program, which on execution would immediately request that allocation from the node’s operating system. The rest of the program would then use bespoke memory allocation functions that serve the memory the program actually needs from the memory initially reserved. Similar principles can be used for CPU cycles – if the program runs too quickly, calculate digits of π until the predicted CPU time has elapsed – and for disk space – if too much disk space has been requested, pad files with random data. These activities waste the programmer’s time, and entail additional use of computing resources with energy cost implications for the cluster administrator.
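
To make these tactics concrete, here is a minimal, illustrative C sketch – not taken from any real job script, with all identifiers invented and the actual model elided. It grabs the declared memory allowance up front, serves the program’s real allocations from that reserved pool, and burns any spare wall-clock time computing π by the Leibniz series:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static char  *pool;      /* memory grabbed up front to match the declared request */
    static size_t pool_size;
    static size_t pool_used;

    /* Immediately claim the full memory allowance declared in the job submission. */
    static void reserve_declared_memory(size_t bytes)
    {
        pool = malloc(bytes);
        pool_size = (pool != NULL) ? bytes : 0;
        pool_used = 0;
    }

    /* Bespoke allocator: hand out pieces of the pre-reserved pool
     * (alignment ignored for brevity). */
    static void *pool_alloc(size_t bytes)
    {
        if (pool_used + bytes > pool_size)
            return NULL;
        void *p = pool + pool_used;
        pool_used += bytes;
        return p;
    }

    /* If the model finishes early, waste cycles on the Leibniz series for pi
     * until the declared wall-clock time has elapsed. */
    static double burn_until(time_t declared_seconds, time_t start)
    {
        double pi4 = 0.0;
        for (long k = 0; time(NULL) - start < declared_seconds; k++)
            pi4 += ((k % 2 == 0) ? 1.0 : -1.0) / (2.0 * k + 1.0);
        return 4.0 * pi4;
    }

    int main(int argc, char *argv[])
    {
        time_t start = time(NULL);
        size_t declared_mem  = (argc > 1) ? (size_t)atoll(argv[1]) : (size_t)1 << 20;
        time_t declared_time = (argc > 2) ? (time_t)atoll(argv[2]) : 10;

        reserve_declared_memory(declared_mem);
        /* ... the actual model would run here, allocating via pool_alloc() ... */
        printf("pi is approximately %f\n", burn_until(declared_time, start));
        free(pool);
        return 0;
    }

A real implementation might also touch each reserved page so that coarse memory accounting counts it as resident; the point is simply that once accuracy-of-request becomes a target, it is trivially gameable.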

With respect to normative statements such as those in Figure 1, Griesemer (2020, p. 77), discussing the use of metrics leading to ‘gaming the system’ in academia generally (the savvy C programmer’s behaviour being an example in the context of HPC usage), claims that “it is … problematic to moralize and shame [such] practices as if it were clear what constitutes ethical … practice in social worlds where Goodhart’s law operates” [emphasis mine]. In computer science, however, there are theoretical (in the mathematical sense of the term) reasons why such norms are problematic over and above the social context of measurement-and-control.

The theory of computer science is founded in mathematics and logic, and in the work of notable thinkers such as Gödel, Turing, Hilbert, Kolmogorov, Chomsky, Shannon, Tarski, Russell and von Neumann. The growth in areas of computer science (e.g. artificial intelligence, the internet-of-things) means that undergraduate degrees have ever less space to devote to teaching this theory. Blumenthal (2021, p. 46), comparing computer science curricula in 2014 and 2021, found that the proportion of courses with required modules on computational theory had dropped from 46% to 40%, though the sample size meant this result was not significant (P = 0.09 under a two-population z-test). Similarly, the number of hours dedicated to algorithmics and complexity fell from 31 in CS2008 to 28 in CS2013, of which 19 are ‘tier-1’ (required of every curriculum) and 9 are ‘tier-2’ (in which 80% topic coverage is the stipulated minimum) (Joint Task Force on Computing Curricula 2013).

One of the most critical theoretical results in computer science is the so-called Halting Problem (Turing 1937), which proves that it is impossible to write a computer program that (in the general case) takes as input another computer program and its input data, and gives as output whether the latter program will halt or run forever. The halting problem is ‘tier-1’ in CS2013, and so should be taught to every computer scientist. Rice (1953) generalized Turing’s finding, proving that no ‘non-trivial’ property of a program’s behaviour can be decided algorithmically. These results mean that the automated job scheduling and resource allocation software used on HPC clusters, such as SLURM (Yoo et al. 2003), cannot take a user’s submitted job as input and calculate the computing resources it will need. Any requirement for such prediction is thus pushed to the user. In the general case, this means users of HPC clusters are being asked to solve formally undecidable problems when submitting jobs. Qualified computer scientists should know this – but possibly not all cluster administrators, and certainly not all cluster users, are qualified computer scientists. The power dynamic implied by Kolsaker’s (2008) characterization of a managerialist working culture puts users at a disadvantage, while Steinþórsdóttir et al.’s (2019) observations suggest this practice may be indirectly discriminatory on the basis of age and gender; the latter particularly when social scientists are seeking access to shared HPC facilities.
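
For readers who have not met the result, the following minimal C sketch conveys the shape of Turing’s diagonal argument. The decider halts() is hypothetical – the theorem is precisely that no correct implementation of it can exist – and its stub body is there only so the sketch compiles:

    #include <stdbool.h>
    #include <stdio.h>

    typedef void (*program)(void);

    /* Hypothetical decider: returns true iff p() eventually halts.
     * Turing (1937) proved no correct, total implementation can exist;
     * the stub body is here only so that the sketch compiles. */
    static bool halts(program p)
    {
        (void)p;
        return true; /* placeholder answer */
    }

    /* Diagonal construction: ask the decider about ourselves, then do
     * the opposite of whatever it predicts. */
    static void diagonal(void)
    {
        if (halts(diagonal)) {
            for (;;) ;   /* decider said we halt, so loop forever */
        }
        /* decider said we loop forever, so halt immediately */
    }

    int main(void)
    {
        /* Either answer halts(diagonal) gives is falsified by diagonal
         * itself, so a correct halts() cannot exist. */
        printf("halts(diagonal) claims: %d\n", halts(diagonal));
        return 0;
    }

Whatever halts(diagonal) answers is falsified by diagonal itself: if the decider says it halts, it loops forever; if the decider says it loops, it halts. The same self-reference prevents a scheduler from computing, in general, the resources a submitted job will consume.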

I emphasized ‘in the general case’ above because in many specific cases, computing resources can be accurately estimated. Sorting a list of strings in alphabetical order, for example, is known to grow in execution time as a function of n log n, where n is the length of the list. Integers can even be sorted in linear time, but with demands on memory that are exponential in the number of bits used to store an integer (Andersson et al. 1998).
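
In such specific cases, a resource request can be grounded in measurement rather than guesswork: time a small pilot run and scale by the known growth rate. A minimal C sketch of the idea follows (the sizes are arbitrary, and qsort on random integers stands in for whatever comparison sort is actually being estimated):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Comparison function for qsort. */
    static int cmp(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void) /* link with -lm for log() */
    {
        size_t n = 1000000;     /* small pilot run on a desktop */
        size_t N = 100000000;   /* full-size run planned for the cluster */

        int *data = malloc(n * sizeof *data);
        if (data == NULL)
            return 1;
        for (size_t i = 0; i < n; i++)
            data[i] = rand();

        clock_t t0 = clock();
        qsort(data, n, sizeof *data, cmp);
        double pilot = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* Scale the pilot timing by the known n log n growth rate. */
        double estimate = pilot * ((double)N * log((double)N))
                                / ((double)n * log((double)n));
        printf("pilot: %.3f s; estimated full-size run: %.3f s\n",
               pilot, estimate);

        free(data);
        return 0;
    }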

However, agent-based modellers should not expect to be so lucky. There are various features that ABMs may implement that make their computing resources difficult (perhaps impossible) to predict:

  • Birth and death of agents can render computing time and memory requirements difficult to predict. Indeed, the size of the population and any fluctuation in it may be the purpose of the simulation study. Since each agent needs memory to store its attributes, and execution time for its behaviour, if the maximum population size of a specific run is not predictable from its initial conditions and parameter settings without first running the model, then computing resources cannot be predicted for HPC job submission. (A toy sketch after this list illustrates the problem.)
    • A more dramatic corollary of birth and death is the question of extinction – i.e. where all agents die before they can reproduce. At this point, a run would typically terminate – far sooner than the computing time budgeted.
  • Interactions among agents, where the set of other agents with which one agent interacts is not predetermined, will also typically result in unpredictable computing times, even if the time needed for any one interaction is known. In some cases, agents’ social networks may be formally represented using data structures (‘links’ in NetLogo), and if these connections can be created or destroyed as a result of the model’s dynamics, then the memory requirements will typically be unpredictable.
  • Memories of agents, where implemented, are most trivially stored in lists that may have arbitrary length. The algorithms implementing the agents’ behaviours that use their memories will have computing times that are a function of the list length at any one time. These lists may not have a predictable length (e.g. if the agent ‘forgets’ some memories) and hence their behavioural algorithms won’t have predictable execution time.
  • Gotts and Polhill (2010) have shown that running a specific model with larger spaces led to qualitatively different results than with smaller spaces. This suggests that smaller (personal) computers (such as desktops and laptops) cannot necessarily be used to accurately estimate execution times and memory requirements prior to submitting larger-scale simulations requiring resources only available on HPC clusters.
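
To illustrate the first point (and, via early extinction, its corollary), here is a toy C sketch – entirely illustrative, with arbitrary birth and death probabilities and initial population – of a birth-death process run under identical parameter settings but different random seeds. The peak population, and hence the memory and time a full ABM would need, differs from run to run and cannot be read off the parameters:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy birth-death process: each step, every agent survives with
     * probability 1 - DEATH and independently spawns one offspring with
     * probability BIRTH. Peak population - and hence the memory a real
     * ABM would need for agent attributes - varies from seed to seed. */
    #define BIRTH 0.10
    #define DEATH 0.10
    #define STEPS 1000

    static long peak_population(unsigned seed)
    {
        srand(seed);
        long pop = 100, peak = pop;
        for (int t = 0; t < STEPS && pop > 0; t++) {
            long next = 0;
            for (long i = 0; i < pop; i++) {
                if ((double)rand() / RAND_MAX > DEATH) next++; /* survives  */
                if ((double)rand() / RAND_MAX < BIRTH) next++; /* offspring */
            }
            pop = next;
            if (pop > peak)
                peak = pop;
        }
        return peak;
    }

    int main(void)
    {
        /* Identical parameters, different seeds: the resource-relevant
         * quantity (peak population) is not predictable in advance. */
        for (unsigned seed = 1; seed <= 5; seed++)
            printf("seed %u: peak population %ld\n",
                   seed, peak_population(seed));
        return 0;
    }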

Worse, a job will typically comprise several runs in a ‘batch’ covering multiple parameter settings and/or initial conditions. Even if the maximum time and memory requirements of any one of the runs in a batch were known, there is no guarantee that all of the other runs will use anything like as much. These matters combine to make agent-based modellers ‘antisocial’ users of HPC clusters wherever the ‘performance’ of the clusters’ users is measured by their ability to accurately predict resource requirements, or there is no ‘accommodating’ relationship between administrator and researcher. Further, the social environment in which researchers access these resources puts early-career and female researchers at a potential systemic disadvantage.

The main purpose of making these points is to lay the foundations for more equitable access to HPC for social scientists, and to provide tentative users of these facilities with the arguments they need to develop constructive working arrangements with cluster administrators, so that they can run their agent-based models on shared HPC equipment.

Acknowledgements

This work was supported by the Scottish Government Rural and Environment Science and Analytical Services Division (project reference JHI-C5-1).

References

Andersson, A., Hagerup, T., Nilsson, S. and Raman, R. (1998) Sorting in linear time? Journal of Computer and System Sciences 57, 74-93. https://doi.org/10.1006/jcss.1998.1580

Blumenthal, R. (2021) Walking the curricular talk: a longitudinal study of computer science departmental course requirements. In Lu, B. and Smallwood, P. (eds.) The Journal of Computing Sciences in Colleges: Papers of the 30th Annual CCSC Rocky Mountain Conference, October 15th-16th, 2021, Utah Valley University (virtual), Orem, UT. Volume 37, Number 2, pp. 40-50.

Fire, M. and Guestrin, C. (2019) Over-optimization of academic publishing metrics: observing Goodhart’s Law in action. GigaScience 8 (6), giz053. https://doi.org/10.1093/gigascience/giz053

Gotts, N. M. and Polhill, J. G. (2010) Size matters: large-scale replications of experiments with FEARLUS. Advances in Complex Systems 13 (4), 453-467. https://doi.org/10.1142/S0219525910002670

Griesemer, J. (2020) Taking Goodhart’s Law meta: gaming, meta-gaming, and hacking academic performance metrics. In Biagioli, M. and Lippman, A. (eds.) Gaming the Metrics: Misconduct and Manipulation in Academic Research. Cambridge, MA, USA: The MIT Press, pp. 77-87.

Grimm, V., Berger, U., Bastiansen, F., Eliassen, S., Ginot, V., Giske, J., Goss-Custard, J., Grand, T., Heinz, S. K., Huse, G., Huth, A., Jepsen, J. U., Jørgensen, C., Mooij, W. M., Müller, B., Pe’er, G., Piou, C., Railsback, S. F., Robbins, A. M., Robbins, M. M., Rossmanith, E., Rüger, N., Strand, E., Souissi, S., Stillman, R. A., Vabø, R., Visser, U. and DeAngelis, D. L. (2006) A standard protocol for describing individual-based and agent-based models. Ecological Modelling 198, 115-126. https://doi.org/10.1016/j.ecolmodel.2006.04.023

(The) Joint Task Force on Computing Curricula, Association for Computing Machinery (ACM) and IEEE Computer Society (2013) Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science. https://doi.org/10.1145/2534860

Kolsaker, A. (2008) Academic professionalism in the managerialist era: a study of English universities. Studies in Higher Education 33 (5), 513-525. https://doi.org/10.1080/03075070802372885

Polhill, J. G., Hare, M., Bauermann, T., Anzola, D., Palmer, E., Salt, D. and Antosz, P. (2021) Using agent-based models for prediction in complex and wicked systems. Journal of Artificial Societies and Social Simulation 24 (3), 2. https://doi.org/10.18564/jasss.4597

Rice, H. G. (1953) Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society 74, 358-366. https://doi.org/10.1090/S0002-9947-1953-0053041-6

Steinþórsdóttir, F. S., Brorsen Smidt, T., Pétursdóttir, G. M., Einarsdóttir, Þ. and Le Feuvre, N. (2019) New managerialism in the academy: gender bias and precarity. Gender, Work & Organization 26 (2), 124-139. https://doi.org/10.1111/gwao.12286

Sun, Z., Lorscheid, I., Millington, J. D., Lauf, S., Magliocca, N. R., Groeneveld, J., Balbi, S., Nolzen, H., Müller, B., Schulze, J. and Buchmann, C. M. (2016) Simple or complicated agent-based models? A complicated issue. Environmental Modelling & Software 86, 56-67. https://doi.org/10.1016/j.envsoft.2016.09.006

Turing, A. M. (1937) On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society s2-42 (1), 230-265. https://doi.org/10.1112/plms/s2-42.1.230

Yoo, A. B., Jette, M. A. and Grondona, M. (2003) SLURM: Simple Linux utility for resource management. In Feitelson, D., Rudolph, L. and Schwiegelshohn, U. (eds.) Job Scheduling Strategies for Parallel Processing. 9th International Workshop, JSSPP 2003, Seattle, WA, USA, June 2003, Revised Papers. Lecture Notes in Computer Science 2862, pp. 44-60. Berlin, Germany: Springer. https://doi.org/10.1007/10968987_3


Polhill, G. (2022) Antisocial simulation: using shared high-performance computing clusters to run agent-based models. Review of Artificial Societies and Social Simulation, 14 Dec 2022. https://rofasss.org/2022/12/14/antisoc-sim


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Predicting Social Systems – a Challenge

By Bruce Edmonds, Gary Polhill and David Hales

(Part of the Prediction-Thread)

There is a lot of pressure on social scientists to predict. Not only is an ability to predict implicit in all requests to assess or optimise policy options before they are tried, but prediction is also the “gold standard” of science. However, there is a debate among modellers of complex social systems about whether this is possible to any meaningful extent. In this context, the aim of this paper is to issue the following challenge:

Are there any documented examples of models that predict useful aspects of complex social systems?

To do this the paper will:

  1. define prediction in a way that corresponds to what a wider audience might expect of it
  2. give some illustrative examples of prediction and non-prediction
  3. request examples where the successful prediction of social systems is claimed
  4. and outline the aspects on which these examples will be analysed

About Prediction

We start by defining prediction, taken from (Edmonds et al. 2019). This is a pragmatic definition designed to encapsulate common sense usage – what a wider public (e.g. policy makers or grant givers) might reasonably expect from “a prediction”.

By ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model.

Let us clarify the language in this.

  • It has to be reliable. That is, one can rely upon the prediction at the time one makes it – a model that predicts erratically, succeeding only occasionally, is no help, since one does not know whether to believe any particular prediction. This usually means that (a) it has made successful predictions for several independent cases and (b) the conditions under which it works are (roughly) known.
  • What is predicted has to be unknown at the time of prediction. That is, the prediction has to be made before it is verified. Predicting known data (as when a model is checked on out-of-sample data) is not sufficient [1]. Nor is the practice of looking for phenomena that are consistent with the results of a model after they have been generated (since this ignores all the phenomena that are not consistent with the model).
  • What is being predicted is well defined. That is, how to use the model to make a prediction about observed data is clear. An abstract model that is very suggestive – one that appears to predict phenomena, but in a vague and undefined manner where one has to invent the mapping between model and data to make this work – may be useful as a way of thinking about phenomena, but this is different from empirical prediction.
  • Which aspects of the data are being predicted is open. As Watts (2014) points out, prediction is not restricted to point numerical predictions of some measurable value but could concern a wider pattern. Examples of this include: a probabilistic prediction, a range of values, a negative prediction (this will not happen), or a second-order characteristic (such as the shape of a distribution or a correlation between variables). What is important is that (a) this is a useful characteristic to predict and (b) it can be checked by an independent actor (one such check is sketched after this list). Thus, for example, when predicting a value, the accuracy demanded of that prediction depends on its use.
  • The prediction has to use the model in an essential manner. Claiming to predict something obviously inevitable which does not use the model is insufficient – the model has to distinguish which of the possible outcomes is being predicted at the time.
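
As one minimal illustration of such an independent check – the forecast and outcome values below are invented – a proper scoring rule such as the Brier score compares probabilistic predictions recorded in advance with outcomes observed later:

    #include <stdio.h>

    /* Brier score: mean squared difference between forecast probabilities,
     * stated before the fact, and the outcomes observed afterwards.
     * 0 is perfect; forecasting 0.5 every time scores 0.25. */
    int main(void)
    {
        double forecast[] = {0.90, 0.70, 0.80, 0.60, 0.95}; /* stated in advance */
        int    outcome[]  = {1, 0, 1, 1, 1};                /* observed later    */
        int n = sizeof outcome / sizeof outcome[0];

        double score = 0.0;
        for (int i = 0; i < n; i++) {
            double d = forecast[i] - outcome[i];
            score += d * d;
        }
        printf("Brier score over %d predictions: %.3f\n", n, score / n);
        return 0;
    }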

Thus, prediction is different from other kinds of scientific/empirical uses, such as description and explanation (Edmonds et al. 2019). Some modellers use “prediction” to mean any output from a model, regardless of its relationship to any observation of what is being modelled [2]. Others use “prediction” for any empirical fitting of data, regardless of whether that data is known beforehand. However, here we wish to be clearer and avoid any “post-truth” softening of the meaning of the word, for two reasons: (a) distinguishing different kinds of model use is crucial in matters of model checking or validation, and (b) these “softer” kinds of empirical purpose will simply confuse the wider public when we talk to them about “prediction”. One suspects that modellers have accepted these other meanings because doing so allows them to claim they can predict (Edmonds 2017).

Some Examples

Nate Silver and his team aim to predict future social phenomena, such as the results of elections and the outcomes of sports competitions. He correctly predicted the winner in all 50 states of the Electoral College in Obama’s 2012 re-election before it happened. This is a data-hungry approach, which involves the long-term development of simulations that carefully see what can be inferred from the available data, with repeated trial and error. The forecasts are probabilistic and repeated many times. As well as making predictions, his unit tries to establish the level of uncertainty in those predictions – being honest about the probability of those predictions coming about given the likely levels of error and bias in the data. These models are not agent-based but mostly statistical in nature, so it is debatable whether the systems are treated as complex systems – the approach certainly does not use any theory from complexity science. His book (Silver 2012) describes the approach. Post hoc analysis of predictions – explaining why a prediction worked or not – is kept distinct from the predictive models themselves: this analysis may inform changes to a predictive model but is not then incorporated into it. The analysis is thus kept independent of the predictive model, so it can serve as an effective check.

Many models in economics and ecology claim to “predict” when, on inspection, this only means there is a fit to some empirical data. For example, Meese and Rogoff (1983) looked at 40 econometric models whose authors claimed they were predicting some time-series. However, 37 of the 40 models failed completely when tested on newly available data from the same time series they claimed to predict. Clearly, although presented as predictive models, they could not predict unknown data. Although we do not know for sure, presumably these models had been (explicitly or implicitly) fitted to the out-of-sample data, because the out-of-sample data was already known to the modeller. That is, if a model failed to fit the out-of-sample data when tested, it was adjusted until it did work; alternatively, only those models that fitted the out-of-sample data were published.

The Challenge

The challenge is envisioned as happening like this.

  1. We publicise this paper, requesting that people send us examples of prediction or near-prediction of complex social systems, with pointers to the appropriate documentation.
  2. We collect these and analyse them according to the characteristics and questions described below.
  3. We will post some interim results in January 2020 [3], in order to prompt more examples and to stimulate discussion. The final deadline for examples is the end of March 2020.
  4. We will publish the list of all the examples sent to us on the web, and present our summary and conclusions at Social Simulation 2020 in Milan and have a discussion there about the nature and prospects for the prediction of complex social systems. Anyone who contributed an example will be invited to be a co-author if they wish to be so-named.

How suggestions will be judged

For each suggestion, a number of answers will be sought – namely to the following questions:

  • What are the papers or documents that describe the model?
  • Is there an explicit claim that the model can predict (as opposed to might in the future)?
  • What kind of characteristics are being predicted (number, probabilistic, range…)?
  • Is there evidence of a prediction being made before the prediction was verified?
  • Is there evidence of the model being used for a series of independent predictions?
  • Were any of the predictions verified by a team that is independent of the one that made the prediction?
  • Is there evidence of the same team or similar models making failed predictions?
  • To what extent did the model need extensive calibration/adjustment before the prediction?
  • What role does theory play (if any) in the model?
  • Are the conditions under which predictive ability is claimed described?

Of course, negative answers to any of the above about a particular model do not mean that the model cannot predict. What we are assessing is the evidence that a model can predict something meaningful about complex social systems. Silver (2012) describes the method by which his team attempts prediction, but this method might be different from that described in most theory-based academic papers.

Possible Outcomes

This exercise might shed some light on some interesting questions, such as:

  • What kind of prediction of complex social systems has been attempted?
  • Are there any examples where the reliable prediction of complex social systems has been achieved?
  • Are there certain kinds of social phenomena which seem to be more amenable to prediction than others?
  • Does aiming to predict with a model entail any difference in method compared with projects with other aims?
  • Are there any commonalities among the projects that achieve reliable prediction?
  • Is there anything we could (collectively) do that would encourage or document good prediction?

It might well be that whether prediction is achievable depends on exactly what is meant by the word.

Acknowledgements

This paper resulted from a “lively discussion” after Gary’s (Polhill et al. 2019) talk about prediction at the Social Simulation conference in Mainz. Many thanks to all those who joined in this. Of course, prior to this we have had many discussions about prediction. These have included Gary’s previous attempt at a prediction competition (Polhill 2018) and Scott Moss’s arguments about prediction in economics (which has many parallels with the debate here).

Notes

[1] This is sufficient for other empirical purposes, such as explanation (Edmonds et al. 2019)

[2] Confusingly, they sometimes use the word “forecasting” for what we mean by predict here.

[3] Assuming we have any submitted examples to talk about

References

Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidsson, P. & Verhagen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1 (see also http://cfpm.org/discussionpapers/236)

Edmonds, B. (2017) The post-truth drift in social simulation. Social Simulation Conference (SSC2017), Dublin, Ireland. (http://cfpm.org/discussionpapers/195)

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. & Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Grimm, V., Revilla, E., Berger, U., Jeltsch, F., Mooij, W. M., Railsback, S. F., Thulke, H.-H., Weiner, J., Wiegand, T. and DeAngelis, D. L. (2005) Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science 310, 987-991.

Meese, R. A. & Rogoff, K. (1983) Empirical exchange rate models of the seventies – do they fit out of sample? Journal of International Economics, 14, 3-24.

Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/

Polhill, G., Hare, H., Anzola, D., Bauermann, T., French, T., Post, H. and Salt, D. (2019) Using ABMs for prediction: Two thought experiments and a workshop. Social Simulation 2019, Mainz.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Thorngate, W. & Edmonds, B. (2013) Measuring simulation-observation fit: An introduction to ordinal pattern analysis. Journal of Artificial Societies and Social Simulation, 16(2):14. http://jasss.soc.surrey.ac.uk/16/2/4.html

Watts, D. J. (2014). Common Sense and Sociological Explanations. American Journal of Sociology, 120(2), 313-351.


Edmonds, B., Polhill, G. and Hales, D. (2019) Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Why the social simulation community should tackle prediction

By Gary Polhill

(Part of the Prediction-Thread)

On 4 May 2002, Scott Moss (2002) reported in the Proceedings of the National Academy of Sciences of the United States of America that he had recently approached the e-mail discussion list of the International Institute of Forecasters to ask whether anyone had an example of a correct econometric forecast of an extreme event. None of the respondents were able to provide a satisfactory answer.

As reported by Hassan et al. (2013), on 28 April 2009, Scott Moss asked a similar question of the members of the SIMSOC mailing list: “Does anyone know of a correct, real-time, model-based policy-impact forecast?” [1] No-one responded with such an example, and Hassan et al. note that the ensuing discussion questioned why we are bothering with agent-based models (ABMs). Papers such as Epstein’s (2008) suggest this is not an uncommon conversation.

On 23 March 2018, I wrote an email [2] to the SIMSOC mailing list asking for expressions of interest in a prediction competition to be held at the Social Simulation Conference in Stockholm in 2018. I received two such expressions, and consequently announced on 10 May 2018 that the competition would go ahead. [3] By 22 May 2018, however, one of the two had pulled out because of lack of data, and I contacted the list to say the competition would be replaced with a workshop. [4]

Why the problem with prediction? As Edmonds (2017), discussing different modelling purposes, says, prediction is extremely challenging in the type of complex social system in which an agent-based model would justifiably be applied. He doesn’t go as far as stating that prediction is impossible; but with Aodha (2017, p. 819) he says, in the final chapter of the same book, that modellers should “stop using the word predict” and policymakers should “stop expecting the word predict”. At a minimum, this suggests a strong aversion to prediction within the social simulation community.

Nagel (1979) gives attention to why prediction is hard in the social sciences. Not least amongst the reasons offered is the fact that social systems may adapt according to predictions made – whether those predictions are right or wrong. Nagel gives two examples of this: suicidal predictions are those in which a predicted event does not happen because steps are taken to avert the predicted event; self-fulfilling prophecies are events that occur largely because they have been predicted, but arguably would not have occurred otherwise.

The advent of empirical ABM, as hailed by Janssen and Ostrom’s (2006) editorial introduction to a special issue of Ecology and Society on the subject, naturally raises the question of using ABMs to make predictions, at least insofar as “predict” in this context means using an ABM to generate new knowledge about the empirical world that can be tested by observing it. There are various reasons why developing ABMs with the purpose of prediction is a goal worth pursuing. Three of them are:

  • Developing predictions, Edmonds (2017) notes, is an iterative process, requiring testing and adapting a model against various data. Engaging with such a process with ABMs offers vital opportunities to learn and develop methodology, not least on the collection and use of data in ABMs, but also in areas such as model design, calibration, validation and sensitivity analysis. We should expect, or at least be prepared for, our predictions to fail often. Then, the value is in what we learn from these failures, both about the systems we are modelling, and about the approach taken.
  • There is undeniably a demand for predictions in complex social systems. That demand will not go away just because a small group of people claim that prediction is impossible. A key question is how we want that demand to be met. Presumably at least some of the people engaged in empirical ABM have chosen an agent-based approach over simpler, more established alternatives because they believe ABMs to be sufficiently better to be worth the extra effort of their development. We don’t know whether ABMs can be better at prediction, but such knowledge would at least be useful.
  • Edmonds (2017) says that predictions should be reliable and useful. Reliability pertains both to having a reasonable comprehension of the conditions of application of the model, and to the predictions being consistently right when the conditions apply. Usefulness means that the knowledge the prediction supplies is of value with respect to its accuracy. For example, a weather forecast stating that tomorrow the mean temperature on the Earth’s surface will be between –100 and +100 Celsius is not especially useful (at least to its inhabitants). However, a more general point is that we are accustomed to predictions being phrased in particular ways because of the methods used to generate them. Attempting prediction using ABM may lead to a situation in which we develop different language around prediction, which in turn could have added benefits: (a) gaining a better understanding of what ABM offers that other approaches do not; (b) managing the expectations of those who demand predictions regarding what predictions should look like.

Prediction is not the only reason to engage in a modelling exercise. However, in future if the social simulation community is asked for an example of a correct prediction of an ABM, it would be desirable to be able to point to a body of research and methodology that has been developed as a result of trying to achieve this aim, and ideally to be able to supply a number of examples of success. This would be better than a fraught conversation about the point of modelling, and consequent attempts to divert attention to all of the other reasons to build an ABM that aren’t to do with prediction. To this end, it would be good if the social simulation community embraced the challenge, and provided a supportive environment to those with the courage to take it on.

Notes

  1. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;fb704db4.0904 (Cited in Hassan et al. (2013))
  2. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;14ecabbf.1803
  3. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SIMSOC;1802c445.1805
  4. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;ffe62b05.1805

References

Aodha, L. n. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 801-822.

Edmonds, B. (2017) Different modelling purposes. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 39-58.

Epstein, J. (2008) Why model? Journal of Artificial Societies and Social Simulation 11 (4), 12. http://jasss.soc.surrey.ac.uk/11/4/12.html

Hassan, S., Arroyo, J., Galán, J. M., Antunes, L. and Pavón, J. (2013) Asking the oracle: introducing forecasting principles into agent-based modelling. Journal of Artificial Societies and Social Simulation 16 (3), 13. http://jasss.soc.surrey.ac.uk/16/3/13.html

Janssen, M. A. and Ostrom, E. (2006) Empirically based, agent-based models. Ecology and Society 11 (2), 37. http://www.ecologyandsociety.org/vol11/iss2/art37/

Moss, S. (2002) Policy analysis from first principles. Proceedings of the National Academy of Sciences of the United States of America 99 (suppl. 3), 7267-7274. http://doi.org/10.1073/pnas.092080699

Nagel, E. (1979) The Structure of Science: Problems in the Logic of Scientific Explanation. Hackett Publishing Company.


Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)