Exascale computing and ‘next generation’ agent-based modelling

By Gary Polhill, Alison Heppenstall, Michael Batty, Doug Salt, Ricardo Colasanti, Richard Milton and Matt Hare

Introduction

In the past decade we have seen considerable gains in the amount of data and computational power available to us as scientific researchers.  Whilst the proliferation of new forms of data can present as many challenges as opportunities (linking data sets, checking veracity etc.), we can now begin to construct models capable of answering ever more complex and interrelated questions.  For example, what happens to individual health and the local economy if we pedestrianize a city centre?  What is the impact of increasing travel costs on the price of housing? How can we divert economic investment from prosperous cities and regions to places in economic decline? These advances are slowly positioning agent-based modelling to support decision-makers in making informed, evidence-based decisions.  However, ABMs are still little used outside academia, and policy makers find it difficult to mobilise and apply such tools to inform real-world problems: here we explore the background in computing that helps address the question of why such models are so underutilised in practice.

Whilst it has reached a level of maturity (defined as being an accepted tool) within the social sciences, agent-based modelling still has several methodological barriers to cross.  These were first highlighted by Crooks et al. (2008) and revisited by Heppenstall et al. (2020), and include robust validation, elicitation of behaviour from data, and scaling up.  Whilst other disciplines, such as meteorology, are able to conduct large numbers of simulations (ensemble modelling) using high-performance computing, this capability is largely absent within agent-based modelling. Moreover, many different kinds of agent-based models are being devised; key issues concern the number and type of agents, and these are reflected in the whole computational context in which such models are developed. Clearly there is potential for agent-based modelling to establish itself as a robust policy tool, but this requires access to large-scale computing.

Exascale high-performance computing is defined with respect to speed of calculation: 10^18 (a billion billion) floating point operations per second (flops). That is fast enough to calculate the ratios of the ages of every possible pair of people in China in roughly a second. By comparison, modern-day personal computers run at around 10^9 flops (gigascale) – a billion times slower. The same rather pointless calculation of age ratios of the Chinese population would take just over thirty years on a standard laptop at the time of writing (2023). Though agent-based modellers are more interested in the number of instructions implementing each agent's rules executed per second than in floating-point operations, the two rates are of roughly the same order.
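The arithmetic behind these figures can be checked in a few lines of Python (a minimal sketch, assuming a population of roughly 1.4 billion and one floating-point operation per age ratio):

    # Back-of-the-envelope check, assuming China's population is ~1.4
    # billion and each age ratio costs one floating-point operation.
    population = 1.4e9
    pairs = population * (population - 1) / 2  # every possible pair: ~1e18

    exascale_flops = 1e18   # exascale machine
    gigascale_flops = 1e9   # typical laptop
    seconds_per_year = 60 * 60 * 24 * 365.25

    print(f"exascale: {pairs / exascale_flops:.2f} s")   # ~0.98 s
    print(f"laptop: {pairs / gigascale_flops / seconds_per_year:.1f} years")  # ~31 years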

Anecdotally, the majority of simulations of agent-based models are run on personal computers on the desktop. However, there are examples of the use of high-performance computing environments such as computing clusters (terascale) and cloud services such as Microsoft’s Azure, Amazon’s AWS or Google Cloud (tera- to petascale). High-performance computing provides the capacity to do more of what we already do (more runs for calibration, validation and sensitivity analysis) and/or to work at a larger scale (regional or sub-national rather than local), with the number of agents scaled accordingly. As a rough guide, however, since terascale computing is a million times slower than exascale computing, an experiment that currently takes a few days or weeks in a high-performance computing environment could be completed in a fraction of a second at exascale.
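To make that ratio concrete, the same kind of back-of-the-envelope sketch applies (assuming a sustained million-fold speed difference between tera- and exascale, and ignoring parallelisation overheads):

    # A week-long terascale experiment rerun at exascale.
    week_seconds = 7 * 24 * 60 * 60           # 604,800 s
    speedup = 1e18 / 1e12                     # exascale vs terascale: a million
    print(f"{week_seconds / speedup:.2f} s")  # ~0.60 s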

We are all familiar with poor user interface design in everyday computing, and in particular the frustration of waiting for hourglasses, spinning wheels and progress bars to finish so that we can get on with our work. In fact, the ‘Doherty Threshold’ (Yablonski 2020) stipulates at most 400 ms between human action and computer response for best productivity. If going from 10^9 to 10^18 flops were simply a case of multiplying the speed of computation by a billion, the Doherty threshold would potentially be within reach of exascale computing for simulation experiments that currently entail very long wait times.
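Inverting the arithmetic shows how long an experiment could take on a gigascale laptop and still complete within the Doherty threshold at exascale (again a sketch, assuming a clean billion-fold speedup and no other overheads):

    # Longest gigascale (laptop) experiment that would finish within the
    # 400 ms Doherty threshold at exascale, assuming a clean billion-fold
    # speedup and no other overheads.
    doherty_s = 0.4
    speedup = 1e18 / 1e9                       # a billion
    laptop_equivalent_s = doherty_s * speedup  # 4e8 s
    print(f"{laptop_equivalent_s / (60 * 60 * 24 * 365.25):.1f} years")  # ~12.7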

The scale of performance of exascale computers means that there is scope to go beyond doing-more-of-what-we-already-do to thinking more deeply about what we could achieve with agent-based modelling. Could we move past some of the methodological barriers that are characteristic of agent-based modelling? What could we achieve if we had appropriate software support, and how would this affect the processes and practices by which agent-based models are built? Could we move agent-based models to the same level of ‘robustness’ as climate models, for example? We can conceive of a productivity loop in which an empirical agent-based model is used for sequential experimentation, with continual adaptation and change, and continued experiment, with perhaps a new model emerging from these workflows to explore tangential issues. But first we need tools that help us build empirical agent-based models much more rapidly and, critically, that help us find, access and preprocess the empirical data the model will use for initialisation, and then find and confirm parameter values.

The ExAMPLER project

The ExAMPLER (Exascale Agent-based Modelling for PoLicy Evaluation in Real-time) project is an eighteen-month project funded by the Engineering and Physical Sciences Research Council to explore the software, data and institutional requirements to support agent-based modelling at exascale.

With high-performance computing use not being commonplace in the agent-based modelling community, we are interested in finding out the state-of-the-art in high-performance computing use by agent-based modellers, undertaking a systematic literature review to assess the community’s ‘exascale-readiness’. This is not just a question of whether the community has the necessary technical skills to use the equipment. It also covers whether the hardware is appropriate to the computational demands agent-based modellers have, whether the software in which agent-based models are built can take advantage of the hardware, and whether the institutional processes by which agent-based modellers access high-performance computing – especially with respect to the information requested of applicants – take account of their needs.

We will then benchmark the state-of-the-art against high-performance computing use in other domains of research: ecology and microsimulation, which are comparable to agent-based social simulation (ABSS); and fields such as transportation, land use and urban econometric modelling that are not directly comparable to ABSS, but have similar computational challenges (e.g. having to simulate many interactions, needing to explore a vast uncharted parameter space, containing multiple qualitatively different outcomes from the same initial conditions, and so on). Ecology might not simulate agents with decision-making algorithms as computationally demanding as some of those used by agent-based modellers of social systems, while a crude characterisation of microsimulation work is that it does not simulate interactions among heterogeneous agents, which affects how simulations can be parallelised. Land use and transport models usually rely on aggregates of agents, but increasingly these are being disaggregated to finer and finer spatial units, with the units themselves being treated more like agents. The ‘discipline-to-be-decided’ – a further benchmark field yet to be chosen – might have a community with generally higher technical computing skills than would be expected among social scientists. Benchmarking will allow us to gain better insights into the specific barriers faced by social scientists in accessing high-performance computing.

Two other strands of work in ExAMPLER feature significant engagement with the agent-based modelling community. The project’s imaginary starting point is a computer powerful enough that experiments with an agent-based model run in fractions of a second. With a pre-existing agent-based model, we could use such a computer in a one-day workshop to enable a creative discussion with decision-makers about how to handle problems and policies associated with an emerging crisis. But what if we had the tools at our disposal to gather and preprocess data and build models such that these activities could also be achieved in the same day, or even the same hour? Some of our land use and transportation models are already moving in this direction (Horni, Nagel and Axhausen, 2016). Agent-based modelling would thus become a social activity that facilitates discussion and decision-making mindful of complexity and cascading consequences. The practices and procedures associated with building an agent-based model would then have evolved significantly from what they are now, as would the institutions built around accessing and using high-performance computing.

The first strand of work co-constructs with the agent-based modelling community various scenarios by which agent-based modelling is transformed by the dramatic improvements in computational power that exascale computing entails. These visions will be co-constructed primarily through workshops, the first of which is being held at the Social Simulation Conference in Glasgow – a conference well attended by the European (and wider international) agent-based social simulation community. However, we will also issue a questionnaire to elicit views from the wider community of those who cannot attend one of our events. These exercises have two purposes: to understand the requirements of the community and their visions for the future, and to advertise the benefits that exascale computing could have.

In a second series of workshops, we will develop a roadmap for exascale agent-based modelling that identifies the institutional, scientific and infrastructure support needed to achieve the envisioned exascale agent-based modelling use-cases. In essence, what do we need to have in place to make exascale a reality for the everyday agent-based modeller? This activity is underpinned by training ExAMPLER’s research team in the hardware, software and algorithms that can be used to achieve exascale computation more widely. That knowledge, together with the review of the state-of-the-art in high-performance computing use with agent-based models, can be used to identify early opportunities for the community to make significant gains (Macal and North, 2008).

Discussion

Exascale agent-based modelling is not simply a case of providing agent-based modellers with usernames and passwords on an exascale computer and letting them run their models on it. There are many institutional, scientific and infrastructural barriers that need to be addressed.

On the scientific side, exascale agent-based modelling could be revolutionary in transforming the practices, methods and audiences for agent-based modelling. Because the community is so diverse, methodological development is challenged both by the lack of opportunity to make it happen, and by the sheer range of agent-based modelling applications. Too much standardization and ritualized behaviour associated with ‘disciplining’ agent-based modelling risks losing some of the creative benefits of the cross-disciplinary discussions that agent-based modelling enables us to have. Nevertheless, it is increasingly clear that off-the-shelf methods for designing, implementing and assessing models are ill-suited to agent-based modelling, or – especially in the case of the last of these – fail to do it justice (Polhill and Salt 2017; Polhill et al. 2019). Scientific advancement in agent-based modelling is predicated on having the tools at our disposal to tell the whole story of its benefits, and on enabling non-agent-based modelling colleagues to understand how to work with the ABM community.

Hence, hardware is only a small part of the story of the infrastructure supporting exascale agent-based modelling. Exascale computers are built using GPUs (Graphics Processing Units) – bluntly speaking, specialized computing engines for performing matrix calculations and ‘drawing millions of triangles as quickly as possible’ – which are, in any case, different from CPU-based computing. In Table 4 of Kravari and Bassiliades’ (2015) survey of agent-based modelling platforms, only two of the 24 platforms reviewed (Cormas – Bommel et al. 2016 – and GAMA – Taillandier et al. 2019) are not listed as involving Java and/or the Java Virtual Machine. (As it turns out, GAMA does use Java.) TornadoVM (Papadimitriou et al. 2019) is one tool allowing Java Virtual Machines to run on GPUs. But even if we can then run NetLogo on a GPU, specialist GPU-based agent-based modelling platforms such as Richmond et al.’s (2010, 2022) FLAME GPU may be preferable in order to make best use of the highly parallelized computing environment GPUs offer.
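To illustrate the style of computation GPUs reward, the sketch below – plain NumPy rather than the API of FLAME GPU or any other real platform – contrasts a per-agent loop, the idiom of most CPU-based platforms, with the same update rule expressed as a single data-parallel array operation of the kind that maps naturally onto thousands of GPU threads:

    import numpy as np

    # Illustrative sketch only: a toy 'drift towards 0.5' rule applied to
    # a million agents, each with one state variable.
    rng = np.random.default_rng(42)
    x = rng.random(1_000_000)

    def step_loop(state):
        # Agent-by-agent update, as most CPU-based ABM platforms do it.
        out = state.copy()
        for i in range(len(out)):
            out[i] += 0.01 * (0.5 - out[i])
        return out

    def step_parallel(state):
        # The same rule as one array operation: this data-parallel form
        # is what GPU-based platforms compile to kernels, with one or
        # more agents per thread.
        return state + 0.01 * (0.5 - state)

    # Both forms compute the same result.
    assert np.allclose(step_loop(x[:1000]), step_parallel(x[:1000]))

Of course, agent rules involving rich interaction or branching decision logic do not always reduce to such uniform array operations, which is one reason specialist GPU platforms, rather than naive ports, may be needed.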

Such software simply gets an agent-based model running on an exascale computer. Realizing some of the visions of future exascale-enabled agent-based modelling means rather more in the way of software support. For example, the one-day workshop in which an agent-based model is co-constructed with stakeholders asks either a great deal of the developers, who must build a bespoke application in tens of minutes, or a great deal of the stakeholders, who must trust pre-constructed modular components that can be brought together rapidly using a specialist software tool.

As has been noted (e.g. Alessa et al. 2006, para. 3.4), agent-based modelling is already challenging for social scientists without programming expertise, and GPU programming is a highly specialized domain in the world of software development. Exascale computing intersects GPU programming with high-performance computing, and issues with the ways in which high-performance computing clusters are typically administered already make access to them a significant obstacle for agent-based modellers (Polhill 2022). There are therefore institutional barriers that need to be broken down if the benefits of exascale agent-based modelling are to be realized in a community primarily interested in the dynamics of social and/or ecological complexity, and rather less in the technology that enables it to pursue that interest. ExAMPLER aims to provide us with a voice that gets our requirements heard, so that we are not excluded from taking best advantage of advances in computing hardware.

Acknowledgements

The ExAMPLER project is funded by the EPSRC under grant number EP/Y008839/1.  Further information is available at: https://exascale.hutton.ac.uk

References

Alessa, L. N., Laituri, M. and Barton, M. (2006) An “all hands” call to the social science community: Establishing a community framework for complexity modeling using cyberinfrastructure. Journal of Artificial Societies and Social Simulation 9 (4), 6. https://www.jasss.org/9/4/6.html

Bommel, P., Becu, N., Le Page, C. and Bousquet, F. (2016) Cormas: An agent-based simulation platform for coupling human decisions with computerized dynamics. In Kaneda, T., Kanegae, H., Toyoda, Y. and Rizzi, P. (eds.) Simulation and Gaming in the Network Society. Translational Systems Sciences 9, pp. 387-410. doi:10.1007/978-981-10-0575-6_27

Crooks, A. T., Castle, C. J. E. and Batty, M. (2008) Key challenges in agent-based modelling for geo-spatial simulation. Computers, Environment and Urban Systems 32 (6), 417-430.

Heppenstall, A., Crooks, A., Malleson, N., Manley, E., Ge, J. and Batty, M. (2020) Future developments in geographical agent-based models: challenges and opportunities. Geographical Analysis 53 (1), 76-91. doi:10.1111/gean.12267

Horni, A., Nagel, K. and Axhausen, K. W. (eds.) (2016) The Multi-Agent Transport Simulation MATSim. Ubiquity Press, London.

Kravari, K. and Bassiliades, N. (2015) A survey of agent platforms. Journal of Artificial Societies and Social Simulation 18 (1), 11. https://www.jasss.org/18/1/11.html

Macal, C. M. and North, M. J. (2008) Agent-based modeling and simulation for exascale computing. http://www.scidac.org

Papadimitriou, M., Fumero, J., Stratikopoulos, A. and Kotselidis, C. (2019) Towards prototyping and acceleration of Java programs onto Intel FPGAs. Proceedings of the 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). doi:10.1109/FCCM.2019.00051

Polhill, G. (2022) Antisocial simulation: using shared high-performance computing clusters to run agent-based models. Review of Artificial Societies and Social Simulation, 14 Dec 2022. https://rofasss.org/2022/12/14/antisoc-sim

Polhill, G. and Salt, D. (2017) The importance of ontological structure: why validation by ‘fit-to-data’ is insufficient. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity (2nd edition), pp. 141-172. doi:10.1007/978-3-319-66948-9_8

Polhill, J. G., Ge, J., Hare, M. P., Matthews, K. B., Gimona, A., Salt, D. and Yeluripati, J. (2019) Crossing the chasm: a ‘tube-map’ for agent-based simulation of policy scenarios in spatially-distributed systems. Geoinformatica 23, 169-199. doi:10.1007/s10707-018-00340-z

Richmond, P., Chisholm, R., Heywood, P., Leach, M. and Kabiri Chimeh, M. (2022) FLAME GPU (2.0.0-rc). Zenodo. doi:10.5281/zenodo.5428984

Richmond, P., Walker, D., Coakley, S. and Romano, D. (2010) High performance cellular level agent-based simulation with FLAME for the GPU. Briefings in Bioinformatics 11 (3), 334-347. doi:10.1093/bib/bbp073

Taillandier, P., Gaudou, B., Grignard, A., Huynh, Q.-N., Marilleau, N., Caillou, P., Philippon, D. and Drogoul, A. (2019) Building, composing and experimenting complex spatial models with the GAMA platform. Geoinformatica 23 (2), 299-322. doi:10.1007/s10707-018-00339-6

Yablonski, J. (2020) Laws of UX. O’Reilly. https://www.oreilly.com/library/view/laws-of-ux/9781492055303/


Polhill, G., Heppenstall, A., Batty, M., Salt, D., Colasanti, R., Milton, R. and Hare, M. (2023) Exascale computing and ‘next generation’ agent-based modelling. Review of Artificial Societies and Social Simulation, 29 Sep 2023. https://rofasss.org/2023/09/29/exascale-computing-and-next-gen-ABM


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Agent-based Modelling as a Method for Prediction for Complex Social Systems – a review of the special issue

International Journal of Social Research Methodology, Volume 26, Issue 2.

By Oswaldo Terán

Escuela de Ciencias Empresariales, Universidad Católica del Norte, Coquimbo, Chile

This special issue appeared following a series of articles in RofASSS regarding the polemic around prediction with Agent-Based Modelling (ABM) (https://rofasss.org/tag/prediction-thread/).  As expected, the articles in the special issue complement and expand upon the initial RofASSS discussion.

The goal of the special issue is to explore a wide range of positions regarding ABM prediction, encompassing methodological, epistemic and pragmatic issues. Contributions range from moderately sceptical and pragmatic positions to strongly sceptical ones. Moderately sceptical views argue that ABM can cautiously be employed for prediction, sometimes as a complement to other approaches, acknowledging its somewhat peripheral role in social research. Conversely, strongly sceptical positions contend that, in general, ABM cannot be utilized for prediction. Several factors are instrumental in distinguishing and understanding these positions with respect to ABM prediction, especially the following:

  • the conception of prediction;
  • the complexity of modelled systems and models: this encompasses factors such as multiple views (or perspectives), uncertainty, self-organization, self-production, emergence, structural change, and data incompleteness. These complexities are associated with the limitations of our language and tools for comprehending and adequately modelling complex systems.

Considering these factors, we will first summarize the diverse positions presented in this special issue. Then we will delve into the notions of prediction and complexity, and briefly situate each position within the framework provided by these definitions.

The editorial by Elsenbroich and Polhill (2023) summarizes the diverse positions in the special issue regarding prediction, categorizing them into three groups: 1) positive, a position assuming that “all we need for prediction is to have the right data, methods and mechanism” (p. 136); 2) pragmatic, a position advocating cautious use of ABM to attempt prediction, often to complement other approaches and avoid exclusive reliance on them; and 3) sceptical, a position arguing that ABM cannot be used for prediction but can serve other purposes.  The authors place this discussion in a broader context, considering other relevant papers on ABM prediction. They acknowledge the challenge of prediction in complex systems, citing factors such as multiple perspectives, asynchronous agent actions, emergence, nonlinearity, non-ergodicity, evolutionary dynamics and heterogeneity. They indicate that some of these factors are well managed in ABM, but not others, notably “multiple perspectives/views”. Uncertainty is another critical element affecting ABM prediction, along with the relationship between prediction and explanation. The authors provide a summary of the debate surrounding the possibilities of prediction and its relation to explanation, incorporating insightful views from external sources (e.g., Thompson & Derr, 2009; Troitzsch, 2009). They also highlight recent developments in this debate, noting that ABM has evolved into a more empirical and data-driven approach, deeply focused on modelling complex social and ecological systems, including Geographical Information Systems data and real-time data integration, leading to a more contentious discussion regarding empirical, data-driven ABM prediction.

Chattoe-Brown (2023) supports the idea that ABM prediction is possible. He argues for the utility of using ABM not only to predict real-world outcomes but also to predict models, and advocates using prediction to study predictive failure and to assess predictions. His notion of prediction is supported by key elements of prediction in social science derived from real research across disciplines: for instance, the need to adopt a conceptual approach to enhance our comprehension of the various facets of prediction, the functioning of diverse prediction approaches, and the need for clear thinking about temporal logic. Chattoe-Brown’s stated aim is to make prediction intelligible rather than to judge whether it is successful. He supports the idea that ABM prediction is useful for coherent social science, and contrasts ABM with other modelling methods that predict from trend data alone, underscoring the advantages of ABM. From his position, ABM prediction can add value to other research, taking a somewhat secondary role.

Dignum (2023) defends the ability of ABM to make predictions while distinguishing the usefulness of a prediction from its truth. He argues in favour of limited prediction in specific cases, especially when human behaviour is involved, and presents predictions alongside explanations of the predicted behaviour, which arise under specific constraints that define particular scenarios. His view is moderately positive, suggesting that prediction is possible under certain specific conditions, including a stable environment and sufficient available data.

Carpentras and Quayle (2023) call for improved agent specification to reduce distortions when using psychometric instruments, particularly in measurements of political opinion within ABM. They contend that the quality of prediction and validation depends on the scale of the system, but acknowledge the challenges posed by the high complexity of the human brain, which is central to their study. Furthermore, they raise concerns about representativeness, especially considering the discrepancy between certain theoretical frameworks (e.g., opinion dynamics) and survey data.

Anzola and García-Díaz (2023) advocate for better criteria by which to judge prediction, and for a more robust framework for the practice of prediction to better coordinate efforts within the research community (helping to contextualize needs and expectations). They hold a somewhat sceptical position, suggesting that prediction typically serves an instrumental role in scientific practices, subservient to other epistemic goals.

Elsenbroich and Badham (2023) adopt a somewhat negative and critical stance toward using ABM for prediction, asserting that ABM can improve forecasting but not provide definite predictions of specific future events. ABM can only generate coherent extrapolations from a certain initialization of the ABM and a set of assumptions. They argue that ABM generates “justified stories” based on internal coherence, mechanisms and consistency with empirical evidence, but that these should not be confused with precise predictions. They call for ABM to be supported by both theoretical developments and external data.

Edmonds (2023) is the most sceptical regarding the use of ABM for prediction, contending that the motivation for prediction in ABM is a desire without evidence of its achievability. He highlights inherent obstacles to prediction in complex social and ecological systems, including incompleteness, chaos, context specificity, and more. In his perspective, it is essential to establish the distinction between prediction and explanation, and he advocates recognizing the various potential applications of ABM beyond prediction, such as description, explanation, analogy, and more. For Edmonds, prediction should entail generating data that is unknown to the modellers. To address the ongoing debate and the weakness of current practices in ABM prediction, Edmonds proposes a process of iterative and independent verification. However, this approach faces limitations due to incomplete understanding of the underlying processes and the requirement for high-quality, relevant data. Despite these challenges, Edmonds suggests that prediction could prove valuable in meta-modelling, particularly for better understanding our own simulation models.

The diverse positions on ABM prediction summarized above can be better understood through the lenses of Troitzsch’s notion of prediction and McNabb’s descriptions of complex and complicated systems. Troitzsch (2009) distinguishes prediction from explanation using three possible conceptions of prediction. The typical understanding of ABM prediction closely aligns with Troitzsch’s third definition, which answers the following question:

Which state will the target system reach in the near future, again given parameters and previous states which may or may not have been precisely measured?

The answer to this question results in a prediction, which can be either stochastic or deterministic. In our view, explanations encompass a broader range of statements than predictions. An explanation entails a wider scope, including justifications, descriptions, and reasons for various real or hypothetical scenarios. Explanation is closely tied to a fundamental aspect of human communication: the act of making something plain, clear or comprehensible by elaborating its meaning. But what precisely does it expand or elaborate? It expands a specific identification, opinion, judgement or belief. In general, a prediction is a much narrower and more precise statement than an explanation, often hinting at possibilities regarding future events.

Several factors influence complex systems, including self-organization, multiple views, and dynamic complexity as defined by McNabb (2023a-c). McNabb contends that in complex systems, the interactions among components, and between the system as a whole and its environment, transcend the insights derived from a mere analysis of components. Two central characteristics of complex systems are self-organization and emergence. It is important to distinguish between complex systems and complicated systems: complex systems are organic systems (comprising biological, psychological and social systems), whereas complicated systems are mechanical systems (e.g., airplanes, computers, and ABMs). The challenge of agency arises primarily in complex systems, marked by highly uncertain behaviour. Relationships within self-organized systems exhibit several noteworthy properties, although, given the need for a concise discussion regarding ABM prediction, we will consider only a few of them (McNabb, 2023a-c):

  1. Multiple views;
  2. Dynamic interactions (connections among components change over time);
  3. Non-linear interactions (small causes can lead to unpredictable effects);
  4. The system lacks static equilibrium (instead, it maintains a dynamic equilibrium and remains unstable);
  5. Understanding the current state necessitates examining its history (a diachronic, not synchronic, study is essential).

Given the possibility of multiple views, complex systems are prone to significant structural change due to dynamic and non-linear interactions, dynamic equilibrium and diachronic evolution. Additionally, the probability of possessing both the right change mechanism (the logical process) and complete data (addressing the challenge of data incompleteness) required to initialize the model and establish the necessary assumptions is excessively low. Consequently, predicting outcomes in complex systems (defined as organic systems), whether using ABM or alternative mechanisms, becomes nearly impossible. If such prediction does occur, it typically happens under highly specific conditions, such as within a brief time frame and controlled settings, often amounting to a form of coincidental success. Only after the expected event or outcome materializes can we definitively claim that it was predicted. Although prediction remains a challenging endeavour in complex systems, it remains viable in complicated systems, where it serves as an answer to Troitzsch’s aforementioned question.

Taking into account Troitzsch’s notion of prediction and McNabb’s ideas on complex systems and complicated systems, let’s briefly revisit the various positions presented in this special issue.

Chattoe-Brown (2023) suggests using models to predict models. Models are complicated rather than complex systems, so in this case we would be predicting a complicated system rather than a complex one. This represents a significant reduction.

Dignum (2023) argues that prediction is possible in cases where there is a stable environment (conditions) and sufficient available data. However, this is generally not the case, making it challenging to meet the requirements for prediction when considering complex (organic) systems.

Carpentras and Quayle (2023) themselves acknowledge the difficulties of prediction in ABM when studying issues related to psychological systems involving psychometric measures, which are a type of organic system, aligning with our argument.

Elsenbroich and Badham (2023), Elsenbroich and Polhill (2023), and Edmonds (2023) maintain a strongly sceptical position regarding ABM prediction. They argue that ABMs yield coherent extrapolations based on a specific initialization of the model and a set of assumptions, but these extrapolations are not necessarily grounded in reality. According to them, complex systems exhibit properties such as information incompleteness, multiple perspectives, emergence, evolutionary dynamics, and context specificity. In this respect, their position aligns with the stance we are presenting here.

Finally, Anzola and García-Díaz (2023) advocate for a more robust framework for prediction and recognize the ongoing debate on prediction, a stance that closely resonates with our own.

In conclusion, Troitzsch’s notion of prediction and McNabb’s descriptions of complex and complicated systems have helped us better understand the diverse positions on ABM prediction in the reviewed issue. This exemplifies how a good conceptual framework, in this case offered by appropriate notions of prediction and complexity, can contribute to reducing the controversy surrounding ABM prediction.

References

Anzola D. and García-Díaz C. (2023). What kind of prediction? Evaluating different facets of prediction in agent-based social simulation. International Journal of Social Research Methodology, 26(2), pp. 171-191. https://doi.org/10.1080/13645579.2022.2137919

Carpentras D. and Quayle M. (2023). The psychometric house-of-mirrors: the effect of measurement distortions on agent-based models’ predictions. International Journal of Social Research Methodology, 26(2), pp. 215-231. https://doi.org/10.1080/13645579.2022.2137938

Chattoe-Brown E. (2023). Is agent-based modelling the future of prediction? International Journal of Social Research Methodology, 26(2), pp. 143-155. https://doi.org/10.1080/13645579.2022.2137923

Dignum F. (2023). Should we make predictions based on social simulations? International Journal of Social Research Methodology, 26(2), pp. 193-206. https://doi.org/10.1080/13645579.2022.2137925

Edmonds B. (2023). The practice and rhetoric of prediction – the case in agent-based modelling. International Journal of Social Research Methodology, 26(2), pp. 157-170. https://doi.org/10.1080/13645579.2022.2137921

Edmonds, B., Polhill, G., & Hales, D. (2019). Predicting Social Systems – A Challenge. https://rofasss.org/2019/11/04/predicting-social-systems-a-challenge/

Elsenbroich C. and Polhill G. (2023). Editorial: Agent-based modelling as a method for prediction in complex social systems. International Journal of Social Research Methodology, 26(2), pp. 133-142. https://doi.org/10.1080/13645579.2023.2152007

Elsenbroich C. and Badham J. (2023). Negotiating a Future that is not like the Past. International Journal of Social Research Methodology, 26(2), pp. 207-213. https://doi.org/10.1080/13645579.2022.2137935

McNabb D. (2023a, September 20). El Paradigma de la complejidad (1/3) [Video]. YouTube. https://www.youtube.com/watch?app=desktop&v=Uly1n6tOOlA&ab_channel=DarinMcNabb

McNabb D. (2023b, September 20). El Paradigma de la complejidad (2/3) [Video]. YouTube. https://www.youtube.com/watch?v=PT2m9lkGhvM&ab_channel=DarinMcNabb

McNabb D. (2023c, September 20). El Paradigma de la complejidad (3/3) [Video]. YouTube. https://www.youtube.com/watch?v=25f7l6jzV5U&ab_channel=DarinMcNabb

Troitzsch, K. G. (2009). Not all explanations predict satisfactorily, and not all good predictions explain. Journal of Artificial Societies and Social Simulation, 12(1), 10. https://www.jasss.org/12/1/10.html


Terán, O. (2023) Agent-based Modelling as a Method for Prediction for Complex Social Systems - a review of the special issue. Review of Artificial Societies and Social Simulation, 28 Sep 2023. https://rofasss.org/2023/09/28/review-ABM-for-prediction


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)