
Outlining some requirements for synthetic populations to initialise agent-based models

By Nick Roxburgh1, Rocco Paolillo2, Tatiana Filatova3, Clémentine Cottineau3, Mario Paolucci2 and Gary Polhill1

1  The James Hutton Institute, Aberdeen AB15 8QH, United Kingdom {nick.roxburgh,gary.polhill}@hutton.ac.uk

2  Institute for Research on Population and Social Policies, Rome, Italy {rocco.paolillo,mario.paolucci}@cnr.it

3 Delft University of Technology, Delft, The Netherlands {c.cottineau,t.filatova}@tudelft.nl

Abstract. We propose a wish list of features that would greatly enhance population synthesis methods from the perspective of agent-based modelling. The challenge of synthesising appropriate populations is heightened in agent-based modelling by the emphasis on complexity, which requires accounting for a wide array of features. These often include, but are not limited to: attributes of agents, their location in space, the ways they make decisions and their behavioural dynamics. In the real world, these aspects of everyday human life can be deeply interconnected, and these associations are highly consequential in shaping outcomes. Initialising synthetic populations in ways that fail to respect these covariances can therefore compromise model efficacy, potentially leading to biased and inaccurate simulation outcomes.

1 Introduction

With agent-based models (ABMs), the rationale for creating ever more empirically informed, attribute-rich synthetic populations is clear: the closer agents and their collectives mimic their real-world counterparts, the more accurate the models can be and the wider the range of questions they can be used to address (Zhou et al., 2022). However, while many ABMs would benefit from synthetic populations that more fully capture the complexity and richness of real-world populations – including their demographic and psychological attributes, social networks, spatial realms, decision making, and behavioural dynamics – most efforts are stymied by methodological and data limitations. One reason for this is that population synthesis methods have predominantly been developed with microsimulation applications in mind (see review by Chapuis et al. (2022)), rather than ABM. We therefore argue that there is a need for improved population synthesis methods, attuned to the specific requirements of the ABM community and to commonly encountered data constraints. We propose a wish list of features for population synthesis methods that could significantly enhance the capability and performance of ABMs across a wide range of application domains, and we highlight several promising approaches that could help realise these ambitions. Particular attention is paid to methods that prioritise accounting for covariance of characteristics and attributes.

2 The interrelationships among aspects of daily life

2.1 Demographic and psychological attributes

To effectively replicate real-world dynamics, ABMs must realistically depict demographic and psychological attributes at both individual and collective levels. A critical aspect of this realism is accounting for the covariance of such attributes. For instance, interactions between race and income levels significantly influence spatial segregation patterns in the USA, as demonstrated in studies like Bruch (2014).

Several approaches to population synthesis have been developed over the years, often with a specific focus on the assignment of demographic attributes. That said, where psychological attributes are collected in surveys alongside demographic data, they can be incorporated into synthetic populations just like other demographic attributes (e.g., Wu et al. (2022)). Among the most established methods is Iterative Proportional Fitting (IPF). While capable of accounting for covariances, it has significant limitations. One of these is that it “matches distributions only at one demographic level (i.e., either household or individual)” (Zhou et al., 2022, p. 2). Other approaches – such as Iterative Proportional Updating, Combinatorial Optimisation, and deep learning methods – have sought to overcome this, but they invariably have their own limitations and downsides, though the extent to which these matter depends on the application. In their overview of the existing population synthesis landscape, Zhou et al. (2022) suggest that deep learning methods appear particularly promising for high-dimensional cases. Such approaches tend to be data-hungry, though – a potentially significant barrier to exploitation given that many studies already face challenges with survey availability and sample size.
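To make the mechanics of IPF concrete, the sketch below fits a joint cross-tabulation to target marginals by alternately rescaling rows and columns. It is a minimal two-dimensional illustration rather than a production synthesiser; the seed table, marginal totals and tolerance are invented for the example.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-8, max_iter=1000):
    """Iterative Proportional Fitting on a 2D contingency table."""
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        # Scale rows to match the row (e.g. age group) marginals.
        table *= (row_targets / table.sum(axis=1))[:, None]
        # Scale columns to match the column (e.g. income band) marginals.
        table *= (col_targets / table.sum(axis=0))[None, :]
        if (np.abs(table.sum(axis=1) - row_targets).max() < tol
                and np.abs(table.sum(axis=0) - col_targets).max() < tol):
            break
    return table

# Survey cross-tabulation (seed) and census-style marginals (illustrative numbers).
seed = np.array([[20., 30., 10.],
                 [35., 25., 15.]])
row_targets = np.array([600., 400.])        # e.g. two age groups
col_targets = np.array([500., 300., 200.])  # e.g. three income bands
print(ipf(seed, row_targets, col_targets).round(1))
```

As the quoted limitation notes, fitting simultaneously at household and individual level is beyond this basic procedure and is what extensions such as Iterative Proportional Updating are designed to address.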

2.2 Social networks

Integrating realistic social networks into ABMs during population synthesis is crucial for effectively mimicking real-world social interactions, such as those underlying epidemic spread, opinion dynamics, and economic transactions (Amblard et al., 2015). In practice, this means generating networks that link agents by edges representing particular associations between them. These networks may need to be weighted, directional, or multiplex, and may need to account for co-dependencies and correlations between layers. Real-world social networks emerge from distinct processes and tendencies. For example, homophily preferences strongly influence the likelihood of friendship formation, with connections more likely to have developed where agents share attributes such as age, gender, socio-economic context, and location (McPherson et al., 2001). Another example is personality, which can strongly influence the size and nature of an individual’s social network (Zell et al., 2014). For models where social interactions play an important role, it is therefore critical to consider the underlying factors and mechanisms likely to have shaped social networks historically if synthetic networks are to have any chance of reasonably depicting real-world network structures.
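As a loose illustration of how such a mechanism can be encoded, the sketch below draws ties with a probability that increases with attribute similarity. The attributes, weights and baseline probability are invented for the example and would need empirical grounding in any real application.

```python
import random

def tie_probability(a, b, base=0.01, w_age=0.5, w_loc=0.3, w_ses=0.2):
    """Probability that two agents form a tie, boosted by homophily."""
    sim = (w_age * (abs(a["age"] - b["age"]) <= 5)
           + w_loc * (a["zone"] == b["zone"])
           + w_ses * (a["ses"] == b["ses"]))
    return min(1.0, base + 0.15 * sim)

# A toy agent population with three attributes relevant to homophily.
agents = [{"age": random.randint(18, 80),
           "zone": random.choice("ABC"),
           "ses": random.choice(["low", "mid", "high"])} for _ in range(200)]

# Sample edges pairwise according to the homophily-weighted probability.
edges = [(i, j) for i in range(len(agents)) for j in range(i + 1, len(agents))
         if random.random() < tie_probability(agents[i], agents[j])]
```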

Generating synthetic social networks is challenging because the relevant data are often limited or unavailable. Consequently, researchers tend to use simple models such as regular lattices, random graphs, small-world networks, scale-free networks, and models based on spatial proximity. These models capture basic elements of real-world social networks but can fall short in complex scenarios. For instance, Jiang et al. (2022) describe a model where agents, already assigned to households and workplaces, form small-world networks based on employment or educational ties. While this approach accounts for spatial and occupational similarities, it overlooks other factors, limiting its applicability for networks such as friendships that depend on personal history and intangible attributes.
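For comparison, generating one of these simple baseline structures takes only a few lines. The sketch below (using networkx, with invented group sizes and rewiring probability) builds a small-world graph within each workplace-like group and combines them; it is in the spirit of the approach Jiang et al. describe, not a reproduction of their implementation.

```python
import networkx as nx

def grouped_small_world(group_sizes, k=4, p=0.1, seed=42):
    """Build a Watts-Strogatz small-world network inside each group
    (e.g. a workplace or school) and combine them into one graph."""
    G = nx.Graph()
    offset = 0
    for n in group_sizes:
        sw = nx.watts_strogatz_graph(n, min(k, n - 1), p, seed=seed + offset)
        # Shift node labels so each group occupies its own index range.
        G.update(nx.relabel_nodes(sw, {i: offset + i for i in range(n)}))
        offset += n
    return G

network = grouped_small_world([30, 12, 55])  # hypothetical group sizes
```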

To address these limitations, more sophisticated methods have been proposed, including Exponential Random Graph Models (ERGM) (Robins et al., 2007) and Yet Another Network Generator (YANG) (Amblard et al., 2015). However, they also come with their own challenges; for example, ERGMs sometimes misrepresent the likelihood of certain network structures, deviating from real-world observations.

2.3 Spatial locations

The places where people live, work, take their leisure and go to school are critically interlinked and interrelated with social networks and demographics. Spatial location also affects the options open to people, including transport, access to services, job opportunities and social encounters. The capability of ABMs to represent space explicitly and naturally is a key attraction for geographers interested in social simulation and population synthesis (Cottineau et al., 2018). Ignoring the spatial concentration of agents with common traits, or failing to account for the effects that space has on other aspects of everyday human existence, risks overlooking a critical factor that influences a wide range of social dynamics and outcomes.

Spatial microsimulation generates synthetic populations tailored to defined geographic zones, such as census tracts (Lovelace and Dumont, 2017). However, many ABM applications require agents to be assigned to specific dwellings and workplaces, not just aggregated zones. While approaches to dealing with this have been proposed, agreement on best practice has yet to emerge. Certain agent-location assignments can be implemented using straightforward heuristic methods without greatly compromising fidelity, provided the heuristics align well with real-world practices. For example, children might be allocated to schools simply on the basis of proximity, as in Jiang et al. (2022). Others use rule-based or stochastic methods to account for observed nuances and random variability, though these often take the form of crude approximations. One of the more well-rounded examples is detailed by Zhou et al. (2022). They start by generating a synthetic population, which they then assign to specific dwellings and jobs using a combination of rule-based matching heuristics and probabilistic models. Dwellings are assigned to households by considering factors like household size, income, and dwelling type jointly. Meanwhile, jobs are assigned to workers using a destination choice model that predicts the probability of selecting locations based on factors such as sector-specific employment opportunities, commuting costs, and interactions between commuting costs and individual worker attributes. In this way, spatial location choices are more closely aligned with the diverse attributes of agents. The challenge with such an approach is obtaining sufficient microdata to inform the rules and probabilities.
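To illustrate the general shape of such a probabilistic assignment (this is not Zhou et al.'s specification), the sketch below scores candidate work zones with a simple multinomial logit in which utility rises with local employment and falls with commuting cost, and the cost penalty varies with income. All coefficients and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_workplace(worker_income, zone_jobs, commute_cost,
                     b_jobs=1.0, b_cost=-0.08, b_cost_income=0.02):
    """Draw a work zone from a toy multinomial logit destination choice model."""
    # Higher income slightly dampens the commuting-cost penalty (illustrative).
    cost_coef = b_cost + b_cost_income * worker_income
    utility = b_jobs * np.log(zone_jobs) + cost_coef * commute_cost
    prob = np.exp(utility - utility.max())   # softmax over zones
    prob /= prob.sum()
    return rng.choice(len(zone_jobs), p=prob)

zone_jobs = np.array([1200., 300., 4500.])   # jobs available in each zone
commute_cost = np.array([10., 35., 60.])     # cost from this worker's dwelling
zone = assign_workplace(worker_income=1.5, zone_jobs=zone_jobs,
                        commute_cost=commute_cost)
```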

2.4 Decision-making and behavioural dynamics

In practice, people’s decision-making and behaviours are influenced by an array of factors, including their individual characteristics such as wealth, health, education, gender, and age, their social network, and their geographical circumstances. These factors shape – among other things – the information agents are exposed to, the choices open to them, the expectations placed on them, and their personal beliefs and desires (Lobo et al., 2023). Consequently, accurately initialising such factors is important for ensuring that agents are predisposed to make decisions and take actions in ways that reflect how their real-world counterparts might behave. Furthermore, the assignment of psychographic attributes to agents necessitates the prior establishment of these foundational characteristics, as the two are often closely entwined.

Numerous agent decision-making architectures have been proposed (see Wijermans et al. (2023)). Many suggest that a range of agent state attributes could, or even should, be taken into consideration when evaluating information and selecting behaviours. For example, the MoHuB framework (Schlüter et al., 2017) proposes four classes of attributes as potentially influential in the decision-making process: needs/goals, knowledge, assets, and social. In practice, however, the factors taken into consideration in decision-making procedures tend to be much narrower. This is understandable given the higher data demands that richer decision-making procedures entail. However, it is also regrettable given that decision-making is known to draw on many more factors than are currently accounted for, and that the ABM community has worked hard to develop the tools needed to depict these richer processes.
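A hedged sketch of what richer, state-dependent decision-making might look like in code: agent state grouped into the four attribute classes just mentioned, with an option score that draws on all of them. The attribute names, options and weights are invented, and this illustrates the idea rather than implementing MoHuB or any other published architecture.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    needs: dict = field(default_factory=dict)      # e.g. {"solar": 0.7}
    knowledge: dict = field(default_factory=dict)  # e.g. {"knows_solar": True}
    assets: dict = field(default_factory=dict)     # e.g. {"savings": 1500}
    social: dict = field(default_factory=dict)     # e.g. {"peers_adopting": 0.4}

def score_option(agent, option):
    """Toy evaluation drawing on all four attribute classes."""
    if not agent.knowledge.get(f"knows_{option}", False):
        return 0.0                                 # cannot choose the unknown
    affordability = min(1.0, agent.assets.get("savings", 0) / 1000)
    need_match = agent.needs.get(option, 0.0)
    peer_pressure = agent.social.get("peers_adopting", 0.0)
    return 0.5 * need_match + 0.3 * affordability + 0.2 * peer_pressure

agent = AgentState(needs={"solar": 0.7}, knowledge={"knows_solar": True},
                   assets={"savings": 1500}, social={"peers_adopting": 0.4})
best = max(["solar", "insulation"], key=lambda o: score_option(agent, o))
```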

3 Practicalities

Our wish list of features for synthetic population algorithms far exceeds their current capabilities. Perhaps the main issue today is data scarcity, especially concerning less tangible aspects of populations, such as psychological attributes and social networks, where systematic data collection is often more limited. Another significant challenge is that existing algorithms struggle to manage the numerous conditional probabilities involved in creating realistic populations, excelling on niche measures of performance but not from a holistic perspective. Moreover, there are accessibility issues with population synthesis tools. The next generation of methods needs to be made more accessible to non-specialists, through easy-to-use stand-alone tools or plugins for widely used platforms like NetLogo, or they risk not having their potential exploited.

Collectively, these issues may necessitate a fundamental rethink of how synthetic populations are generated. The potential benefits of successfully addressing these challenges are immense. By enhancing the capabilities of synthetic population tools to meet the wish list set out here, we can significantly improve model realism and expand the potential applications of social simulation, as well as strengthen credibility with stakeholders. More than this, though, such advancements would enhance our ability to draw meaningful insights, respecting the complexities of real-world dynamics. Most critically, better representation of the diversity of actors and circumstances reduces the risk of overlooking factors that might adversely impact segments of the population – something there is arguably a moral imperative to strive for.

Acknowledgements

MP & RP were supported by FOSSR (Fostering Open Science in Social Science Research), funded by the European Union – NextGenerationEU under PNRR Grant agreement n. MUR IR0000008. CC was supported by the ERC Starting Grant SEGUE (101039455).

References

Amblard, F., Bouadjio-Boulic, A., Gutiérrez, C.S. and Gaudou, B. 2015, December. Which models are used in social simulation to generate social networks? A review of 17 years of publications in JASSS. In 2015 Winter Simulation Conference (WSC) (pp. 4021-4032). IEEE. https://doi.org/10.1109/WSC.2015.7408556

Bruch, E.E., 2014. How population structure shapes neighborhood segregation. American Journal of Sociology, 119(5), pp.1221-1278. https://doi.org/10.1086/675411

Chapuis, K., Taillandier, P. and Drogoul, A., 2022. Generation of synthetic populations in social simulations: a review of methods and practices. Journal of Artificial Societies and Social Simulation, 25(2). https://doi.org/10.18564/jasss.4762

Cottineau, C., Perret, J., Reuillon, R., Rey-Coyrehourcq, S. and Vallée, J., 2018, March. An agent-based model to investigate the effects of social segregation around the clock on social disparities in dietary behaviour. In CIST2018-Représenter les territoires/Representing territories (pp. 584-589). https://hal.science/hal-01854398v1

Jiang, N., Crooks, A.T., Kavak, H., Burger, A. and Kennedy, W.G., 2022. A method to create a synthetic population with social networks for geographically-explicit agent-based models. Computational Urban Science, 2(1), p.7. https://doi.org/10.1007/s43762-022-00034-1

Lobo, I., Dimas, J., Mascarenhas, S., Rato, D. and Prada, R., 2023. When “I” becomes “We”: Modelling dynamic identity on autonomous agents. Journal of Artificial Societies and Social Simulation, 26(3). https://doi.org/10.18564/jasss.5146

Lovelace, R. and Dumont, M., 2017. Spatial microsimulation with R. Chapman and Hall/CRC. https://spatial-microsim-book.robinlovelace.net

McPherson, M., Smith-Lovin, L. and Cook, J.M., 2001. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1), pp.415-444. https://doi.org/10.1146/annurev.soc.27.1.415

Robins, G., Pattison, P., Kalish, Y. and Lusher, D., 2007. An introduction to exponential random graph (p*) models for social networks. Social Networks, 29(2), pp.173-191. https://doi.org/10.1016/j.socnet.2006.08.002

Schlüter, M., Baeza, A., Dressler, G., Frank, K., Groeneveld, J., Jager, W., Janssen, M.A., McAllister, R.R., Müller, B., Orach, K. and Schwarz, N., 2017. A framework for mapping and comparing behavioural theories in models of social-ecological systems. Ecological Economics, 131, pp.21-35. https://doi.org/10.1016/j.ecolecon.2016.08.008

Wijermans, N., Scholz, G., Chappin, É., Heppenstall, A., Filatova, T., Polhill, J.G., Semeniuk, C. and Stöppler, F., 2023. Agent decision-making: The Elephant in the Room – Enabling the justification of decision model fit in social-ecological models. Environmental Modelling & Software, 170, p.105850. https://doi.org/10.1016/j.envsoft.2023.105850

Wu, G., Heppenstall, A., Meier, P., Purshouse, R. and Lomax, N., 2022. A synthetic population dataset for estimating small area health and socio-economic outcomes in Great Britain. Scientific Data, 9(1), p.19. https://doi.org/10.1038/s41597-022-01124-9

Zell, D., McGrath, C. and Vance, C.M., 2014. Examining the interaction of extroversion and network structure in the formation of effective informal support networks. Journal of Behavioral and Applied Management, 15(2), pp.59-81. https://jbam.scholasticahq.com/article/17938.pdf

Zhou, M., Li, J., Basu, R. and Ferreira, J., 2022. Creating spatially-detailed heterogeneous synthetic populations for agent-based microsimulation. Computers, Environment and Urban Systems, 91, p.101717. https://doi.org/10.1016/j.compenvurbsys.2021.101717


Roxburgh, N., Paolillo, R., Filatova, T., Cottineau, C., Paolucci, M. and Polhill, G. (2025) Outlining some requirements for synthetic populations to initialise agent-based models. Review of Artificial Societies and Social Simulation, 27 Jan 2025. https://rofasss.org/2025/01/29/popsynth


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Exascale computing and ‘next generation’ agent-based modelling

By Gary Polhill, Alison Heppenstall, Michael Batty, Doug Salt, Ricardo Colasanti, Richard Milton and Matt Hare

Introduction

In the past decade we have seen considerable gains in the amount of data and computational power available to us as scientific researchers. Whilst the proliferation of new forms of data can present as many challenges as opportunities (linking data sets, checking veracity, etc.), we can now begin to construct models capable of answering ever more complex and interrelated questions. For example, what happens to individual health and the local economy if we pedestrianise a city centre? What is the impact of increasing travel costs on the price of housing? How can we divert economic investment from prosperous cities and regions to places in economic decline? These advances are slowly positioning agent-based modelling to support decision-makers in making informed, evidence-based decisions. However, ABMs are still rarely used outside of academia, and policy makers find it difficult to mobilise and apply such tools to inform real-world problems: here we explore the background in computing that helps address the question of why such models are so underutilised in practice.

Whilst it has reached a level of maturity (defined as being an accepted tool) within the social sciences, agent-based modelling still has several methodological barriers to cross. These were first highlighted by Crooks et al. (2008) and revisited by Heppenstall et al. (2020), and include robust validation, elicitation of behaviour from data, and scaling up. Whilst other disciplines, such as meteorology, are able to conduct large numbers of simulations (ensemble modelling) using high-performance computing, this capability is largely absent within agent-based modelling. Moreover, many different kinds of agent-based models are being devised, and key issues concern the number and type of agents, which are reflected in the whole computational context in which such models are developed. Clearly there is potential for agent-based modelling to establish itself as a robust policy tool, but this requires access to large-scale computing.

Exascale high-performance computing is defined by speed of calculation: of the order of 10^18 (a billion billion) floating-point operations per second (flops). That is fast enough to calculate the ratios of the ages of every possible pair of people in China in roughly a second. By comparison, modern-day personal computers run at around 10^9 flops (gigascale) – a billion times slower. The same rather pointless calculation of age ratios of the Chinese population would take just over thirty years on a standard laptop at the time of writing (2023). Though agent-based modellers are more interested in the number of instructions incorporating the rules operated by each agent executed per second than in floating-point operations, the speeds of the two are approximately the same.
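The arithmetic behind these figures is easy to check; the short calculation below assumes a population of roughly 1.4 billion and one floating-point operation per ratio.

```python
population = 1.4e9                          # assumed population of China
pairs = population * (population - 1) / 2   # unordered pairs of people
print(pairs / 1e18)                         # seconds at exascale, ~0.98
print(pairs / 1e9 / (365.25 * 24 * 3600))   # years at gigascale, ~31
```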

Anecdotally, the majority of agent-based model simulations are run on desktop personal computers. However, there are examples of the use of high-performance computing environments such as computing clusters (terascale) and cloud services such as Microsoft’s Azure, Amazon’s AWS or Google Cloud (tera- to peta-scale). High-performance computing provides the capacity to do more of what we already do (more runs for calibration, validation and sensitivity analysis) and/or to work at a larger scale (regional or sub-national rather than local), with the number of agents scaled accordingly. As a rough guide, however, since terascale computing is a million times slower than exascale computing, an experiment that currently takes a few days or weeks in a high-performance computing environment could be completed in a fraction of a second at exascale.
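Again the scaling is straightforward: a terascale-to-exascale jump divides wall-clock time by roughly a million, so, for example:

```python
for days in (3, 7, 14):                      # typical experiment durations
    terascale_seconds = days * 24 * 3600
    print(days, "days ->", terascale_seconds / 1e6, "s at exascale")
```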

We are all familiar with poor user interface design in everyday computing, and in particular the frustration of waiting for hourglasses, spinning wheels and progress bars to finish so that we can get on with our work. In fact, the ‘Doherty Threshold’ (Yablonski 2020) stipulates a 400 ms interaction time between human action and computer response for best productivity. If going from 10^9 to 10^18 flops is simply a case of multiplying the speed of computation by a billion, the Doherty Threshold is potentially achievable with exascale computing for simulation experiments that currently require very long wait times to complete.

The scale of performance of exascale computers means that there is scope to go beyond doing-more-of-what-we-already-do and to think more deeply about what we could achieve with agent-based modelling. Could we move past some of the methodological barriers that are characteristic of agent-based modelling? What could we achieve if we had appropriate software support, and how would this affect the processes and practices by which agent-based models are built? Could we move agent-based models to having the same level of ‘robustness’ as climate models, for example? We can conceive of a productivity loop in which an empirical agent-based model is used for sequential experimentation with continual adaptation and change, with further experiments, and perhaps new models, emerging from these workflows to explore tangential issues. Currently, however, we need tools that help us build empirical agent-based models much more rapidly and, critically, that help us find, access and preprocess the empirical data the model will use for initialisation, and then find and confirm parameter values.

The ExAMPLER project

The ExAMPLER (Exascale Agent-based Modelling for PoLicy Evaluation in Real-time) project is an eighteen-month project funded by the Engineering and Physical Sciences Research Council to explore the software, data and institutional requirements to support agent-based modelling at exascale.

With high-performance computing use not being commonplace in the agent-based modelling community, we are interested in finding out what the state-of-the-art is in high-performance computing use by agent-based modellers, undertaking a systematic literature review to assess the community’s ‘exascale-readiness’. This is not just a question of whether the community has the necessary technical skills to use the equipment. It is also a matter of whether the hardware is appropriate to the computational demands that agent-based modellers have, whether the software in which agent-based models are built can take advantage of the hardware, and whether the institutional processes by which agent-based modellers access high-performance computing – especially with respect to the information requested of applicants – are attuned to their needs.

We will then benchmark the state-of-the-art against high-performance computing use in other domains of research: ecology and microsimulation, which are comparable to agent-based social simulation (ABSS); and fields such as transportation, land use and urban econometric modelling that are not directly comparable to ABSS, but have similar computational challenges (e.g. having to simulate many interactions, needing to explore a vast uncharted parameter space, containing multiple qualitatively different outcomes from the same initial conditions, and so on). Ecology might not simulate agents with decision-making algorithms as computationally demanding as some of those used by agent-based modellers of social systems, while a crude characterisation of microsimulation work is that it does not simulate interactions among heterogeneous agents, which affects the parallelisation of simulating them. Land use and transport models usually rely on aggregates of agents, but increasingly these are being disaggregated to finer and finer spatial units, with the units themselves treated more like agents. The ‘discipline-to-be-decided’ might have a community with generally higher technical computing skills than would be expected among social scientists. Benchmarking would allow us to gain better insights into the specific barriers faced by social scientists in accessing high-performance computing.

Two other strands of work in ExAMPLER feature significant engagement with the agent-based modelling community. The project’s imaginary starting point is a computer powerful enough to experiment with an agent-based model that runs in fractions of a second. With a pre-existing agent-based model, we could use such a computer in a one-day workshop to enable a creative discussion with decision-makers about how to handle problems and policies associated with an emerging crisis. But what if we had the tools at our disposal to gather and preprocess data and build models such that these activities could also be achievable in the same day? Or even the same hour? Some of our land use and transportation models are already moving in this direction (Horni, Nagel and Axhausen, 2016). Agent-based modelling would thus become a social activity that facilitates discussion and decision-making that is mindful of complexity and cascading consequences. The practices and procedures associated with building an agent-based model would then have evolved significantly from what they are now, as would the institutions built around accessing and using high-performance computing.

The first strand of work co-constructs, with the agent-based modelling community, scenarios in which agent-based modelling is transformed by the dramatic improvements in computational power that exascale computing entails. These visions will be co-constructed primarily through workshops, the first of which is being held at the Social Simulation Conference in Glasgow – a conference that is well-attended by the European (and wider international) agent-based social simulation community. However, we will also issue a questionnaire to elicit views from the wider community of those who cannot attend one of our events. These exercises have two purposes: to understand the requirements of the community and their visions for the future, and to advertise the benefits that exascale computing could have.

In a second series of workshops, we will develop a roadmap for exascale agent-based modelling that identifies the institutional, scientific and infrastructure support needed to achieve the envisioned exascale agent-based modelling use-cases. In essence, what do we need to have in place to make exascale a reality for the everyday agent-based modeller? This activity is underpinned by training ExAMPLER’s research team in the hardware, software and algorithms that can be used to achieve exascale computation more widely. That knowledge, together with the review of the state-of-the-art in high-performance computing use with agent-based models, can be used to identify early opportunities for the community to make significant gains (Macal and North, 2008).

Discussion

Exascale agent-based modelling is not simply a case of providing agent-based modellers with usernames and passwords on an exascale computer and letting them run their models on it. There are many institutional, scientific and infrastructural barriers that need to be addressed.

On the scientific side, exascale agent-based modelling could potentially be revolutionary in transforming the practices, methods and audiences for agent-based modelling. Because the community is highly diverse, methodological development is challenged both by the lack of opportunity to make it happen and by the sheer range of agent-based modelling applications. Too much standardization and ritualized behaviour associated with ‘disciplining’ agent-based modelling risks losing some of the creative benefits of the cross-disciplinary discussions that agent-based modelling enables us to have. Nevertheless, it is increasingly clear that off-the-shelf methods for designing, implementing and assessing models are ill-suited to agent-based modelling, or – especially in the case of the last of these – fail to do it justice (Polhill and Salt 2017, Polhill et al. 2019). Scientific advancement in agent-based modelling is predicated on having the tools at our disposal to tell the whole story of its benefits, and on enabling non-agent-based-modelling colleagues to understand how to work with the ABM community.

Hence, hardware is only a small part of the story of the infrastructure supporting exascale agent-based modelling. Exascale computers are built using GPUs (Graphics Processing Units) which, bluntly speaking, are specialized computing engines for performing matrix calculations and ‘drawing millions of triangles as quickly as possible’; they are, in any case, different from CPU-based computing. In Table 4 of Kravari and Bassiliades’ (2015) survey of agent-based modelling platforms, only two of the 24 platforms reviewed (Cormas – Bommel et al. 2016 – and GAMA – Taillandier et al. 2019) are not listed as involving Java and/or the Java Virtual Machine. (As it turns out, GAMA does use Java.) TornadoVM (Papadimitriou et al. 2019) is one tool allowing Java Virtual Machines to run on GPUs. Even if we can then run NetLogo on a GPU, specialist GPU-based agent-based modelling platforms such as Richmond et al.’s (2010, 2022) FLAME GPU may be preferable in order to make best use of the highly parallelized computing environment on GPUs.
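As a rough, CPU-side analogy of the programming style GPUs reward (this is numpy, not FLAME GPU or TornadoVM code), the sketch below replaces a per-agent Python loop with a single vectorised update applied to every agent at once; the toy wealth-growth rule is invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents = 1_000_000
wealth = rng.exponential(scale=100.0, size=n_agents)

# Per-agent loop, the style most CPU-based ABM code takes:
# for i in range(n_agents):
#     wealth[i] *= 1.0 + rng.normal(0.0, 0.01)

# Data-parallel equivalent: one operation over all agents simultaneously,
# the pattern that GPU-based platforms are designed to exploit.
wealth *= 1.0 + rng.normal(0.0, 0.01, size=n_agents)
```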

Such software merely gets an agent-based model running on an exascale computer. Realizing some of the visions of future exascale-enabled agent-based modelling requires rather more in the way of software support. For example, the one-day workshop in which an agent-based model is co-constructed with stakeholders either asks a great deal of the developers, who must build a bespoke application in tens of minutes, or requires stakeholders to trust pre-constructed modular components that can be brought together rapidly using a specialist software tool.

As has been noted (e.g. Alessa et al. 2006, para 3.4), agent-based modelling is already challenging for social scientists without programming expertise, and GPU programming is a highly specialized domain in the world of software development. Exascale computing intersects GPU programming with high-performance computing, and issues with the ways in which high-performance computing clusters are typically administered make access to them a significant obstacle for agent-based modellers (Polhill 2022). There are therefore institutional barriers that need to be broken down for the benefits of exascale agent-based modelling to be realized in a community primarily interested in the dynamics of social and/or ecological complexity, and rather less in the technology that enables it to pursue that interest. ExAMPLER aims to provide us with a voice that gets our requirements heard, so that we are not excluded from taking best advantage of advanced developments in computing hardware.

Acknowledgements

The ExAMPLER project is funded by the EPSRC under grant number EP/Y008839/1.  Further information is available at: https://exascale.hutton.ac.uk

References

Alessa, L. N., Laituri, M. and Barton, M. (2006) An “all hands” call to the social science community: Establishing a community framework for complexity modeling using cyberinfrastructure. Journal of Artificial Societies and Social Simulation 9 (4), 6. https://www.jasss.org/9/4/6.html

Bommel, P., Becu, N., Le Page, C. and Bousquet, F. (2016) Cormas: An agent-based simulation platform for coupling human decisions with computerized dynamics. In Kaneda, T., Kanegae, H., Toyoda, Y. and Rizzi, P. (eds.) Simulation and Gaming in the Network Society. Translational Systems Sciences 9, pp. 387-410. doi:10.1007/978-981-10-0575-6_27

Crooks, A. T., Castle, C. J. E. and Batty, M. (2008) Key challenges in agent-based modelling for geo-spatial simulation. Computers, Environment and Urban Systems 32(6), 417-430.

Heppenstall, A., Crooks, A., Malleson, N., Manley, E., Ge, J. and Batty, M. (2020) Future developments in geographical agent-based models: Challenges and opportunities. Geographical Analysis 53(1), 76-91. doi:10.1111/gean.12267

Horni, A., Nagel, K. and Axhausen, K. W. (eds) (2016) The Multi-Agent Transport Simulation MATSim. Ubiquity Press, London, 447-450.

Kravari, K. and Bassiliades, N. (2015) A survey of agent platforms. Journal of Artificial Societies and Social Simulation 18 (1), 11. https://www.jasss.org/18/1/11.html

Macal, C. M. and North, M. J. (2008) Agent-Based Modeling and Simulation for EXASCALE Computing. http://www.scidac.org

Papadimitriou, M., Fumero, J., Stratikopoulos, A. and Kotselidis, C. (2019) Towards prototyping and acceleration of Java programs onto Intel FPGAs. Proceedings of the 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). doi:10.1109/FCCM.2019.00051

Polhill, G. (2022) Antisocial simulation: using shared high-performance computing clusters to run agent-based models. Review of Artificial Societies and Social Simulation, 14 Dec 2022. https://rofasss.org/2022/12/14/antisoc-sim

Polhill, G. and Salt, D. (2017) The importance of ontological structure: why validation by ‘fit-to-data’ is insufficient. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity (2nd edition), pp. 141-172. doi:10.1007/978-3-319-66948-9_8

Polhill, J. G., Ge, J., Hare, M. P., Matthews, K. B., Gimona, A., Salt, D. and Yeluripati, J. (2019) Crossing the chasm: a ‘tube-map’ for agent-based simulation of policy scenarios in spatially-distributed systems. Geoinformatica 23, 169-199. doi:10.1007/s10707-018-00340-z

Richmond, P., Chisholm, R., Heywood, P., Leach, M. and Kabiri Chimeh, M. (2022) FLAME GPU (2.0.0-rc). Zenodo. doi:10.5281/zenodo.5428984

Richmond, P., Walker, D., Coakley, S. and Romano, D. (2010) High performance cellular level agent-based simulation with FLAME for the GPU. Briefings in Bioinformatics 11 (3), 334-347. doi:10.1093/bib/bbp073

Taillandier, P., Gaudou, B., Grignard, A., Huynh, Q.-N., Marilleau, N., Caillou, P., Philippon, D. and Drogoul, A. (2019) Building, composing and experimenting complex spatial models with the GAMA platform. Geoinformatica 23 (2), 299-322. doi:10.1007/s10707-018-00339-6

Yablonski, J. (2020) Laws of UX. O’Reilly. https://www.oreilly.com/library/view/laws-of-ux/9781492055303/


Polhill, G., Heppenstall, A., Batty, M., Salt, D., Colasanti, R., Milton, R. and Hare, M. (2023) Exascale computing and ‘next generation’ agent-based modelling. Review of Artificial Societies and Social Simulation, 9 Mar 2023. https://rofasss.org/2023/09/29/exascale-computing-and-next-gen-ABM


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)