Why Object-Oriented Programming is not the best method to implement Agent-Based Models

By Martin Hinsch

Research Department of Genetics, Evolution and Environment
University College London

Introduction

A considerable part of the history of software engineering consists of attempts to make the complexity of software systems manageable in the sense of making them easier to implement, understand, modify, and extend. An important aspect of this is the separation of concerns (SoC, Dijkstra 1982). SoC reduces complexity by dividing the implementation of a system into (presumably simpler) problems that can be solved without having to spend too much thought on other aspects of the system. Architecturally, SoC is accomplished through modularity and encapsulation. This means that parts of the system that have strong inter-dependencies are put together into a “module” (in the widest sense) that presents to the outside only those aspects of itself that are required to interact with other modules. This is based on the fundamental assumption that the visible behaviour of a component (its interface) is simpler than its potentially complex inner workings, which can be ignored when interacting with it.

The history of Object-Oriented Programming (OOP) is complicated and there are various flavours and philosophies of OOP (Black 2013). However, one way to see Object Orientation (OO) is as a coherent, formalised method to ensure modularisation and encapsulation. In OOP, data and the functions that create and manipulate those data are combined in objects. Depending on the programming language, some object functions (methods) and properties can be made inaccessible to users of the object, thereby hiding internal complexity and presenting a simplified behaviour to the outside. Many OO languages furthermore allow for polymorphism, i.e. different types of objects can have the same interface (but different internal implementations) and can therefore be used interchangeably.
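As a minimal Python sketch of these two ideas (the class and all names are invented for illustration, not taken from any particular ABM): internal state is hidden behind a small public interface, and a subtype with different internals can be substituted for its parent.

    # Minimal sketch of encapsulation and polymorphism (illustrative only).
    from collections import deque

    class Queue:
        """Users only see push/pop; the internal representation is hidden."""
        def __init__(self):
            self._items = []  # leading underscore: internal, by convention

        def push(self, x):
            self._items.append(x)

        def pop(self):
            return self._items.pop(0)

    class FastQueue(Queue):
        """Same interface, different internals -- usable interchangeably."""
        def __init__(self):
            self._items = deque()

        def pop(self):
            return self._items.popleft()

    def drain(q):
        # Polymorphism: works with any object exposing the Queue interface.
        q.push(1)
        q.push(2)
        print(q.pop(), q.pop())

    drain(Queue())      # 1 2
    drain(FastQueue())  # 1 2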

After its inception in the 1960s (Dahl and Nygaard 1966) OOP gained popularity throughout the 80s and 90s, to the point that many established programming languages were retrofitted with language constructs that enabled OOP (C++, Delphi, OCaml, CLOS, Visual Basic, …) and new languages were designed based on OOP principles (Smalltalk, Python, Ruby, Java, Eiffel, …). By the mid-90s many computer science departments taught OOP not just as one useful paradigm among others, but as the paradigm that would make all other methods obsolete.

This is the climate in which agent-based or individual-based modelling (ABM) emerged as a new modelling methodology. In ABM the behaviour of a system is not modelled directly; instead the model consists of (many, similar or identical) individual components. The interactions between these components lead to the emergence of global behaviour.

While the origins of the paradigm reach further back, it only started to become popular in the 90s (Bianchi and Squazzoni 2015). As the method requires programming expertise, which in academia was rare outside of computer science, the majority of early ABMs were created by or with the help of computer scientists, who in turn applied the programming paradigm that was most popular at the time. At first glance OOP also seems to be an excellent fit for ABM – agents are objects, their state is represented by object variables, and their behaviour by methods. It is therefore no surprise that OOP has become and remained the predominant way to implement ABMs (even after the enthusiasm for OOP has waned to some degree in mainstream computer science).

In the following I will argue not only that OOP is not necessarily the best method to write ABMs, but that it has, in fact, some substantial drawbacks. More specifically, I think that the claim that OOP is uniquely suited for ABM is based on a conceptual confusion that can lead to a number of bad modelling habits. Furthermore, the specific requirements of ABM implementations do not mesh well with an OOP approach.

Sidenote: Strictly speaking, for most languages we have to distinguish between objects (the entities holding values) and classes (the types that describe the makeup and functionality of objects). This distinction is irrelevant for the point I am making, therefore I will only talk about objects.

Conceptual confusion

Just about every introduction to OOP I have come across starts with a simple toy example that demonstrates the core principles of the method. Usually a few classes corresponding to everyday objects from the same category are declared (e.g. animal, cat, dog or vehicle, car, bicycle). These classes have methods that usually correspond to activities of these objects (bark, meow, drive, honk).

Beyond introducing the basic syntax and semantics of the language constructs involved, these introductions also convey a message: OOP is easy and intuitive because OOP objects are just representations of objects from the real world (or the problem domain). OOP is therefore simply the process of translating objects in the problem domain into software objects.
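In Python, such an introductory example might look like the following sketch (purely illustrative of the tutorial style described here):

    # The classic tutorial pattern: classes as stand-ins for everyday objects.
    class Animal:
        def speak(self) -> str:
            raise NotImplementedError

    class Cat(Animal):
        def speak(self) -> str:
            return "meow"

    class Dog(Animal):
        def speak(self) -> str:
            return "bark"

    for pet in (Cat(), Dog()):
        print(pet.speak())  # meow, bark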

OOP objects are not representations of real-world objects

While this approach makes the concept of OOP more accessible, it is misleading. At its core the motivation behind OOP is the reduction of complexity by rigorous application of some basic tenets of software engineering (see Introduction). OOP objects therefore are not primarily defined by their representational relationship to real-world objects, but by their functionality as modules in a complicated machine.

For programmers, this initial misunderstanding is harmless as they will undergo continued training. For nascent modellers without a computer science background, however, these simple explanations often remain the extent of their exposure to software engineering principles, and the misunderstanding sticks. This is unfortunately further reinforced by many ABM tutorials. Similar to introductions to OOP, they present the process of implementing an ABM as simply consisting of defining agents as objects, with object properties that represent the real-world entities’ state and methods that implement their behaviour.

At this point a second misunderstanding almost automatically follows. By emphasising a direct correspondence between real-world entities and OOP objects, it is often implied (and sometimes explicitly stated) that modelling is, in fact, the process of translating from one to the other.

OOP is not modelling

As mentioned above, this is a misinterpretation of the intent behind OOP – to reduce software complexity. Beyond that, however, it is also a misunderstanding of the process of modelling. Unfortunately, it connects very well with a common “lay theory of modelling” that I have encountered many times when talking to domain experts with no or little experience with modelling: the idea that a model is a representation of a real system where a “better” or “more correct” representation is a better model.

Models are not (simply) representations

There are various ways to use a model and reasons to do it (Epstein 2008), but put in the most general terms, a (simulation) model is an artificial (software) system that in one way or another teaches us something about a real system that is similar in some aspects (Noble 1997). Importantly, however, the question or purpose for which the model was built determines which aspects of the real system will be part of the model. As a corollary, even given the same real-world system, two models with different questions can look very different, to the point that they use different modelling paradigms (Hinsch and Bijak 2021).

Experienced modellers are aware of all this, of course, and will not be confused by objects and methods. For novices and domain experts without that experience, however, OOP and the way it is taught in connection with ABM can lead to a particular style of modelling where, first, all entities in the system are captured as agents and, second, these agents are equipped with more and more properties and methods, “because it is more realistic”.

An additional issue with this is that it puts the focus of the modelling process on entities. The direct correspondence between nouns in our (natural language-based) description of the model and classes in our object-oriented implementation makes it very tempting to think about the model solely in terms of entities and their properties.

ABMs are not (just) collections of entities

There are other reasons to build a simulation model, but in most cases the dynamic behaviour of the finished model will be crucial. The reason to use an ABM as opposed to, say, a differential equation model, is not that the system is composed of entities, but that the behaviour of the system depends in such a way on interactions between entities that it cannot be reduced to aggregate population behaviour. The “interesting” part of the model is therefore not the agents per se, but their behaviour and the interactions between them. It is only possible to understand the model’s macroscopic behaviour (which is often the goal of ABM) by thinking about it in terms of microscopic interactions. When creating the model it is therefore crucial to think not (only) about which entities are part of the system, but primarily which entity-level interactions and behaviours are likely to affect the macroscopic behaviour of interest.

To summarise the first part, OOP is a software engineering methodology, not a way to create models. This unfortunately often gets lost in the way it is commonly taught (in particular in connection with ABM), so that OOP can easily lead to a mindset that sees models as representations, emphasises “realism”, and puts the focus on entities rather than the more important interactions.

Practical considerations

But assuming a seasoned modeller who understands all this – surely there would be no harm in choosing an OOP implementation?

To a first approximation this is certainly true. The points discussed above apply to the modelling process, so assuming all of the mentioned pitfalls are avoided, the implementation should only be a matter of translating a formal structure into working program code. As long as the code is exactly functionally equivalent to the formal model, it should not matter which programming paradigm is used.

In reality things are a little bit more complicated, however. For a number of reasons model code has different properties and requirements to “normal” code. These combine to make OOP not very suitable for the implementation of ABMs.

OOP does not reduce complexity of an ABM

Any non-trivial piece of software is too complicated to understand all at once. At the same time, we usually want its behaviour to be well-defined, well-understood, and predictable. OOP is a way to accomplish this by partitioning the complexity into manageable pieces. By composing the program of simple(r) modules, which in turn have well-defined, well-understood, and predictable behaviour and which interact in a simple, predictable manner, the complexity of the system remains manageable and understandable.

An ABM has parts that we want to be well-understood and predictable as well, such as parameterisation, data output, visualisation, etc. For these “technical” parts of the simulation program, the usual rules of software engineering apply, and OOP can be a helpful technique. The “semantic” part, i.e. the implementation of the model itself, is different, however. By definition, the behaviour of a model is unpredictable and difficult to understand. Furthermore, in an ABM the complexity of the model behaviour is the result of the (non-linear) interactions between its components – the agents – which themselves are often relatively simple. The benefit of OOP – a reduction in complexity by hiding it behind simple object interfaces – therefore does not apply to the semantic part of the implementation of an ABM.

OOP makes ABMs more difficult to read and understand

There is more, however. Making code easy to read and understand is an important part of good practice in programming. This holds even more so for ABM code.

First, most ordinary application code is constructed to produce very specific runtime behaviour. To put it very simply – if the program does not show that behaviour, we have found an error; if it does, our program is by definition correct. For ABM code the behaviour cannot be known in advance (otherwise we would not need to simulate). Some of it can be tested by running edge cases with known behaviour, but to a large degree making sure that the simulation program is implemented correctly has to rely on inspection of the source code.

Second, for more complicated models such as ABMs the simulation program is very rarely just the translation of a formal specification. Language is inherently ambiguous and to my knowledge there is no practical mathematical notation for ABMs (or software in general). Given further factors such as turnaround times of scientific work, ambiguity of language and documentation drift, it is often unavoidable that the code remains the ultimate authority on what the model does. In fact, good arguments have been made to embrace this reality and its potential benefits (Meisser 2016), but even so we have to live with the reality that for most ABMs, most of the time, the code is the model.

Finally, an important part of the modelling process is working out the mechanisms that lead to the observed behaviour. This involves trying to relate the observed model behaviour to the effect of agent interactions, often by modifying parameter values or making small changes to the model itself. During this process, being able to understand at a glance what a particular piece of the model does can be very helpful.

For all of these reasons, readability and clarity are paramount for ABM code. Implementing the model in an OO manner directly contradicts this requirement. In an OO implementation we would try to implement most functionality as methods of an object. The processes that make up the dynamic behaviour of the model – the interactions between the agents – are then split into methods belonging to various objects. Someone who tries to understand – or modify – a particular aspect of the behaviour then has to jump between these methods, often distributed over different files, and assemble the interactions that actually take place in their mind. Furthermore, encapsulation, i.e. the hiding of complexity behind simple interfaces, can make ABM code more difficult to understand by giving the misleading impression of simplicity. If we encounter agent.get_income() for example, we might access a simple state variable or we might get the result of a complex calculation. For normal code this would not make a difference, since the potential complexity hidden behind that function call should not affect the caller. For ABM code, however, the difference might be crucial.
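To make the agent.get_income() example concrete, consider this hypothetical Python sketch (both classes and all attribute names are invented for illustration):

    # Two agents with an identical interface; what hides behind the call differs.
    class SimpleAgent:
        def __init__(self, income: float):
            self._income = income

        def get_income(self) -> float:
            return self._income  # just a stored state variable

    class ComplexAgent:
        def __init__(self, wage: float, hours: float, tax_rate: float, benefits: float):
            self._wage, self._hours = wage, hours
            self._tax_rate, self._benefits = tax_rate, benefits

        def get_income(self) -> float:
            # A whole sub-model hides behind the same innocuous call --
            # invisible at the call site, but potentially crucial for
            # understanding the model's dynamics.
            gross = self._wage * self._hours
            return gross * (1 - self._tax_rate) + self._benefits

    # At the call site both look identical:
    for agent in (SimpleAgent(1000.0), ComplexAgent(25.0, 40.0, 0.3, 200.0)):
        print(agent.get_income())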

To sum up the second part – due to the way complexity arises in ABMs an OOP implementation does not lead to simplification, but on the contrary can make the code more difficult to understand and maintain.

Conclusion and discussion

Obviously none of the points mentioned above are absolutes and excellent models have been created using object-oriented languages and implementation principles. However, I would like to argue that the current state of affairs where object-orientation is uncritically presented as the best or even only way to implement agent-based models does on average lead to worse models and worse model implementations. I think that in the future any beginner’s course on agent-based modelling should at least:

  • Clarify the difference between model and implementation.
  • Show examples of the same model implemented according to a number of different paradigms.
  • Emphasise that ABMs are about interactions, not entities.

Concerning best practices for implementation, I think readability is the best guideline. Personally, I have found it useful to implement agents as “shallow” objects with the rule of thumb that only functions that a) have an obvious meaning and b) only affect the agent in question become methods, implemented at the same place as the agent definition. Everything else is implemented as free functions, which can then be sorted into files by process, e.g. ‘reproduction’ or ‘movement’. This avoids philosophical problems – does infection in a disease model, for example, belong to the agents, some environment object or maybe even a disease object? But above all it makes it easy to quickly find and understand a specific aspect of the model.
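As a rough Python sketch of this style (all names and the infection process are invented for illustration, not taken from any existing model): the agent is a thin data container whose only methods are obvious and agent-local, while the infection process lives in free functions that would be collected in a file of their own.

    # Illustrative sketch: "shallow" agents plus free functions per process.
    from dataclasses import dataclass, field
    import random

    @dataclass
    class Person:
        infected: bool = False
        contacts: list = field(default_factory=list)

        def is_infectious(self) -> bool:
            # Obvious meaning, affects only this agent -> fine as a method,
            # defined in the same place as the agent itself.
            return self.infected

    # --- conceptually a separate file, e.g. infection.py ---

    def infect(source: Person, target: Person, p_transmit: float,
               rng: random.Random) -> None:
        # Transmission involves two agents (and arguably the disease), so it
        # is a free function rather than a method of either agent.
        if source.is_infectious() and not target.infected and rng.random() < p_transmit:
            target.infected = True

    def infection_step(population: list, p_transmit: float,
                       rng: random.Random) -> None:
        # The whole infection process in one place, easy to find and read.
        for person in population:
            for contact in person.contacts:
                infect(person, contact, p_transmit, rng)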

If at the same time the model code is kept as independent as possible of the parts of the code that manage technical infrastructure (such as parameter loading or the GUI), we can maintain the implementation of the model (and only the model) as a self-contained entity in a form that is optimised for clarity and readability.

References

Bianchi, Federico, and Flaminio Squazzoni. 2015. “Agent-Based Models in Sociology.” Wiley Interdisciplinary Reviews: Computational Statistics 7 (4): 284–306. https://doi.org/10.1002/wics.1356.

Black, Andrew P. 2013. “Object-Oriented Programming: Some History, and Challenges for the Next Fifty Years.” Information and Computation, Fundamentals of Computation Theory, 231 (October): 3–20. https://doi.org/10.1016/j.ic.2013.08.002.

Dahl, Ole-Johan, and Kristen Nygaard. 1966. “SIMULA: An ALGOL-Based Simulation Language.” Commun. ACM 9 (9): 671–78. https://doi.org/10.1145/365813.365819.

Dijkstra, Edsger W. 1982. “On the Role of Scientific Thought.” In Selected Writings on Computing: A Personal Perspective, edited by Edsger W. Dijkstra, 60–66. New York, NY: Springer. https://doi.org/10.1007/978-1-4612-5695-3_12.

Epstein, Joshua M. 2008. “Why Model?” Journal of Artificial Societies and Social Simulation 11 (4): 12. https://doi.org/10.13140/2.1.5032.9927.

Hinsch, Martin, and Jakub Bijak. 2021. “Principles and State of the Art of Agent-Based Migration Modelling.” In Towards Bayesian Model-Based Demography: Agency, Complexity and Uncertainty in Migration Studies. Methodos Series 17, 33-49.

Meisser, Luzius. 2016. “The Code Is the Model.” International Journal of Microsimulation 10 (3): 184–201. https://doi.org/10.34196/ijm.00169.

Noble, Jason. 1997. “The Scientific Status of Artificial Life.” In Poster Presented at the Fourth European Conference on Artificial Life (ECAL97), Brighton, UK.

Hinsch, M. (2026) Why Object-Oriented Programming is not the best method to implement Agent-Based models. Review of Artificial Societies and Social Simulation, 3 Feb 2026. https://rofasss.org/2026/02/03/oop


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Nigel Gilbert

By Corinna Elsenbroich & Petra Ahrweiler

The first piece on winners of the European Social Simulation Association’s Rosaria Conte Outstanding Contribution Award for Social Simulation.

“Gilbert, a former sociologist of science, has been one of the chief links in Britain between computer scientists and sociologists of science” (Collins 1995, p. 294)

Nigel has always been and still is a sociologist – not only of science, but also of technology, innovation, methods and many other subfields of sociology with important contributions in theory, empirical research and sociological methods.

He has pioneered a range of sociological areas such as Sociology of Scientific Knowledge, Secondary Analysis of Government Datasets, Access to Social Security Information, Social Simulation, and Complexity Methods of Policy Evaluation.

Collins is right, however, that Nigel is one of the chief links between sociologists and computer scientists in the UK and beyond. This led to him becoming the first practising social scientist to be elected a Fellow of the Royal Academy of Engineering (1999). As the principal founding father of agent-based modelling as a method for the social sciences in Europe, he initiated, promoted and institutionalised a completely novel way of doing social science through the Centre for Research in Social Simulation (CRESS) at the University of Surrey and the Journal of Artificial Societies and Social Simulation (JASSS), and founded Sociological Research Online (1993) and Social Research Update. Nigel has hundreds of publications on all aspects of social simulation and seminal books such as Simulating Societies: The Computer Simulation of Social Phenomena (Gilbert & Doran 1994), Artificial Societies: The Computer Simulation of Social Life (Gilbert & Conte 1995), Simulation for the Social Scientist (Gilbert & Troitzsch 2005), and Agent-Based Models (Gilbert 2019). His entrepreneurial spirit and acumen resulted in over 25 large project grants (across the UK and Europe), often in close collaboration with policy and decision makers to ensure real-life impact, a simulation platform on innovation networks called SKIN, and a spin-off company, CECAN Ltd, which trains practitioners in complexity methods and brings their use to policy evaluation projects.

Nigel is a properly interdisciplinary person, turning to the sociology of scientific knowledge in his PhD under Michael Mulkay after graduating in Engineering from Cambridge’s Emmanuel College. He joined the Sociology Department at the University of Surrey in 1976 where he became professor of sociology in 1991. Nigel was appointed Commander of the Order of the British Empire (CBE) in 2016 for contributions to engineering and social sciences.

He was the second president of the European Social Simulation Association (ESSA), the originator of the SIMSOC mailing list, launched and edited the Journal of Artificial Societies and Social Simulation from 1998 to 2014, and was the first holder of the Rosaria Conte Outstanding Contribution Award for Social Simulation in 2016, a unanimous decision by the ESSA Management Committee.

Despite all these achievements and successes, Nigel is the most approachable, humble and kindest person you will ever meet. Whenever you are in trouble, he is the person who will help you take a step forward when you need a helping hand. When asked, after receiving the CBE and all the rest, which recognition makes him most happy, he said, with the unique Nigel Gilbert twinkle in his eye, “my Rosaria Conte Award”.

References

Collins, H. (1995). Science studies and machine intelligence. In Handbook of Science and Technology Studies, Revised Edition (pp. 286-301). SAGE Publications, Inc., https://doi.org/10.4135/9781412990127

Gilbert, N., & Doran, J. (Eds.). (1994). Simulating societies: The computer simulation of social phenomena. Routledge.

Gilbert, N., & Conte, R. (1995). Artificial societies: The computer simulation of social life. Routledge. https://library.oapen.org/handle/20.500.12657/24305

Gilbert, N. (2019). Agent-based models. Sage Publications.

Gilbert, N., & Troitzsch, K. (2005). Simulation for the social scientist. Open University Press; 2nd edition.


Elsenbroich, C. & Ahrweiler, P. (2025) Nigel Gilbert. Review of Artificial Societies and Social Simulation, 3 Apr 2025. https://rofasss.org/2025/04/03/nigel-gilbert


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Quantum computing in the social sciences

By Emile Chappin and Gary Polhill

The dream

What could quantum computing mean for the computational social sciences? Although quantum computing is at an early stage, this is the right time to dream about precisely that question for two reasons. First, we need to keep the computational social sciences ‘in the conversation’ about use cases for quantum computing to ensure our potential needs are discussed. Second, thinking about how quantum computing could affect the way we work in the computational social sciences could lead to interesting research questions, new insights into social systems and their uncertainties, and form the basis of advances in our area of work.

At first glance, quantum computing and the computational social sciences seem unrelated. Computational social science uses computer programs written in high-level languages to explore the consequences of assumptions as macro-level system patterns based on coded rules for micro-level behaviour (e.g., Gilbert, 2007). Quantum computing is in an early phase, with the state of the art being of the order of hundreds of qubits [1],[2], and a wide range of applications are envisioned (Hassija et al., 2020), e.g., in the areas of physics (Di Meglio et al., 2024) and drug discovery (Blunt et al., 2022). Hence, the programming of quantum computers is also in an early phase. Major companies (e.g., IBM, Microsoft, Alphabet, Intel, Rigetti Computing) are investing heavily and have put out high expectations – though how much of this is hyperbole to attract investors and how much is backed up by substance remains to be seen. This means it is still hard to comprehend what opportunities may come from scaling up.

Our dream is that quantum computing enables us to represent human decision-making on a much larger scale, do more justice to how decisions come about, and embrace the influences people have on each other. It would respect that people’s actual choices are undetermined until they actually have to act. On a philosophical level, these features are consistent with how quantum computation operates. Applying quantum computing to decision-making with interactions may help us inform or discover behavioural theory and contribute to complex systems science.

The mysticism around quantum computing

There is mysticism around what qubits are. To start thinking about how quantum computing could be relevant for computational social science, there is no direct need to understand how qubits are physically realised. However, it is necessary to understand the logic of how quantum computers operate. At the logical level, there are similarities between quantum and traditional computers.

The main similarity is that the building blocks are bits that are either 0 or 1 – though for qubits this only holds once you measure them. A second similarity is that quantum computers work with ‘instructions’: quantum ‘processors’ alter the state of the bits in a ‘memory’ using programs that comprise sequences of ‘instructions’ (e.g., Sutor, 2019).

There are also differences: 1) qubits are programmed to have probabilities of being a zero or a one, 2) qubits have no determined value until they are measured, and 3) multiple qubits can be entangled, which means that their values (when measured) depend on each other.
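A minimal Qiskit sketch (assuming Qiskit is installed; we use it here simply because it is mentioned below, and the exact API may change between versions) illustrates all three differences at once:

    # Two entangled qubits: each is 0 or 1 only once measured, with
    # probability 1/2 each, and the two measured values always agree.
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h(0)      # put qubit 0 in an equal superposition of 0 and 1
    qc.cx(0, 1)  # entangle qubit 1 with qubit 0

    state = Statevector.from_instruction(qc)
    print(state.sample_counts(shots=1000))
    # e.g. {'00': 513, '11': 487} -- never '01' or '10'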

Operationally speaking, quantum computers are expected to augment conventional computers in a ‘hybrid’ computing environment. This means we can expect to use traditional computer programs to do everything around a quantum program, not least to set up and analyse the outcomes.

Programming quantum computers

So far, programming languages for quantum computing are low-level, like assembly languages for regular machines. Quantum programs are therefore written very close to ‘the hardware’. Similarly, in the early days of electronic computers, instructions were written for processors to perform directly: punched cards contained machine language instructions. Over time, computers got bigger, more was asked of them, and their use became more widespread and embedded in everyday life. At a practical level, writing ever-larger programs in machine language for different processors, each with its own instruction set, became more and more unwieldy. Higher-level languages were developed, and reached a point where modellers could use them to describe and simulate dynamic systems. Our code is still ultimately translated into these lower-level instructions when we compile software, or it is interpreted at run-time. The instructions now developed for quantum computing are akin to the early days of conventional computing, but the development of higher-level programming languages for quantum computers may happen quickly.

At the start, qubits are put in entangled states (e.g., Sutor, 2019); the number of qubits at your disposal makes up the memory. A quantum computer program is a sequence of instructions that is executed. Each instruction alters the memory, but only by changing the probabilities of qubits being 0 or 1 and their entanglement. Sequences of instructions are packaged into so-called quantum circuits. The instructions operate on all qubits at the same time (you can think of this in terms of all probabilities needing to add up to 100%). This means the speed of a quantum program does not depend on the scale of the computation in number of qubits, but only on the number of instructions that one executes in a program. Since qubits can be entangled, quantum computing can do calculations that would take too long to run on a normal computer.

Many quantum instructions are their own inverse: if you execute such an instruction twice, you are back at the state before the first operation. More generally, every quantum instruction apart from measurement is reversible, so you can undo a quantum program by executing the inverses of its instructions in reverse order. The only exception is the so-called ‘read’ instruction, by which the value of each qubit is determined to be either 1 or 0. This is the natural end of a quantum program.
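In Qiskit, for example, this reversal is a one-liner (a sketch under the same assumptions as above; inverse() works only on circuits without measurements):

    # Undo a quantum program by appending its inverse: the inverted
    # instructions in reverse order.
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)

    undone = qc.compose(qc.inverse())  # program followed by its reversal
    print(Statevector.from_instruction(undone))
    # back to the initial state |00>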

Recent developments in quantum computing and their roadmaps

Several large companies such as Microsoft, IBM and Alphabet are investing heavily in developing quantum computing. The current route is to move up in the scale of these computers with respect to the number of qubits they have and the number of gates (instructions) that can be run. In its roadmap, IBM suggests growing to 7,500 gates as early as 2025 [3]. At the same time, programming languages for quantum computing are being developed, on the basis of the types of instructions described above. Already, researchers can gain access to actual quantum computers (or run quantum programs on simulated quantum hardware). For example, IBM’s Qiskit [4] is one of the first open-source software development kits for quantum computing.

A quantum computer doing agent-based modelling

The exponential growth in quantum computing capacity (Coccia et al., 2024) warrants considering how it may be used in the computational social sciences. Here is a first sketch. What if there is a behavioural theory that says something about ‘how’ different people decide, in a specific context, on a specific behavioural action? Can we translate observed behaviour into the properties of a quantum program and explore the consequences of what we can observe? Or, in contrast, can we unravel the assumptions underneath our observations? Could we look at alternative outcomes that could also have been possible in the same system, under the same conceptualization? Given what we observe, what other system developments could have emerged that are also possible (and not highly unlikely)? Can we unfold possible pathways without brute-forcing a large experiment? These questions are, we believe, different when approached from a perspective of quantum computing. For one, the reversibility of quantum programs (until measuring) may provide unique opportunities. This also means that doing such analyses may inspire new kinds of social theory, or it may give a reflection on the use of existing theory.

One of the early questions is how we may use qubits to represent modelled elements in social simulations. Here we sketch basic alternative routes, with alternative ideas. For each strand we include a very rudimentary application to both Schelling’s model of segregation and the Traffic Basic model, both present in the NetLogo model library.

Qubits as agents

A basic option could be to represent an agent by a qubit. Thinking of one type of stylized behaviour – an action that can be taken – a qubit could represent whether that action is taken or not. Instructions in the quantum program would capture the relations between actions that can be taken by the different agents, and interventions that may affect specific agents. For Schelling’s model, this would mean showing whether segregation takes place or not. For Traffic Basic, it would be the probability of traffic jams occurring. Scaling up would mean we would be able to represent many interacting agents without the simulation slowing down. This is, by design, abstract and stylized. But it may help to answer whether a dynamic simulation on a quantum computer can be obtained and visualized.
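As a deliberately crude Qiskit sketch of this route (our own illustration; the propensities and the neighbour-influence gates are invented, not derived from either model):

    # "Qubits as agents": one qubit per agent encodes whether that agent
    # takes the action (e.g. moving, in a Schelling-like reading), and
    # entangling gates crudely stand in for neighbour influence.
    import math
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    propensities = [0.2, 0.5, 0.7, 0.9]   # each agent's tendency to act
    qc = QuantumCircuit(len(propensities))

    for i, p in enumerate(propensities):
        qc.ry(2 * math.asin(math.sqrt(p)), i)  # p = probability of measuring 1

    for i in range(len(propensities) - 1):
        qc.cx(i, i + 1)                        # neighbours influence each other

    # One 'measurement' of the whole population at once:
    print(Statevector.from_instruction(qc).sample_counts(shots=1000))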

Decision rules coded in a quantum computer

A second option is for an agent to perform a quantum program as part of its decision rules. The decision-making structure should then match the logic of a quantum computer. This may be a relevant ontological reference to how brains work and some of the theory that exists on cognition and behaviour. Consider a NetLogo model with agents that have a variety of properties that get translated to a quantum program. A key function for agents would be to perform a quantum calculation on the basis of a set of inputs. The program would then capture how different factors interact and whether the agent performs specific actions, i.e., shows particular behaviour. For Schelling’s segregation model, it would be the decision either to move (and in what direction) or not. For Traffic Basic it would lead to a unique conceptualization of heterogeneous agents. But for such simple models it would not necessarily benefit from the scale advantage that quantum computers have, because most of the computation occurs on traditional computers and the decision logic of these models is limited in scope. Rather, it invites the development of much richer and very different representations of how decisions are made by humans. Different brain functions may all be captured: memory, awareness, attitudes, considerations, etc. If one agent’s decision-making structure fits in a quantum computer, experiments can already be set up, running one agent after the other (just as happens on traditional computers). And if a small, reasonable number of agents fits, one could imagine group-level developments. If not humans, this could represent companies that function together, either in a value chain or as competitors in a market. Because of this, it may be revolutionary: let’s consider this as quantum agent-based modelling.
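A very rudimentary sketch of such a hybrid setup (the decision rule and all parameters are invented for illustration; the classical simulation loop calls a small quantum program as each agent's decision rule):

    # Hybrid sketch: a classical agent whose move/stay decision
    # (Schelling-style) is delegated to a one-qubit quantum program.
    import math
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    def quantum_decide(unhappiness: float) -> bool:
        # Rotate a qubit so that p(move) equals the agent's unhappiness,
        # then 'measure' by sampling the resulting state once.
        qc = QuantumCircuit(1)
        qc.ry(2 * math.asin(math.sqrt(unhappiness)), 0)
        outcome = Statevector.from_instruction(qc).sample_counts(shots=1)
        return "1" in outcome  # qubit measured as 1 -> move

    # Classical simulation loop calling the quantum decision rule:
    agents = [{"unhappiness": u} for u in (0.1, 0.4, 0.8)]
    for agent in agents:
        agent["moves"] = quantum_decide(agent["unhappiness"])
    print(agents)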

Using entanglement

Intuitively, one could consider the entanglement of qubits to represent either the connections between different functions in decision making, the dependencies between agents that would typically interact, or the effects of policy interventions on agent decisions. Entanglement of qubits could also represent the interaction of time steps, capturing path dependencies of choices, limiting or determining future options. The reverse of memory is also conceivable: what if the simulation captures some form of anticipation by entangling future options with current choices? Simulations of decisions may then be limited, myopic in their ability to forecast. Thinking through such experiments, and doing the work, may inspire new heuristics that represent the bounded rationality of human decision making. For Schelling’s model this could be local entanglement restricting movement, or restricting movement because of anticipated future events, which contributes to keeping the status quo. For Traffic Basic, one could forecast traffic jams and discover heuristics to avoid them, which in turn may inspire policy interventions.

Quantum programs representing system-level phenomena

The other end of the spectrum can also be conceived. As well as observing other agents, agents could interact with a system in order to make their observations and decisions, where the system with which they interact is itself a quantum program. The system could be an environmental or physical system, for example. It would be able to have the stochastic, complex nature that real-world systems show. For some systems, problems could possibly be represented in an innovative way. For Schelling’s model, it could be the natural system with resources that agents benefit from if they are in the surroundings, resources having their own dynamics depending on usage. For Traffic Basic, it may represent complexities in the road system that agents can account for while adjusting their speed.

Towards a roadmap for quantum computing in the social sciences

What would be needed to use quantum computation in the social sciences? What can we achieve by taking the power of high-performance computing combined with quantum computers when the latter scale up? Would it be possible to reinvent how we try to predict the behaviour of humans by embracing the domain of uncertainty that is also essential in how we may conceptualise cognition and decision-making? Is quantum agent-based modelling feasible at some point? And how do the potential advantages compare to bringing it into other methods in the social sciences (e.g. choice models)?

A roadmap would include the following activities:

  • Conceptualise human decision-making and interactions in terms of quantum computing. What are promising avenues of the ideas presented here and possibly others?
  • Develop instruction sets/logical building blocks that are ontologically linked to decision-making in the social sciences. Connect to developments for higher-level programming languages for quantum computing.
  • Develop a first example. One could think of reproducing one of the traditional models: either an agent-based model, such as Schelling’s model of segregation or Traffic Basic, or a cellular automaton model, such as the Game of Life. The latter may be conceptualized with a relatively small number of cells and could be a valuable demonstration of the possibilities.
  • Develop quantum computing software for agent-based modelling, e.g., as a quantum extension for NetLogo, MESA, or for other agent-based modelling packages.

Let us become inspired to develop a more detailed roadmap for quantum computing for the social sciences. Who wants to join in making this dream a reality?

Notes

[1] https://newsroom.ibm.com/2022-11-09-IBM-Unveils-400-Qubit-Plus-Quantum-Processor-and-Next-Generation-IBM-Quantum-System-Two

[2] https://www.fastcompany.com/90992708/ibm-quantum-system-two

[3] https://www.ibm.com/roadmaps/quantum/

[4] https://github.com/Qiskit/qiskit-ibm-runtime

References

Blunt, Nick S., Joan Camps, Ophelia Crawford, Róbert Izsák, Sebastian Leontica, Arjun Mirani, Alexandra E. Moylett, et al. “Perspective on the Current State-of-the-Art of Quantum Computing for Drug Discovery Applications.” Journal of Chemical Theory and Computation 18, no. 12 (December 13, 2022): 7001–23. https://doi.org/10.1021/acs.jctc.2c00574.

Coccia, M., S. Roshani and M. Mosleh. “Evolution of Quantum Computing: Theoretical and Innovation Management Implications for Emerging Quantum Industry.” IEEE Transactions on Engineering Management, vol. 71, pp. 2270-2280, 2024. https://doi.org/10.1109/TEM.2022.3175633.

Di Meglio, Alberto, Karl Jansen, Ivano Tavernelli, Constantia Alexandrou, Srinivasan Arunachalam, Christian W. Bauer, Kerstin Borras, et al. “Quantum Computing for High-Energy Physics: State of the Art and Challenges.” PRX Quantum 5, no. 3 (August 5, 2024): 037001. https://doi.org/10.1103/PRXQuantum.5.037001.

Gilbert, N. Agent-Based Models. SAGE Publications Ltd, 2007. ISBN 978-1-4129-4964-4.

Hassija, V., Chamola, V., Saxena, V., Chanana, V., Parashari, P., Mumtaz, S. and Guizani, M. (2020), Present landscape of quantum computing. IET Quantum Commun., 1: 42-48. https://doi.org/10.1049/iet-qtc.2020.0027

Sutor, R. S. (2019). Dancing with Qubits: How quantum computing works and how it can change the world. Packt Publishing Ltd.


Chappin, E. & Polhill, G. (2024) Quantum computing in the social sciences. Review of Artificial Societies and Social Simulation, 24 Sep 2024. https://rofasss.org/2024/09/24/quant


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)