
A Reminder – computability limits on “vibe coding” ABMs (using LLMs to do the programming for us)

By Bruce Edmonds

Introduction

Machine-learning systems, including Large Language Models (LLMs), are algorithms trained on large datasets rather than something categorically different. Consequently, they inherit the standard theoretical and practical limitations that apply to all algorithmic methods. Here we look at the computational limits in terms of “Vibe coding” – where the LLM writes some of the code for an Agent-Based Model (ABM) in response to a descriptive prompt. Firstly, we review a couple of fundamental theorems in terms of producing or checking code relative to its descriptive specification. In general, this is shown to be impossible (Edmonds & Bryson 2004). However, this does not rule out the possibility that this could work with simpler classes of code that fall short of full Turing completeness (being able to mimic any conceivable computation). Thus, I recap a result that shows how simple a class of ABMs can be and still be Turing complete (and thus subject to the previous results) (Edmonds 2004). If you do not like discussion of proofs, I suggest you skip to the concluding discussion. More formal versions of the proofs can be found in the Appendix.

Descriptions of Agent-Based Models

When we describe code, the code should be consistent with that description. That is, the code should be one of the possible programs that fit that description. In this paper we are crucially concerned with the relationship between code and its description and the difficulty of passing from one to the other.

When we describe an ABM, we can do so in a number of different ways. We can do this with different degrees of formality: from chatty natural language to pseudo code to UML diagrams. The more formal the method of description, the less ambiguity there is concerning its meaning. We can also describe what we want at low or high levels. A low-level description specifies the detail of what bit of code does at each time – an imperative description. A high-level description specifies what the system should do as a whole or what the results should be like. These tend to be declarative descriptions.
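To make the contrast concrete, here is a hedged, hypothetical sketch. The conserved-wealth property and the give-one-unit rule are invented purely for illustration:

```python
# High-level, declarative: a property the results as a whole should satisfy.
# "After every step, the total wealth across all agents is conserved."
def wealth_is_conserved(before, after):
    return sum(before) == sum(after)

# Low-level, imperative: what one bit of code does at each time step.
def give_one_unit(stores, giver, receiver):
    if stores[giver] > 0:          # only give if the store is non-zero
        stores[giver] -= 1
        stores[receiver] += 1
    return stores

stores = [3, 0, 5]
before = list(stores)
give_one_unit(stores, 0, 1)
assert wealth_is_conserved(before, stores)   # the declarative property holds
```

The declarative description leaves the implementation entirely open; the imperative one pins it down step by step.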

Whilst a compiler takes a formal, low-level description of code, an LLM takes a high-level, informal description – in the former case the description is very prescriptive with the same description always producing the same code, but in the case of LLMs there are usually a great many sets of code that are consistent with the input prompt. In other words, the LLM makes a great many decisions for the user, saving time – decisions that a programmer would be forced to confront if using a compiler (Keles 2026).

Here, we are focusing on the latter case, when we use an LLM to do some or all of our ABM programming for us. We use high-level, informal natural language in order to direct the LLM as to what ABM (or aspect of ABM) we would like. Of course, one can be more precise in one’s use of language, but it will tend to remain at a fairly high level (if we are going to do a complete low-level description then we might as well write the code ourselves).

In the formal results below, we restrict ourselves to formal descriptions as this is necessary to do any proofs. However, what is true below for formal descriptions is also true for the wider class of any description, as one can always use natural language in a precise manner. For a bit more detail on what we mean here by a formal language, see the Appendix.

The impossibility of a general “specification compiler”

The dream is that one could write a description of what one would like and an algorithm, T, would produce code that fitted that description. However, to enable proofs to be used we need to formalize the situation so that the description is in some suitably expressive but formal language (e.g. a logic with enumerable expressions). This situation is illustrated in figure 1.

Figure 1. Automatically generating code from a specification.

Obviously, it is easy to write an impossible formal specification – one for which no code exists – so the question is whether there could be such an algorithm, T, that would give us code that fitted a specification when it does exist. The proof is taken from (Edmonds & Bryson 2004) and given in more formal detail in the Appendix.

The proof uses a version of Turing’s “halting problem” (Turing 1937). This is the problem of checking some code (which takes a number as an input parameter) to see if it would come to a halt (the program finishes) or go on for ever. The question here is whether there is any effective and systematic way of doing this. In other words, whether an “automatic program checker” is possible – a program, H, which takes two inputs: the program number, x, and a possible input, y, and then works out if the computation Px(y) ever ends.  Whilst in some cases spotting this is easy – e.g. trivial infinite loops – other cases are hard (e.g. testing the even numbers to find one that is not the sum of two prime numbers [1]).

For our purposes, let us consider a series of easier problems – what I call “limited halting problems”. This is the problem of checking whether programs, x, applied to inputs y ever come to an end, but only for x, y ≤ n, where n is a fixed upper limit. Imagine a big n × n table with the columns being the program numbers and the rows being the inputs. Each element is 0 if the combination never stops and 1 if it does. A series of simpler checking programs, Hn, would just look up the answers in the table as long as they had been filled in correctly. We know that these programs exist, since programs that implement simple lookup tables always exist and one of the possible n × n tables will be the right one for Hn. For each limited halting problem, we can write a formal specification, giving us a series of specifications (one for each n).
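Such an Hn really is nothing more than a table lookup; a minimal sketch follows. Note that the toy table below is made up purely to show the mechanics – the whole point of the argument is that no algorithm can fill in a correct table for us in general:

```python
# Sketch of a limited halting checker H_n as a pure lookup table.
def make_limited_halting_checker(table):
    """table[x][y] is 1 if P_x(y) halts, 0 otherwise, for x, y <= n."""
    def H_n(x, y):
        return table[x - 1][y - 1]   # a pure table lookup, nothing clever
    return H_n

# A made-up 3x3 table purely for illustration (NOT real halting data):
toy_table = [
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
]
H_3 = make_limited_halting_checker(toy_table)
assert H_3(1, 2) == 0
assert H_3(2, 3) == 1
```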

Now imagine that we had a general-purpose specification compiling program, T, as described above and illustrated in Figure 1. Then, we could do the following:

  1. work out max(x,y)
  2. given any computation Px(y) we could construct the specification for the limited halting problem with index, max(x,y)
  3. then we could use T to construct some code for Hn and
  4. use that code to see if Px(y) ever halted.

Taken together, these steps (1-4) can be written as a new piece of computer code that would solve the general halting problem. However, we know this is impossible (Turing 1937), therefore no general compiling program like T above can exist.
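The four steps can be sketched as code. Everything here is necessarily hypothetical – T, the spec builder, and the runner are stand-ins, since the whole argument is that no real T can exist:

```python
# If a specification compiler T existed, this function would decide the
# general halting problem -- which is impossible, so T cannot exist.
def would_solve_halting(x, y, T, spec_for_limited_halting, run):
    n = max(x, y)                          # step 1: work out max(x, y)
    spec = spec_for_limited_halting(n)     # step 2: build the spec S_Hn
    H_n = T(spec)                          # step 3: T gives us code for H_n
    return run(H_n, x, y)                  # step 4: does P_x(y) halt?

# Purely to show the plumbing, wire in toy stand-ins:
toy_T = lambda spec: spec["table"]
toy_spec = lambda n: {"n": n, "table": {(1, 2): True}}
halts = would_solve_halting(1, 2, toy_T, toy_spec,
                            lambda H, x, y: H[(x, y)])
assert halts is True
```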

The impossibility of a general “code checker”

The checking problem is apparently less ambitious than the programming problem – here we are given a program and a specification and have ‘only’ to check whether they correspond.  That is whether the code satisfies the specification.

Figure 2. Algorithmically checking if some code satisfies a descriptive specification.

Again, the answer is in the negative.  The proof to this is similar. If there were a checking program C that, given a program and a formal specification would tell us whether the program met the specification, we could again solve the general halting problem.  We would be able to do this as follows:

  1. work out the maximum of x and y (call this m);
  2. construct a sequence of programs implementing all possible finite lookup tables of type: m×m → {0,1};
  3. test these programs one at a time using C to find one that satisfies SHm (we know there is at least one);
  4. use this program to compute whether Px(y) halts. 

Thus, there is no general specification checking program, like C.
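The key step in this reduction – step 2 – is an exhaustive but finite enumeration, which can be sketched directly. The checking program C itself is, of course, hypothetical:

```python
# If a checking program C existed, searching all m x m lookup tables for
# one that C accepts against S_Hm would again decide the halting problem.
from itertools import product

def all_lookup_tables(m):
    """Enumerate every function {1..m} x {1..m} -> {0, 1} as a dict."""
    cells = [(x, y) for x in range(1, m + 1) for y in range(1, m + 1)]
    for values in product([0, 1], repeat=len(cells)):
        yield dict(zip(cells, values))

# There are 2**(m*m) such tables -- finitely many, so the search halts
# as soon as C accepts one (we know at least one correct table exists).
assert sum(1 for _ in all_lookup_tables(2)) == 2 ** 4
```

The search is guaranteed to terminate precisely because the table PHm is known to be somewhere in this finite enumeration.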

Thus, we can see that there are some perfectly well-formed specifications where we know code exists that would comply with the specification, but where there is no algorithm, however clever, that will always take us from the specification to the code. Since trained neural nets are a kind of clever algorithm, they cannot do this either.

What about simple Agent-Based Models?

To illustrate how simple such systems can be, I defined a particular class of particularly simple multi-agent system, called “GASP” systems (Giving Agent System with Plans).  These are defined as follows.  There are n agents, labelled: 1, 2, 3, etc., each of which has an integer store which can change and a finite number of simple plans (which do not change).  Each time interval the store of each agent is incremented by one.  Each plan is composed of: a (possibly empty) sequence of ‘give instructions’ and finishes with a single ‘test instruction’.  Each ‘give instruction’, Ga, has the effect of giving 1 unit to agent a (if the store is non-zero).  The ‘test instruction’ is of the form JZa,p,q, which has the effect of jumping (i.e. designating the plan that will be executed next time period) to plan p if the store of agent a is zero and plan q otherwise.  This class is described more in (Edmonds 2004). This is illustrated in Figure 3.

Figure 3. An Illustration of a “GASP” multi-agent system.

Thus ‘all’ that happens in this class of GASP systems is the giving of tokens with value 1 to other agents and the testing of other agents’ store to see if they are zero to determine the next plan.  There is no fancy communication, learning or reasoning done by agents. Agents have fixed and very simple plans and only one variable. However, this class of agent can be shown to be Turing Complete. The proof is taken from (Edmonds & Bryson 2004).
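A minimal simulator for this class makes its simplicity apparent. Details the text leaves open (agents acting in index order, one plan executed per agent per tick, 0-indexed agents and plans) are assumptions of this sketch:

```python
def step(stores, plans, current):
    """Run one time interval of a GASP system.

    stores : list of integer stores, one per agent (0-indexed here)
    plans  : plans[a][p] = (gives, (test_agent, plan_if_zero, plan_else))
    current: current plan index for each agent
    """
    for a in range(len(stores)):            # every store gains 1 unit
        stores[a] += 1
    for a in range(len(stores)):            # agents act in index order
        gives, (t, p, q) = plans[a][current[a]]
        for target in gives:                # give instructions G_target
            if stores[a] > 0:
                stores[a] -= 1
                stores[target] += 1
        current[a] = p if stores[t] == 0 else q   # test instruction JZ

# Two agents: agent 0 always gives its unit to agent 1 and stays on plan 0.
stores, current = [0, 0], [0, 0]
plans = [
    [([1], (0, 0, 0))],     # agent 0: give to agent 1, jump to plan 0
    [([],  (0, 0, 0))],     # agent 1: keep everything
]
step(stores, plans, current)
step(stores, plans, current)
assert stores == [0, 4]     # agent 0 stays at zero, agent 1 accumulates
```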

The detail of this proof is somewhat tedious, but basically involves showing that any computation (any Turing Machine) can be mapped to a GASP machine using a suitable effective and systematic mapping. This is done in three stages. That is, for any particular Turing Machine:

  1. Create an equivalent “Unlimited Register Machine” (URM), with an indefinitely large (but finite) number of integer variables and four basic kinds of instruction (add one to a variable, set a variable to 0, copy the number in a variable to another, jump to a given instruction if two specified variables are equal). This is known to be possible (Cutland 1980, p.57).
  2. Create an equivalent “AURA” machine for this URM machine (Moss & Edmonds 1994)
  3. Create an equivalent “GASP” ABM for this AURA system.

This is unsurprising – many systems that allow for indefinite storage, basic arithmetic operations and a kind of “IF” statement are Turing Complete (see any textbook on computability, e.g. Cutland 1980).
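A tiny interpreter for the four URM instruction types named in step 1 shows just how little machinery is involved. The tuple encoding, 0-indexed instruction pointer, and step limit are conveniences of this sketch:

```python
def run_urm(program, registers, max_steps=10_000):
    """Interpret a URM program given as a list of instruction tuples."""
    regs = dict(registers)
    pc = 0                                   # 0-indexed instruction pointer
    steps = 0
    while pc < len(program) and steps < max_steps:
        op = program[pc]
        if op[0] == "S":                     # S n: increment R_n
            regs[op[1]] = regs.get(op[1], 0) + 1
        elif op[0] == "Z":                   # Z n: set R_n to 0
            regs[op[1]] = 0
        elif op[0] == "C":                   # C n,m: copy R_n into R_m
            regs[op[2]] = regs.get(op[1], 0)
        elif op[0] == "J":                   # J n,m,q: jump if R_n == R_m
            if regs.get(op[1], 0) == regs.get(op[2], 0):
                pc = op[3]
                steps += 1
                continue
        pc += 1
        steps += 1
    return regs

# Addition R3 := R1 + R2, a standard textbook exercise:
add = [
    ("C", 1, 3),              # R3 := R1
    ("J", 2, 4, 99),          # if R2 == R4 (counter exhausted) halt
    ("S", 3),                 # R3 += 1
    ("S", 4),                 # R4 += 1 (counter)
    ("J", 1, 1, 1),           # unconditional jump back to the test
]
result = run_urm(add, {1: 2, 2: 3})
assert result[3] == 5
```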

This example of a class of GASP agents shows just how simple an ABM can be and still be Turing Complete, and subject to the impossibility of a general compiler (like T above) or checker (like C above), however clever these might be.

Discussion

What the above results show is that:

  1. There is no algorithm that will take any formal specification and give you code that satisfies it. This includes trained LLMs.
  2. There is no algorithm that will take any formal specification and some code and then check whether the code satisfies that specification. This includes trained LLMs.
  3. Even apparently simple classes of agent-based model are capable of doing any computation and so there will be examples where the above two negative results hold.

These general results do not mean that there are not special cases where programs like T or C are possible (e.g. compilers). However, as we see from the example ABMs above, it does not take much capability in an ABM to make this impossible for high-level descriptions. Using informal, rather than formal, language does not escape these results, but merely adds more complication (such as vagueness).

In conclusion, this means that there will be kinds of ABMs for which no algorithm can turn descriptions into the correct, working code [2]. This does not mean that LLMs can’t be very good at producing working code from the prompts given to them. They might (in some cases) be better than the best humans at this but they can never be perfect. There will always be specifications where they either cannot produce the code or produce the wrong code.

The underlying problem is that coding is very hard, in general. There are no practical, universal methods that always work – even when it is known to be possible. Suitably-trained LLMs, human ingenuity, and various methodologies can help, but none will be a panacea.

Notes

1. Which would disprove “Goldbach’s conjecture”, whose status is still unknown despite centuries of mathematical effort. If there is such a number it is known to be more than 4×10^17.

2. Of course, if humans are limited to effective procedures – ones that could be formally written down as a program (which seems likely) – then humans are similarly limited.

Acknowledgements

Many thanks for the very helpful comments on earlier drafts of this by Peer-Olaf Siebers, Luis Izquierdo and other members of the LLM4ABM SIG. Also, thanks to the participants of AAMAS 2004 for their support and discussion of the formal results when they were originally presented.

Appendix

Formal descriptions

The above proofs rely on the fact that the descriptions are “recursively enumerable”, as in the construction of Gödel (1931). That is, one can index the descriptions (1, 2, 3…) in such a way that one can reconstruct the description from the index. Most formal languages, including those compilers take, computer code, and formal logic expressions, are recursively enumerable since they can be constructed from an enumerable set of atoms (e.g. variable names) using a finite number of formal composition rules (e.g. if A and B are allowed expressions, then so are A → B, A & B, etc.). Any language that can be specified using syntax diagrams (e.g. using Backus–Naur form) will be recursively enumerable in this sense.

Producing code from a specification

The ‘halting problem’ is an undecidable problem (Turing 1937) – that is, it is a question for which there does not exist a program that will answer it, say outputting 1 for yes and 0 for no.  This is the problem of whether a given program will eventually come to a halt with a given input.  In other words, whether Px(y), program number x applied to input y, ever finishes with a result or whether it goes on for ever.  Turing proved that there is no such program (Turing 1937).

Define a series of problems, LH1, LH2, etc., which we call ‘limited halting problems’.  LHn is the problem of ‘whether a program with number ≤ n and an input ≤ n will ever halt’.  The crucial fact is that each of these is computable, since each can be implemented as a finite lookup table.  Call the programs that implement these lookup tables: PH1, PH2, etc. respectively.  Now if the specification language can specify each such program, one can form a corresponding enumeration of formal specifications: SH1, SH2, etc.

The question now is whether there is any way of computationally finding PHn from the specification SHn.  But if there were such a way we could solve Turing’s general halting problem in the following manner: first find the maximum of x and y (call this m); then compute PHm from SHm; and finally use PHm to compute whether Px(y) halts.  Since we know the general halting problem is not computable, we also know that there is no effective way of discovering PHn from SHn even though for each SHn we know an appropriate PHn exists!

Thus, the only question left is whether the specification language is sufficiently expressive to enable SH1, SH2, etc. to be formulated.  Unfortunately, the construction in Gödel’s famous incompleteness proof (Gödel 1931) guarantees that any formal language that can express even basic arithmetic properties will be able to formulate such specifications.

Checking code meets a specification

To demonstrate this, we can reuse the limited halting problems defined in the last subsection.  The counter-example is whether one can computationally check (using C) that a given program P meets the specification SHn.  In this case we will limit ourselves to programs, P, that implement n×n finite lookup tables with entries in {0,1}.

Now we can see that if there were a checking program C that, given a program and a formal specification, would tell us whether the program met the specification, we could again solve the general halting problem.  We would be able to do this as follows: first find the maximum of x and y (call this m); then construct a sequence of programs implementing all possible finite lookup tables of type: m×m → {0,1}; then test these programs one at a time using C to find one that satisfies SHm (we know there is at least one: PHm); and finally use this program to compute whether Px(y) halts.  Thus, there is no such program, C.

Showing GASP ABMs are Turing Complete

The class of Turing machines is computationally equivalent to that of unlimited register machines (URMs) (Cutland 1980, p.57).  That is, the class of programs with 4 types of instructions which refer to registers, R1, R2, etc. which hold positive integers.  The instruction types are: Sn, increment register Rn by one; Zn, set register Rn to 0; Cn,m, copy the number from Rn to Rm (erasing the previous value); and Jn,m,q, if Rn=Rm jump to instruction number q.  This is equivalent to the class of AURA programs which just have two types of instruction: Sn, increment register Rn by one; and DJZn,q, decrement Rn if this is non-zero and then, if the result is zero, jump to instruction step q (Moss & Edmonds 1994).   Thus we only need to prove that given any AURA program we can simulate its effect with a suitable GASP system.  Given an AURA program of m instructions: i1, i2, …, im which refers to registers R1, …, Rn, we construct a GASP system with n+2 agents, each of which has m plans.   Agent An+1 is basically a dump for discarded tokens and agent An+2 remains zero (it has the single plan: (Gn+1, JZn+2,1,1)). Plan s (s ∈ {1,…,m}) in agent number a (a ∈ {1,…,n}) is determined as follows: there are four cases depending on the nature of instruction number s:

  1. is is Sa: plan s is (JZa,s+1,s+1);
  2. is is Sb where b ≠ a: plan s is (Gn+1, JZa,s+1,s+1);
  3. is is DJZa,q: plan s is (Gn+1, Gn+1, JZa,q,s+1);
  4. is is DJZb,q where b ≠ a: plan s is (Gn+1, JZa,q,s+1).

Thus, each plan s in each agent mimics the effect of instruction s in the AURA program with respect to the particular register that the agent corresponds to.
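The four-case construction can be written down directly as a translation function. The encoding below (instruction tuples, 1-indexed agents and plans, the dump being agent n+1) is an assumption of this sketch:

```python
def aura_to_gasp_plans(program, n):
    """Build plan s for agent a from AURA instruction s, per cases 1-4.

    program: list of ("S", b) or ("DJZ", b, q) instructions (q is 1-indexed)
    n      : number of registers; agent n+1 is the token dump
    Returns plans[a][s-1] = (gives, (test_agent, plan_if_zero, plan_else)).
    """
    plans = {a: [] for a in range(1, n + 1)}
    for a in range(1, n + 1):
        for s, instr in enumerate(program, start=1):
            if instr[0] == "S" and instr[1] == a:        # case 1
                plans[a].append(([], (a, s + 1, s + 1)))
            elif instr[0] == "S":                        # case 2
                plans[a].append(([n + 1], (a, s + 1, s + 1)))
            elif instr[0] == "DJZ" and instr[1] == a:    # case 3
                plans[a].append(([n + 1, n + 1], (a, instr[2], s + 1)))
            else:                                        # case 4
                plans[a].append(([n + 1], (a, instr[2], s + 1)))
    return plans

# Two registers (n=2, so the dump is agent 3), two instructions:
plans = aura_to_gasp_plans([("S", 1), ("DJZ", 2, 1)], n=2)
assert plans[1][0] == ([], (1, 2, 2))          # case 1 for agent 1
assert plans[2][1] == ([3, 3], (2, 1, 3))      # case 3 for agent 2
```

Note how case 3 gives away two tokens: one to cancel the automatic increment each tick and one to achieve the net decrement.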


References

Cutland, N. (1980) Computability: An Introduction to Recursive Function Theory. Oxford University Press.

Edmonds, B. (2004) Using the Experimental Method to Produce Reliable Self-Organised Systems. In Brueckner, S. et al. (eds.) Engineering Self Organising Systems: Methodologies and Applications, Springer, Lecture Notes in Artificial Intelligence, 3464:84-99. http://cfpm.org/cpmrep131.html

Edmonds, B. & Bryson, J. (2004) The Insufficiency of Formal Design Methods – the necessity of an experimental approach for the understanding and control of complex MAS. In Jennings, N. R. et al. (eds.) Proceedings of the 3rd International Joint Conference on Autonomous Agents & Multi Agent Systems (AAMAS’04), July 19-23, 2004, New York. ACM Press, 938-945. http://cfpm.org/cpmrep128.html

Gödel, K. (1931), Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I, Monatshefte für Mathematik und Physik, 38(1):173–198. http://doi.org/10.1007/BF01700692

Keles, A. (2026) LLMs could be, but shouldn’t be compilers. Online https://alperenkeles.com/posts/llms-could-be-but-shouldnt-be-compilers/ (viewed 11 Feb 2026)

Moss, S. and Edmonds, B. (1994) Economic Methodology and Computability: Some Implications for Economic Modelling, IFAC Conf. on Computational Economics, Amsterdam, 1994. http://cfpm.org/cpmrep01.html

Turing, A.M. (1937), On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42: 230-265. https://doi.org/10.1112/plms/s2-42.1.230


Edmonds, B. (2026) A Reminder – computability limits on “vibe coding” ABMs (using LLMs to do the programming for us). Review of Artificial Societies and Social Simulation, 12 Feb 2026. https://rofasss.org/2026/02/12/vibe


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Why Object-Oriented Programming is not the best method to implement Agent-Based Models

By Martin Hinsch

Research Department of Genetics, Evolution and Environment
University College London

Introduction

A considerable part of the history of software engineering consists of attempts to make the complexity of software systems manageable in the sense of making them easier to implement, understand, modify, and extend. An important aspect of this is the separation of concerns (SoC, Dijkstra 1982). SoC reduces complexity by dividing the implementation of a system into (presumably simpler) problems that can be solved without having to spend too much thought on other aspects of the system. Architecturally, SoC is accomplished through modularity and encapsulation. This means that parts of the system that have strong inter-dependencies are put together into a “module” (in the widest sense) that presents only aspects of itself to the outside that are required to interact with other modules. This is based on the fundamental assumption that the visible behaviour of a component (its interface) is simpler than its potentially complex inner workings which can be ignored when interacting with it.

The history of Object-Oriented Programming (OOP) is complicated and there are various flavours and philosophies of OOP (Black 2013). However, one way to see Object Orientation (OO) is as a coherent, formalised method to ensure modularisation and encapsulation. In OOP data and functions to create and manipulate that data are combined in objects. Depending on programming language, some object functions (methods) and properties can be made inaccessible to users of the object, thereby hiding internal complexity and presenting a simplified behaviour to the outside. Many OO languages furthermore allow for polymorphism, i.e. different types of objects can have the same interface (but different internal implementations) and can therefore be used interchangeably.

After its inception in the 1960s (Dahl and Nygaard 1966) OOP gained popularity throughout the 80s and 90s, to the point that many established programming languages were retrofitted with language constructs that enabled OOP (C++, Delphi, OCaml, CLOS, Visual Basic, …) and new languages were designed based on OOP principles (Smalltalk, Python, Ruby, Java, Eiffel, …). By the mid-90s many computer science departments taught OOP not just as one useful paradigm among several, but as the paradigm that would make all other methods obsolete.

This is the climate in which agent-based or individual-based modelling (ABM) emerged as a new modelling methodology. In ABM the behaviour of a system is not modelled directly; instead the model consists of (many, similar or identical) individual components. The interactions between these components lead to the emergence of global behaviour.

While the origins of the paradigm reach further back, it only started to become popular in the 90s (Bianchi and Squazzoni 2015). As the method requires programming expertise, which in academia was rare outside of computer science, the majority of early ABMs were created by or with the help of computer scientists, who in turn applied the programming paradigm that was most popular at the time. At first glance OOP also seems to be an excellent fit for ABM – agents are objects, their state is represented by object variables, and their behaviour by methods. It is therefore no surprise that OOP has become and remained the predominant way to implement ABMs (even after the enthusiasm for OOP has waned to some degree in mainstream computer science).

In the following I will argue that OOP is not only not necessarily the best method to write ABMs, but that it has, in fact, some substantial drawbacks. More specifically, I think that the claim that OOP is uniquely suited for ABM is based on a conceptual confusion that can lead to a number of bad modelling habits. Furthermore the specific requirements of ABM implementations do not mesh well with an OOP approach.

Sidenote: Strictly speaking, for most languages we have to distinguish between objects (the entities holding values) and classes (the types that describe the makeup and functionality of objects). This distinction is irrelevant for the point I am making, therefore I will only talk about objects.

Conceptual confusion

Just about every introduction to OOP I have come across starts with a simple toy example that demonstrates the core principles of the method. Usually a few classes corresponding to everyday objects from the same category are declared (e.g. animal, cat, dog or vehicle, car, bicycle). These classes have methods that usually correspond to activities of these objects (bark, meow, drive, honk).

Beyond introducing the basic syntax and semantics of the language constructs involved, these introductions also transport a message: OOP is easy and intuitive because OOP objects are just representations of objects from the real world (or the problem domain). OOP is therefore simply the process of translating objects in the problem domain into software objects.
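For readers who have not seen it, the ubiquitous toy example looks roughly like this (the class and method names are the generic textbook ones, not taken from any particular tutorial):

```python
class Animal:
    def __init__(self, name):
        self.name = name
    def speak(self):              # shared, polymorphic interface
        raise NotImplementedError

class Dog(Animal):
    def speak(self):
        return f"{self.name} says woof"

class Cat(Animal):
    def speak(self):
        return f"{self.name} says meow"

# Different types share one interface and can be used interchangeably:
pets = [Dog("Rex"), Cat("Misu")]
assert [p.speak() for p in pets] == ["Rex says woof", "Misu says meow"]
```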

OOP objects are not representations of real-world objects

While this approach makes the concept of OOP more accessible, it is misleading. At its core the motivation behind OOP is the reduction of complexity by rigorous application of some basic tenets of software engineering (see Introduction). OOP objects therefore are not primarily defined by their representational relationship to real-world objects, but by their functionality as modules in a complicated machine.

For programmers, this initial misunderstanding is harmless as they will undergo continued training. For nascent modellers without computer science background, however, these simple explanations often remain the extent of their exposure to software engineering principles, and the misunderstanding sticks. This is unfortunately further reinforced by many ABM tutorials. Similar to introductions to OOP they present the process of the implementation of an ABM as simply consisting of defining agents as objects, with object properties that represent the real-world entities’ state and methods that implement their behaviour.

At this point a second misunderstanding almost automatically follows. By emphasising a direct correspondence between real-world entities and OOP objects, it is often implied (and sometimes explicitly stated) that modelling is, in fact, the process of translating from one to the other.

OOP is not modelling

As mentioned above, this is a misinterpretation of the intent behind OOP – to reduce software complexity. Beyond that, however, it is also a misunderstanding of the process of modelling. Unfortunately, it connects very well with a common “lay theory of modelling” that I have encountered many times when talking to domain experts with no or little experience with modelling: the idea that a model is a representation of a real system where a “better” or “more correct” representation is a better model.

Models are not (simply) representations

There are various ways to use a model and reasons to do it (Epstein 2008), but put in the most general terms, a (simulation) model is an artificial (software) system that in one way or other teaches us something about a real system that is similar in some aspects (Noble 1997). Importantly, however, the question or purpose for which the model was built determines which aspects of the real system will be part of the model. As a corollary, even given the same real-world system, two models with different questions can look very different, to the point that they use different modelling paradigms (Hinsch and Bijak 2021).

Experienced modellers are aware of all this, of course, and will not be confused by objects and methods. For novices and domain experts without that experience, however, OOP and the way it is taught in connection with ABM can lead to a particular style of modelling where first, all entities in the system are captured as agents, and second, these agents are being equipped with more and more properties and methods, “because it is more realistic”. 

An additional issue with this is that it puts the focus of the modelling process on entities. The direct correspondence between nouns in our (natural language-based) description of the model and classes in our object-oriented implementation makes it very tempting to think about the model solely in terms of entities and their properties.

ABMs are not (just) collections of entities

There are other reasons to build a simulation model, but in most cases the dynamic behaviour of the finished model will be crucial. The reason to use an ABM as opposed to, say, a differential equation model, is not that the system is composed of entities, but that the behaviour of the system depends in such a way on interactions between entities that it cannot be reduced to aggregate population behaviour. The “interesting” part of the model is therefore not the agents per se, but their behaviour and the interactions between them. It is only possible to understand the model’s macroscopic behaviour (which is often the goal of ABM) by thinking about it in terms of microscopic interactions. When creating the model it is therefore crucial to think not (only) about which entities are part of the system, but primarily which entity-level interactions and behaviours are likely to affect the macroscopic behaviour of interest.

To summarise the first part, OOP is a software engineering methodology, not a way to create models. This unfortunately often gets lost in the way it is commonly taught (in particular in connection with ABM), so that OOP can easily lead to a mindset that sees models as representations, emphasises “realism”, and puts the focus on entities rather than the more important interactions.

Practical considerations

But assuming a seasoned modeller who understands all this – surely there would be no harm in choosing an OOP implementation?

At first approximation this is certainly true. The points discussed above apply to the modelling process, so assuming all of the mentioned pitfalls are avoided, the implementation should only be a matter of translating a formal structure into working program code. As long as the code is exactly functionally equivalent to the formal model, it should not matter which programming paradigm is used.

In reality things are a little bit more complicated, however. For a number of reasons model code has different properties and requirements to “normal” code. These combine to make OOP not very suitable for the implementation of ABMs.

OOP does not reduce complexity of an ABM

Any non-trivial piece of software is too complicated to understand all at once. At the same time, we usually want its behaviour to be well-defined, well-understood, and predictable. OOP is a way to accomplish this by partitioning the complexity into manageable pieces. By composing the program of simple(r) modules, which in turn have well-defined, well-understood, and predictable behaviour and which interact in a simple, predictable manner, the complexity of the system remains manageable and understandable.

An ABM has parts that we want to be well-understood and predictable as well, such as parameterisation, data output, visualisation, etc. For these “technical” parts of the simulation program, the usual rules of software engineering apply, and OOP can be a helpful technique. The “semantic” part, i.e. the implementation of the model itself is different, however. By definition, the behaviour of a model is unpredictable and difficult to understand. Furthermore, in an ABM the complexity of the model behaviour is the result of the (non-linear) interactions between its components – the agents – which themselves are often relatively simple. The benefit of OOP – a reduction in complexity by hiding it behind simple object interfaces – therefore does not apply for the semantic part of the implementation of an ABM.

OOP makes ABMs more difficult to read and understand

There is more, however. Making code easy to read and understand is an important part of good practice in programming. This holds even more so for ABM code.

First, most ordinary application code is constructed to produce very specific runtime behaviour. To put it very simply: if the program does not show that behaviour, we have found an error; if it does, our program is by definition correct. For ABM code the behaviour cannot be known in advance (otherwise we would not need to simulate). Some of it can be tested by running edge cases with known behaviour, but to a large degree making sure that the simulation program is implemented correctly has to rely on inspection of the source code.
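For example (a hypothetical toy model invented for illustration, not taken from any real ABM), an edge case with known behaviour can be pinned down in an automated test, while the interesting parameter regions still require inspection of the source:

```python
import random

def step(positions, step_size, rng):
    """Hypothetical toy ABM step: each agent moves one random step left or right."""
    return [x + rng.choice((-1, 1)) * step_size for x in positions]

# Edge case with known behaviour: a step size of zero must leave everyone in place.
rng = random.Random(42)
start = [0, 5, 10]
assert step(start, 0, rng) == start
```

For non-degenerate step sizes, however, the outcome is precisely what we run the simulation to find out, so no such assertion can be written in advance.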

Second, for more complicated models such as ABMs the simulation program is very rarely just the translation of a formal specification. Language is inherently ambiguous and to my knowledge there is no practical mathematical notation for ABMs (or software in general). Given further factors such as the turnaround times of scientific work and documentation drift, it is often unavoidable that the code remains the ultimate authority on what the model does. In fact, good arguments have been made to embrace this reality and its potential benefits (Meisser 2016), but even so we have to live with the reality that for most ABMs, most of the time, the code is the model.

Finally, an important part of the modelling process is working out the mechanisms that lead to the observed behaviour. This involves trying to relate the observed model behaviour to the effect of agent interactions, often by modifying parameter values or making small changes to the model itself. During this process, being able to understand at a glance what a particular piece of the model does can be very helpful.

For all of these reasons, readability and clarity are paramount for ABM code. Implementing the model in an OO manner directly contradicts this requirement. We would try to implement most functionality as methods of an object. The processes that make up the dynamic behaviour of the model – the interactions between the agents – are then split into methods belonging to various objects. Someone who tries to understand – or modify – a particular aspect of the behaviour then has to jump between these methods, often distributed over different files, and assemble in their mind the interactions that actually take place. Furthermore, encapsulation, i.e. the hiding of complexity behind simple interfaces, can make ABM code more difficult to understand by giving a misleading impression of simplicity. If we encounter agent.get_income(), for example, we might be accessing a simple state variable or we might get the result of a complex calculation. For normal code this would not make a difference, since the potential complexity hidden behind that function call should not affect the caller. For ABM code, however, the difference might be crucial.
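To make the get_income() point concrete with a hypothetical sketch (both classes invented for this example): two agents can expose an identical interface while hiding very different amounts of model logic behind it.

```python
class WageAgent:
    """Income is a plain state variable."""
    def __init__(self, wage):
        self.wage = wage

    def get_income(self):
        return self.wage


class FarmAgent:
    """Income is a derived quantity; the call site looks exactly the same."""
    def __init__(self, land_ha, yield_per_ha, price):
        self.land_ha = land_ha
        self.yield_per_ha = yield_per_ha
        self.price = price

    def get_income(self):
        # a hidden calculation that may matter for the model's semantics
        return self.land_ha * self.yield_per_ha * self.price
```

At the call site, agent.get_income() reads identically for both, which is exactly the hidden difference that can be crucial when the code is the model.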

To sum up the second part – due to the way complexity arises in ABMs an OOP implementation does not lead to simplification, but on the contrary can make the code more difficult to understand and maintain.

Conclusion and discussion

Obviously none of the points mentioned above are absolutes, and excellent models have been created using object-oriented languages and implementation principles. However, I would like to argue that the current state of affairs, where object-orientation is uncritically presented as the best or even the only way to implement agent-based models, does on average lead to worse models and worse model implementations. I think that in the future any beginner’s course on agent-based modelling should at least:

  • Clarify the difference between model and implementation.
  • Show examples of the same model implemented according to a number of different paradigms.
  • Emphasise that ABMs are about interactions, not entities.

Concerning best practices for implementation, I think readability is the best guideline. Personally, I have found it useful to implement agents as “shallow” objects, with the rule of thumb that only functions that a) have an obvious meaning and b) affect only the agent in question become methods implemented in the same place as the agent definition. Everything else is implemented as free functions, which can then be sorted into files by process, e.g. ’reproduction’ or ’movement’. This avoids philosophical problems – does infection in a disease model, for example, belong to the agents, some environment object or maybe even a disease object? But above all it makes it easy to quickly find and understand a specific aspect of the model.
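A minimal sketch of this “shallow object plus free functions” layout (all names hypothetical) might look as follows, with the infection process living in its own file rather than on any particular class:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    x: int
    infected: bool = False

    def is_neighbour(self, other, radius=1):
        # obvious meaning, concerns only this agent (and a candidate) -> a method
        return abs(self.x - other.x) <= radius


# The infection process as a free function, e.g. in a file 'infection.py',
# sidestepping the question of which object 'owns' infection.
def infect(agents, radius=1):
    newly = [a for a in agents
             if not a.infected
             and any(a.is_neighbour(b, radius) for b in agents if b.infected)]
    for a in newly:
        a.infected = True
```

Anyone looking for how infection works in this model opens one file and reads one function, rather than reassembling the process from methods scattered across classes.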

If, at the same time, the model code is kept as independent as possible of the parts of the code that manage technical infrastructure (such as parameter loading or the GUI), we can maintain the implementation of the model (and only the model) as a self-contained entity in a form that is optimised for clarity and readability.

References

Bianchi, Federico, and Flaminio Squazzoni. 2015. “Agent-Based Models in Sociology.” Wiley Interdisciplinary Reviews: Computational Statistics 7 (4): 284–306. https://doi.org/10.1002/wics.1356.

Black, Andrew P. 2013. “Object-Oriented Programming: Some History, and Challenges for the Next Fifty Years.” Information and Computation, Fundamentals of Computation Theory, 231 (October): 3–20. https://doi.org/10.1016/j.ic.2013.08.002.

Dahl, Ole-Johan, and Kristen Nygaard. 1966. “SIMULA: An ALGOL-Based Simulation Language.” Commun. ACM 9 (9): 671–78. https://doi.org/10.1145/365813.365819.

Dijkstra, Edsger W. 1982. “On the Role of Scientific Thought.” In Selected Writings on Computing: A Personal Perspective, edited by Edsger W. Dijkstra, 60–66. New York, NY: Springer. https://doi.org/10.1007/978-1-4612-5695-3_12.

Epstein, Joshua M. 2008. “Why Model?” Journal of Artificial Societies and Social Simulation 11 (4): 12. https://doi.org/10.13140/2.1.5032.9927.

Hinsch, Martin, and Jakub Bijak. 2021. “Principles and State of the Art of Agent-Based Migration Modelling.” In Towards Bayesian Model-Based Demography: Agency, Complexity and Uncertainty in Migration Studies. Methodos Series 17, 33-49.

Meisser, Luzius. 2016. “The Code Is the Model.” International Journal of Microsimulation 10 (3): 184–201. https://doi.org/10.34196/ijm.00169.

Noble, Jason. 1997. “The Scientific Status of Artificial Life.” In Poster Presented at the Fourth European Conference on Artificial Life (ECAL97), Brighton, UK.

Hinsch, M.(2025) Why Object-Oriented Programming is not the best method to implement Agent-Based models. Review of Artificial Societies and Social Simulation, 3 Feb 2026. https://rofasss.org/2026/02/03/oop


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Nigel Gilbert

By Corinna Elsenbroich & Petra Ahrweiler

The first piece on winners of the European Social Simulation Association’s Rosaria Conte Outstanding Contribution Award for Social Simulation.

“Gilbert, a former sociologist of science, has been one of the chief links in Britain between computer scientists and sociologists of science” (Collins 1995, p. 294).

Nigel has always been and still is a sociologist – not only of science, but also of technology, innovation, methods and many other subfields of sociology with important contributions in theory, empirical research and sociological methods.

He has pioneered a range of sociological areas such as Sociology of Scientific Knowledge, Secondary Analysis of Government Datasets, Access to Social Security Information, Social Simulation, and Complexity Methods of Policy Evaluation.

Collins is right, however, that Nigel is one of the chief links between sociologists and computer scientists in the UK and beyond. This led to him becoming the first practising social scientist to be elected a Fellow of the Royal Academy of Engineering (1999). As the principal founding father of agent-based modelling as a method for the social sciences in Europe, he initiated, promoted and institutionalised a completely novel way of doing social science through the Centre for Research in Social Simulation (CRESS) at the University of Surrey and the Journal of Artificial Societies and Social Simulation (JASSS), and founded Sociological Research Online (1993) and Social Research Update. Nigel has hundreds of publications on all aspects of social simulation and seminal books such as: Simulating Societies: The Computer Simulation of Social Phenomena (Gilbert & Doran 1994), Artificial Societies: The Computer Simulation of Social Life (Gilbert & Conte 1995), Simulation for the Social Scientist (Gilbert & Troitzsch 2005), and Agent-Based Models (Gilbert 2019). His entrepreneurial spirit and acumen resulted in over 25 large project grants (across the UK and Europe), often in close collaboration with policy and decision makers to ensure real-life impact, a simulation platform on innovation networks called SKIN, and a spin-off company, CECAN Ltd, which trains practitioners in complexity methods and brings their use to policy evaluation projects.

Nigel is a properly interdisciplinary person, turning to the sociology of scientific knowledge in his PhD under Michael Mulkay after graduating in Engineering from Cambridge’s Emmanuel College. He joined the Sociology Department at the University of Surrey in 1976 where he became professor of sociology in 1991. Nigel was appointed Commander of the Order of the British Empire (CBE) in 2016 for contributions to engineering and social sciences.

He was the second president of the European Social Simulation Association (ESSA), the originator of the SIMSOC mailing list, launched and edited the Journal of Artificial Societies and Social Simulation from 1998 to 2014, and was the first holder of the Rosaria Conte Outstanding Contribution Award for Social Simulation in 2016, a unanimous decision by the ESSA Management Committee.

Despite all these achievements and successes, Nigel is the most approachable, humble and kindest person you will ever meet. Whenever you are in trouble, he is the person who will take you a step forward with a helping hand. When asked, after receiving the CBE and so many other honours, which recognition makes him most happy, he said, with the unique Nigel Gilbert twinkle in his eye, “my Rosaria Conte Award”.

References

Collins, H. (1995). Science studies and machine intelligence. In Handbook of Science and Technology Studies, Revised Edition (pp. 286-301). SAGE Publications, Inc., https://doi.org/10.4135/9781412990127

Gilbert, N., & Doran, J. (Eds.). (1994). Simulating societies: the computer simulation of social phenomena. Routledge.

Gilbert, N. & Conte, R. (1995). Artificial Societies: the computer simulation of social life. Routledge. https://library.oapen.org/handle/20.500.12657/24305

Gilbert, N. (2019). Agent-based models. Sage Publications.

Gilbert, N., & Troitzsch, K. (2005). Simulation for the social scientist. Open University Press; 2nd edition.


Elsenbroich, C. & Ahrweiler, P. (2025) Nigel Gilbert. Review of Artificial Societies and Social Simulation, 3 Mar 2025. https://rofasss.org/2025/04/03/nigel-gilbert


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Rosaria Conte (1952–2016)

By Mario Paolucci

This is the “header piece” for a short series on those who have been awarded the “Rosaria Conte Outstanding Contribution Award for Social Simulation”, awarded by the European Social Simulation Association every two years. It makes no sense to describe those who have received this award without information about the person after whom it is named, so this piece is about her.

Rosaria Conte was one of the first researchers in Europe to recognize and champion agent-based social simulation. She became a leader of what would later become the ESSA community in the 1990s, chairing the 1997 ICCS&SS – First International Conference on Computer Simulation and the Social Sciences in Cortona, Italy, and co-editing with Nigel Gilbert the book Artificial Societies (Gilbert & Conte, 1995). With her unique approach, her open approach to interdisciplinarity, and her charisma, she inspired and united a generation of researchers who still pursue her scientific endeavour.

Known as a relentless advocate for cognitive agents in the agent-based modeling community, Conte stood firmly against the keep-it-simple principle. Instead, she argued that plausible agents—those capable of explaining complex social phenomena where immergence (Castelfranchi, 1998; Conte et al., 2009) is as critical as emergence—require explicit, theory-backed representations of cognitive artifacts (Conte & Paolucci, 2011).

Born in Foggia, Italy, Rosaria graduated in philosophy at the University of Rome La Sapienza in 1976, to later join the Italian National Research Council (Consiglio Nazionale delle Ricerche, CNR). In the ‘90s, she founded and directed the Laboratory of Agent-Based Social Simulation (LABSS) at the Institute of Cognitive Sciences and Technologies (ISTC-CNR). Under her leadership, LABSS became an internationally renowned hub for research on agent-based modeling and social simulation. Conte’s work at LABSS focused on the development of computational models to study complex social phenomena, including cooperation, reputation, and social norms.

Influenced by collaborators such as Cristiano Castelfranchi and Domenico Parisi, whose guidance helped shape her studies of social behavior through computational models, she proposed the integration of cognitive and social theories into agent-based models. Unlike approaches that treated agents as simple rule-followers, Rosaria emphasized the importance of incorporating cognitive and emotional processes into simulations. Her 1995 book, Cognitive and Social Action (Conte & Castelfranchi, 1995), became a landmark text in the field. The book employed their characteristic pre-formal approach—using logic formulas to illustrate relationships between concepts, without a fully developed system of postulates or theorem-proving tools. The reason for this approach was, as they noted, that “formalism sometimes disrupts implicit knowledge and theories” (p. 14). The ideas in the book, together with her attention to the dependence relations between agents (Sichman et al., 1998), would go on to inspire Rosaria’s approach to simulation throughout her career.

Rosaria’s research extended to the study of reputation and social norms. For reputation (Conte & Paolucci, 2002), an attempt to create a specific, cognitively grounded model was made with the Repage approach (Sabater et al., 2006). Regarding social norms (Andrighetto et al., 2007), she explored how norms emerge, spread, and influence individual and collective behavior. This work had practical implications for a range of fields, including organizational behavior, policy design, and conflict resolution. She had a key role in the largest recent attempt to create a center for complexity and social sciences, the FuturICT project (Conte et al., 2012).

Rosaria Conte held several leadership positions. She served as President of the European Social Simulation Society (ESSA) from 2010 to 2012. Additionally, she was President of the Italian Cognitive Science Association (AISC) from 2008 to 2009, member of the Italian Bioethics Committee (CNB) from 2013 to 2016, and Vice President of the Italian CNR Scientific Council.

You can watch an interview with Rosaria about FuturICT here: https://www.youtube.com/watch?v=ghgzt5zgGP8

References

Andrighetto, G., Campenni, M., Conte, R., & Paolucci, M. (2007). On the immergence of norms: A normative agent architecture. Proceedings of AAAI Symposium, Social and Organizational Aspects of Intelligence. http://www.aaai.org/Library/Symposia/Fall/fs07-04.php

Castelfranchi, C. (1998). Simulating with Cognitive Agents: The Importance of Cognitive Emergence. Proceedings of the First International Workshop on Multi-Agent Systems and Agent-Based Simulation, 26–44. http://portal.acm.org/citation.cfm?id=665578

Conte, R., Andrighetto, G., & Campennì, M. (2009). The Immergence of Norms in Agent Worlds. In H. Aldewereld, V. Dignum, & G. Picard (Eds.), Engineering Societies in the Agents World X (pp. 1–14). Springer. https://doi.org/10.1007/978-3-642-10203-5_1

Conte, R., & Castelfranchi, C. (1995). Cognitive and Social Action. London: UCL Press.

Conte, R., Gilbert, N., Bonelli, G., Cioffi-Revilla, C., Deffuant, G., Kertesz, J., Loreto, V., Moat, S., Nadal, J.-P., Sanchez, A., Nowak, A., Flache, A., San Miguel, M., & Helbing, D. (2012). Manifesto of computational social science. The European Physical Journal Special Topics, 214(1), 325–346. https://doi.org/10.1140/epjst/e2012-01697-8

Conte, R., & Paolucci, M. (2002). Reputation in Artificial Societies—Social Beliefs for Social Order. Springer. https://iris.unibs.it/retrieve/ddc633e2-a83d-4e2e-e053-3705fe0a4c80/Review%20of%20Conte%2C%20Rosaria%20and%20Paolucci%2C%20Mario_%20Reputation%20in%20Artificial%20Socie.pdf

Conte, R., & Paolucci, M. (2011). On Agent Based Modelling and Computational Social Science. Social Science Research Network Working Paper Series. https://doi.org/10.3389/fpsyg.2014.00668

Gilbert, N., & Conte, R. (Eds.). (1995). Artificial Societies: The Computer Simulation of Social Life. Taylor & Francis, Inc. https://library.oapen.org/bitstream/handle/20.500.12657/24305/1005826.pdf

Sabater, J., Paolucci, M., & Conte, R. (2006). Repage: REPutation and ImAGE Among Limited Autonomous Partners. Journal of Artificial Societies and Social Simulation, 9(2). http://jasss.soc.surrey.ac.uk/9/2/3.html

Sichman, J. S., Conte, R., Demazeau, Y., & Castelfranchi, C. (1998). A social reasoning mechanism based on dependence networks. 416–420.


Paolucci, M. (2025) Rosaria Conte (1952-2016). Review of Artificial Societies and Social Simulation, 11 Feb 2025. https://rofasss.org/2025/02/11/rosariaconte/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Quantum computing in the social sciences

By Emile Chappin and Gary Polhill

The dream

What could quantum computing mean for the computational social sciences? Although quantum computing is at an early stage, this is the right time to dream about precisely that question for two reasons. First, we need to keep the computational social sciences ‘in the conversation’ about use cases for quantum computing to ensure our potential needs are discussed. Second, thinking about how quantum computing could affect the way we work in the computational social sciences could lead to interesting research questions, new insights into social systems and their uncertainties, and form the basis of advances in our area of work.

At first glance, quantum computing and the computational social sciences seem unrelated. Computational social science uses computer programs written in high-level languages to explore the consequences of assumptions as macro-level system patterns based on coded rules for micro-level behaviour (e.g., Gilbert, 2007). Quantum computing is in an early phase, with the state of the art being of the order of hundreds of qubits [1],[2], and a wide range of applications is envisioned (Hassija, 2020), e.g. in the areas of physics (Di Meglio et al., 2024) and drug discovery (Blunt et al., 2022). Hence, the programming of quantum computers is also in an early phase. Major companies (e.g., IBM, Microsoft, Alphabet, Intel, Rigetti Computing) are investing heavily and have raised high expectations – though how much of this is hyperbole to attract investors and how much is backed up by substance remains to be seen. This means it is still hard to comprehend what opportunities may come from scaling up.

Our dream is that quantum computing enables us to represent human decision-making on a much larger scale, do more justice to how decisions come about, and embrace the influences people have on each other. It would respect that people’s actual choices are undetermined until they have to act. On a philosophical level, these features are consistent with how quantum computation operates. Applying quantum computing to decision-making with interactions may help us inform or discover behavioural theory and contribute to complex systems science.

The mysticism around quantum computing

There is mysticism around what qubits are. To start thinking about how quantum computing could be relevant for computational social science, there is no direct need to understand how qubits are physically realised. However, it is necessary to understand the logic of how quantum computers operate. At the logical level, there are similarities between quantum and traditional computers.

The main similarity is that the building blocks are bits which, when measured, are either 0 or 1. A second similarity is that quantum computers work with ‘instructions’: quantum ‘processors’ alter the state of the bits in a ‘memory’ using programs that comprise sequences of ‘instructions’ (e.g., Sutor, 2019).

There are also differences. They are: 1) qubits are programmed to have probabilities of being a zero or a one, 2) qubits have no determined value until they are measured, and 3) multiple qubits can be entangled. The latter means the values (when measured) depend on each other.
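Difference 1) can be sketched in a few lines of plain Python (no quantum SDK assumed; all names here are invented for illustration). A qubit is described by two complex amplitudes whose squared magnitudes give the probabilities of reading 0 or 1:

```python
import math

def probabilities(state):
    """Measurement probabilities of a single-qubit state (alpha, beta)."""
    alpha, beta = state
    return abs(alpha) ** 2, abs(beta) ** 2

# A qubit starting as |0> and passed through a Hadamard instruction:
# an equal superposition of 0 and 1.
h = 1 / math.sqrt(2)
plus = (h, h)
p0, p1 = probabilities(plus)   # both 0.5: the value is undetermined until read
```

This captures difference 2) as well: until probabilities() is interpreted as a measurement, the state holds both possibilities at once.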

Operationally speaking, quantum computers are expected to augment conventional computers in a ‘hybrid’ computing environment. This means we can expect to use traditional computer programs to do everything around a quantum program, not least to set up and analyse the outcomes.

Programming quantum computers

So far, programming languages for quantum computing are low-level, like assembly languages for conventional machines. Quantum programs are therefore written very close to ‘the hardware’. Similarly, in the early days of electronic computers, instructions were written directly for the processor to perform: punched cards contained machine-language instructions. Over time, computers got bigger, more was asked of them, and their use became more widespread and embedded in everyday life. At a practical level, different processors with different instruction sets, and ever-larger programs, became more and more unwieldy to program in machine language. Higher-level languages were developed, and reached a point where modellers could use them to describe and simulate dynamic systems. Our code is still ultimately translated into these lower-level instructions when we compile software, or it is interpreted at run-time. The instructions now developed for quantum computing are akin to those of the early days of conventional computing, but the development of higher-level programming languages for quantum computers may happen quickly.

At the start, qubits are put in entangled states (e.g., Sutor, 2019); the number of qubits at your disposal makes up the memory. A quantum computer program is a set of instructions that is followed. Each instruction alters the memory, but only by changing the probabilities of qubits being 0 or 1 and their entanglement. Instruction sets are packaged into so-called quantum circuits. The instructions operate on all qubits at the same time (you can think of this in terms of all probabilities needing to add up to 100%). This means the speed of a quantum program does not depend on the scale of the computation in terms of the number of qubits, but only on the number of instructions executed in a program. Since qubits can be entangled, quantum computing can do calculations that would take too long to run on a normal computer.
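A minimal two-qubit sketch (pure Python, invented for illustration, not tied to any real quantum SDK) shows how instructions act on the whole state at once and how entanglement arises. The state holds four amplitudes for the outcomes 00, 01, 10 and 11; an 'instruction' is a 4x4 matrix applied to all of them:

```python
import math

def apply(gate, state):
    """Apply one 'instruction' (a 4x4 matrix) to the whole two-qubit state."""
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]

h = 1 / math.sqrt(2)
# Hadamard on the first qubit: puts it in an equal superposition.
H0 = [[h, 0,  h,  0],
      [0, h,  0,  h],
      [h, 0, -h,  0],
      [0, h,  0, -h]]
# CNOT: flips the second qubit exactly when the first is 1, entangling the two.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

bell = apply(CNOT, apply(H0, [1, 0, 0, 0]))   # start from state 00
probs = [abs(a) ** 2 for a in bell]
# Only 00 and 11 have non-zero probability: the two measured values always agree.
```

Note that apply() touches every amplitude regardless of which qubit the instruction nominally targets, which is the sense in which instructions operate on all qubits at the same time.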

Quantum instructions are typically their own inverse: if you execute an instruction twice, you are back at the state before the first operation. This means you can reverse a quantum program simply by executing it again, but with the instructions in reverse order. The only exception is the so-called ‘read’ instruction, by which the value of each qubit is determined to be either 1 or 0. This is the natural end of a quantum program.

Recent developments in quantum computing and their roadmaps

Several large companies such as Microsoft, IBM and Alphabet are investing heavily in developing quantum computing. The current route is to scale these computers up with respect to the number of qubits they have and the number of gates (instructions) that can be run. In its roadmap, IBM suggests growing to 7,500 instructions as early as 2025 [3]. At the same time, programming languages for quantum computing are being developed, on the basis of the types of instructions above. At the moment, researchers can gain access to actual quantum computers (or run quantum programs on simulated quantum hardware). For example, IBM’s Qiskit [4] is one of the first open-source software development kits for quantum computing.

A quantum computer doing agent-based modelling

The exponential growth in quantum computing capacity (Coccia et al., 2024) warrants considering how it may be used in the computational social sciences. Here is a first sketch. What if there is a behavioural theory that says something about ‘how’ different people decide on a specific behavioural action in a specific context? Can we translate observed behaviour into the properties of a quantum program and explore the consequences of what we can observe? Or, in contrast, can we unravel the assumptions underneath our observations? Could we look at alternative outcomes that could also have been possible in the same system, under the same conceptualisation? Given what we observe, what other system developments could have emerged that are also possible (and not highly unlikely)? Can we unfold possible pathways without brute-forcing a large experiment? These questions are, we believe, different when approached from the perspective of quantum computing. For one, the reversibility of quantum programs (until measuring) may provide unique opportunities. This also means that doing such analyses may inspire new kinds of social theory, or reflection on the use of existing theory.

One of the early questions is how we may use qubits to represent modelled elements in social simulations. Here we sketch basic alternative routes, with alternative ideas. For each strand we include a very rudimentary application to both Schelling’s model of segregation and the Traffic Basic model, both present in the NetLogo model library.

Qubits as agents

A basic option could be to represent an agent by a qubit. Thinking of one type of stylised behaviour, an action that can be taken, a qubit could then represent whether that action is taken or not. Instructions in the quantum program would capture the relations between actions that can be taken by the different agents, and interventions that may affect specific agents. For Schelling’s model, this would imply showing whether segregation takes place or not. For Traffic Basic, this would be the probability of having traffic jams. Scaling up would mean we would be able to represent many interacting agents without the simulation slowing down. This is, by design, abstract and stylised. But it may help to answer whether a dynamic simulation on a quantum computer can be obtained and visualised.

Decision rules coded in a quantum computer

A second option is for an agent to perform a quantum program as part of its decision rules. The decision-making structure should then match the logic of a quantum computer. This may be a relevant ontological reference to how brains work and some of the existing theory on cognition and behaviour. Consider a NetLogo model with agents that have a variety of properties that get translated into a quantum program. A key function for agents would be to perform a quantum calculation on the basis of a set of inputs. The program would then capture how different factors interact and whether the agent performs specific actions, i.e. shows particular behaviour. For Schelling’s segregation model, it would be the decision either to move (and in what direction) or not. For Traffic Basic it would lead to a unique conceptualisation of heterogeneous agents. For such simple models, however, it would not necessarily benefit from the scale advantage that quantum computers have, because most of the computation occurs on traditional computers and because of the limited scope of the decision logic of these models. Rather, it invites the development of much richer and very different representations of how decisions are made by humans. Different brain functions may all be captured: memory, awareness, attitudes, considerations, etc. If one agent’s decision-making structure fits in a quantum computer, experiments can already be set up, running one agent after the other (just as happens on traditional computers). And if a small, reasonable number of agents would fit, one could imagine group-level developments. If not humans, this could represent companies that function together, either in a value chain or as competitors in a market. Because of this, it may be revolutionary: let us consider this quantum agent-based modelling.

Using entanglement

Intuitively, one could consider the entanglement of qubits to represent either the connection between different functions in decision-making, the dependencies between agents that would typically interact, or the effects of policy interventions on agent decisions. Entanglement of qubits could also represent the interaction of time steps, capturing path dependencies of choices that limit or determine future options. The reverse of memory is also conceivable: what if the simulation captures some form of anticipation by entangling future options in current choices? Simulations of decisions may then be limited, myopic in their ability to forecast. Thinking through such experiments, and doing the work, may inspire new heuristics that represent the bounded rationality of human decision-making. For Schelling’s model this could be local entanglement restricting movement, or movement restricted because of anticipated future events, which contributes to keeping the status quo. For Traffic Basic, one could forecast traffic jams and discover heuristics to avoid them, which in turn may inspire policy interventions.

Quantum programs representing system-level phenomena

The other end of the spectrum can also be conceived. As well as observing other agents, agents could interact with a system in order to make their observations and decisions, where the system they interact with is itself a quantum program. The system could be an environmental or physical system, for example. It would be able to have the stochastic, complex nature that real-world systems show. For some systems, problems could possibly be represented in an innovative way. For Schelling’s model, it could be a natural system with resources that agents benefit from if they are in the surroundings, the resources having their own dynamics depending on usage. For Traffic Basic, it may represent complexities in the road system that agents can account for while adjusting their speed.

Towards a roadmap for quantum computing in the social sciences

What would be needed to use quantum computation in the social sciences? What can we achieve by combining the power of high-performance computing with quantum computers as the latter scale up? Would it be possible to reinvent how we try to predict the behaviour of humans by embracing the domain of uncertainty that is also essential in how we may conceptualise cognition and decision-making? Is quantum agent-based modelling feasible at some point? And how do the potential advantages compare to bringing it into other methods in the social sciences (e.g. choice models)?

A roadmap would include the following activities:

  • Conceptualise human decision-making and interactions in terms of quantum computing. What are promising avenues of the ideas presented here and possibly others?
  • Develop instruction sets/logical building blocks that are ontologically linked to decision-making in the social sciences. Connect to developments for higher-level programming languages for quantum computing.
  • Develop a first example. One could think of reproducing one of the traditional models: either an agent-based model, such as Schelling’s model of segregation or Traffic Basic, or a cellular automaton model, such as the Game of Life. The latter may be conceptualised with a relatively small number of cells and could be a valuable demonstration of the possibilities.
  • Develop quantum computing software for agent-based modelling, e.g., as a quantum extension for NetLogo, MESA, or for other agent-based modelling packages.

Let us become inspired to develop a more detailed roadmap for quantum computing for the social sciences. Who wants to join in making this dream a reality?

Notes

[1] https://newsroom.ibm.com/2022-11-09-IBM-Unveils-400-Qubit-Plus-Quantum-Processor-and-Next-Generation-IBM-Quantum-System-Two

[2] https://www.fastcompany.com/90992708/ibm-quantum-system-two

[3] https://www.ibm.com/roadmaps/quantum/

[4] https://github.com/Qiskit/qiskit-ibm-runtime

References

Blunt, Nick S., Joan Camps, Ophelia Crawford, Róbert Izsák, Sebastian Leontica, Arjun Mirani, Alexandra E. Moylett, et al. “Perspective on the Current State-of-the-Art of Quantum Computing for Drug Discovery Applications.” Journal of Chemical Theory and Computation 18, no. 12 (December 13, 2022): 7001–23. https://doi.org/10.1021/acs.jctc.2c00574.

Coccia, M., S. Roshani and M. Mosleh, “Evolution of Quantum Computing: Theoretical and Innovation Management Implications for Emerging Quantum Industry,” in IEEE Transactions on Engineering Management, vol. 71, pp. 2270-2280, 2024, https://doi.org/10.1109/TEM.2022.3175633.

Di Meglio, Alberto, Karl Jansen, Ivano Tavernelli, Constantia Alexandrou, Srinivasan Arunachalam, Christian W. Bauer, Kerstin Borras, et al. “Quantum Computing for High-Energy Physics: State of the Art and Challenges.” PRX Quantum 5, no. 3 (August 5, 2024): 037001. https://doi.org/10.1103/PRXQuantum.5.037001.

Gilbert, N., Agent-based models. SAGE Publications Ltd, 2007. ISBN 978-141-29496-44

Hassija, V., Chamola, V., Saxena, V., Chanana, V., Parashari, P., Mumtaz, S. and Guizani, M. (2020), Present landscape of quantum computing. IET Quantum Commun., 1: 42-48. https://doi.org/10.1049/iet-qtc.2020.0027

Sutor, R. S. (2019). Dancing with Qubits: How quantum computing works and how it can change the world. Packt Publishing Ltd.


Chappin, E. & Polhill, G. (2024) Quantum computing in the social sciences. Review of Artificial Societies and Social Simulation, 25 Sep 2024. https://rofasss.org/2024/09/24/quant


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Agent-Based Modelling Pioneers: An Interview with Jim Doran

By David Hales and Jim Doran

Jim Doran is an ABM pioneer, specifically in applying ABM to social phenomena. He has been working on these ideas since the 1960s. His work made a major contribution to establishing the area as it exists today.

In fact Jim has made significant contributions in many areas related to computation such as Artificial Intelligence (AI), Distributed AI (DAI) and Multi-agent Systems (MAS).

I know Jim — he was my PhD supervisor (at the University of Essex) so I had regular meetings with him over a period of about four years. It is hard to capture both the depth and breadth of Jim’s approach. Basically he thinks big. I mean really big! — yet plausibly and precisely. This is a very difficult trick to pull off. Believe me I’ve tried.

He retired from Essex almost two decades ago but continues to work on a number of very innovative ABM related projects that are discussed in the interview.

The interview was conducted over e-mail in August. We did a couple of iterations and included references to the work mentioned.


According to your webpage at the University of Essex [1], your background was originally mathematics and then Artificial Intelligence (working with Donald Michie at Edinburgh). In those days AI was a very new area. I wonder if you could say a little about how you came to work with Michie and what kind of things you worked on?

Whilst reading Mathematics at Oxford, I both joined the University Archaeological Society (inspired by the TV archaeologist of the day, Sir Mortimer Wheeler) becoming a (lowest grade) digger and encountering some real archaeologists like Dennis Britten, David Clarke and Roy Hodson, and also, at postgraduate level, was lucky enough to come under the influence of a forward thinking and quite distinguished biometrist, Norman T. J. Bailey, who at that time was using a small computer (an Elliot 803, I think it was) to simulate epidemics — i.e. a variety of computer simulation of social phenomena (Bailey 1967). One day, Bailey told me of this crazy but energetic Reader at Edinburgh University, Donald Michie, who was trying to program computers to play games and to display AI, and who was recruiting assistants. In due course I got a job as a Research Assistant / Junior Research Fellow in Michie’s group (the EPU, for Experimental Programming Unit). During the war Michie had worked with and had been inspired by Alan Turing (see: Lee and Holtzman 1995) [2].

Given that these were the very early days of AI, what was it like working at the EPU at that time? Did you meet any other early AI researchers there?

Well, I remember plenty of energy, plenty of parties and visitors from all over including both the USSR (not easy at that time!) and the USA. The people I was working alongside – notably, but not only, Rod Burstall [3], (the late) Robin Popplestone [4], Andrew Ortony [5] – have typically had very successful academic research careers.

I notice that you wrote a paper with Michie in 1966 “Experiments with the graph traverser program”. Am I right, that this is a very early implementation of a generalised search algorithm?

When I took up the research job in Edinburgh at the EPU, in 1964 I think, Donald Michie introduced me to the work by Arthur Samuel on a learning Checkers playing program (Samuel 1959) and proposed to me that I attempt to use Samuel’s rather successful ideas and heuristics to build a general problem solving program — as a rival to the existing if somewhat ineffective and pretentious Newell, Shaw and Simon GPS (Newell et al 1959). The Graph Traverser was the result – one of the first standardised heuristic search techniques and a significant contribution to the foundations of that branch of AI (Doran and Michie 1966) [6]. It’s relevant to ABM because cognition involves planning and AI planning systems often use heuristic search to create plans that when executed achieve desired goals.

Can you recall when you first became aware of and / or began to think about simulating social phenomena using computational agents?

I guess the answer to your question depends on the definition of “computational agent”. My definition of a “computational agent” (today!) is any locus of slightly human like decision-making or behaviour within a computational process. If there is more than one then we have a multi-agent system.

Given the broad context that brought me to the EPU it was inevitable that I would get to think about what is now called agent based modelling (ABM) of social systems – note that archaeology is all about social systems and their long term dynamics! Thus in my (rag bag!) postgraduate dissertation (1964), I briefly discussed how one might simulate on a computer the dynamics of the set of types of pottery (say) characteristic of a particular culture – thus an ABM of a particular type of social dynamics. By 1975 I was writing a critical review of past mathematical modelling and computer simulation in archaeology with prospects (chapter 11 of Doran and Hodson, 1975).

But I didn’t myself use the word “agent” in a publication until, I believe, 1985 in a chapter I contributed to the little book by Gilbert and Heath (1985). Earlier I tended to use the word “actor” with the same meaning. Of course, once Distributed AI emerged as a branch of AI, ABM too was bound to emerge.

Didn’t you write a paper once titled something like “experiments with a pleasure seeking ant in a grid world”? I ask this speculatively because I have some memory of it but can find no references to it on the web.

Yes. The title you are after is “Experiments with a pleasure seeking automaton” published in the volume Machine Intelligence 3 (edited by Michie from the EPU) in 1968. And there was a follow up paper in Machine Intelligence 4 in 1969 (Doran 1968; 1969). These early papers address the combination of heuristic search with planning, plan execution and action within a computational agent but, as you just remarked, they attracted very little attention.

You make an interesting point about how you, today, define a computational agent. Do you have any thoughts on how one would go about trying to identify “agents” in a computational, or other, process? It seems as humans we do this all the time, but could we formalise it in some way?

Yes. I have already had a go at this, in a very limited way. It really boils down to, given the specification of a complex system, searching thru it for subsystems that have particular properties e.g. that demonstrably have memory within their structure of what has happened to them. This is a matter of finding a consistent relationship between the content of the hypothetical agent’s hypothetical memory and the actual input-output history (within the containing complex system) of that hypothetical agent – but the searches get very large. See, for example, my 2002 paper “Agents and MAS in STaMs” (Doran 2002).

From your experience what would you say are the main benefits and limitations of working with agent-based models of social phenomena?

The great benefit is, I feel, precision – the same benefit that mathematical models bring to science generally – including the precise handling of cognitive factors. The computer supports the derivation of the precise consequences of precise assumptions way beyond the powers of the human brain. A downside is that precision often implies particularisation. One can state easily enough that “cooperation is usually beneficial in complex environments”, but demonstrating the truth or otherwise of this vague thesis in computational terms requires precise specification of “cooperation”, “complex” and “environment”, and one often ends up trying to prove many different results corresponding to the many different interpretations of the thesis.

You’ve produced a number of works that could be termed “computationally assisted thought experiments”, for example, your work on foreknowledge (Doran 1997) and collective misbelief (1998). What do you think makes for a “good” computational thought experiment?

If an experiment and its results casts light upon the properties of real social systems or of possible social systems (and what social systems are NOT possible?), then that has got to be good if only by adding to our store of currently useless knowledge!

Perhaps I should clarify: I distinguish sharply between human societies (and other natural societies) and computational societies. The latter may be used as models of the former, but can be conceived, created and studied in their own right. If I build a couple of hundred or so learning and intercommunicating robots and let them play around in my back garden, perhaps they will evolve a type of society that has NEVER existed before… Or can it be proved that this is impossible?

The recently reissued classic book “Simulating Societies” (Gilbert and Doran 1994, 2018) contains contributions from several of the early researchers working in the area. Could you say a little about how this group came together?

Well – better to ask Nigel Gilbert this question – he organised the meeting that gave rise to the book, and although it’s quite likely I was involved in the choice of invitees, I have no memory. But note there were two main types of contributor – the mainstream social science oriented and the archaeologically oriented, corresponding to Nigel and myself respectively.

Looking back, what would you say have been the main successes in the area?

So many projects have been completed and are ongoing — I’m not going to try to pick out one or two as particularly successful. But getting the whole idea of social science ABM established and widely accepted as useful or potentially useful (along with AI, of course) is a massive achievement.

Looking forward, what do you think are the main challenges for the area?

There are many but I can give two broad challenges:

(i) Finding out how best to discover what levels of abstraction are both tractable and effective in particular modelling domains. At present I get the impression that the level of abstraction of a model is usually set by whatever seems natural or for which there is precedent – but that is too simple.

(ii) Stopping the use of AI and social ABM from being dominated by military and business applications that benefit only particular interests. I am quite pessimistic about this. It seems all too clear that when the very survival of nations, or totalitarian regimes, or massive global corporations is at stake, ethical and humanitarian restrictions and prohibitions, even those internationally agreed and promulgated by the UN, will likely be ignored. Compare, for example, the recent talk by Cristiano Castelfranchi entitled “For a Science-oriented AI and not Servant of the Business” (Castelfranchi 2018).

What are you currently thinking about?

Three things. Firstly, my personal retirement project, MoHAT — how best to use AI and ABM to help discover effective methods of achieving much needed global cooperation.

The obvious approach is: collect LOTS of global data, build a theoretically supported and plausible model, try to validate it and then try out different ways of enhancing cooperation. MoHAT, by contrast, emphasises:

(i) Finding a high level of abstraction for modelling which is effective but tractable.

(ii) Finding particular long time span global models by reference to fundamental boundary conditions, not by way of observations at particular times and places. This involves a massive search through possible combinations of basic model elements but computers are good at that — hence AI Heuristic Search is key.

(iii) Trying to overcome the ubiquitous reluctance of global organisational structures, e.g. nation states, fully to cooperate – by exploring, for example, what actions leading to enhanced global cooperation, if any, are available to one particular state.

Of course, any form of globalism is currently politically unpopular — MoHAT is swimming against the tide!

Full details of MoHAT (including some simple computer code) are in the corresponding project entry in my Research Gate profile (Doran 2018a).

Secondly, Gillian’s Hoop and how one assesses its plausibility as a “modern” metaphysical theory. Gillian’s Hoop is a somewhat wild speculation that one of my daughters came up with a few years ago: we are all avatars in a virtual world created by game players in a higher world who in fact are themselves avatars in a virtual world created by players in a yet higher world … with the upward chain of virtual worlds ultimately linking back to form a hoop! Think about that!

More generally I conjecture that metaphysical systems (e.g. the Roman Catholicism that I grew up with, Gillian’s Hoop, Iamblichus’ system [7], Homer’s) all emerge from the properties of our thought processes. The individual comes up with generalised beliefs and possibilities (e.g. Homer’s flying chariot) and these are socially propagated, revised and pulled together into coherent belief systems. This is little to do with what is there, much more to do with the processes that modify beliefs. This is not a new idea, of course, but it would be good to ground it in some computational modelling.

Again, there is a project description on Research Gate (Doran 2018b).

Finally, I’m thinking about planning and imagination and their interactions and consequences. I’ve put together a computational version of our basic subjective stream of thoughts (incorporating both directed and associative thinking) that can be used to address imagination and its uses. This is not as difficult to come up with as might at first appear. And then comes a conjecture — given ANY set of beliefs, concepts, memories etc in a particular representation system (cf. AI Knowledge Representation studies) it will be possible to define a (or a few) modification processes that bring about generalisations and imaginations – all needed for planning — which is all about deploying imaginations usefully.

In fact I am tempted to follow my nose and assert that:

Imagination is required for planning (itself required for survival in complex environments) and necessarily leads to “metaphysical” belief systems

Might be a good place to stop – any further and I am really into fantasy land…

Notes

  1. Archived copy of Jim Doran’s University of Essex homepage: https://bit.ly/2Pdk4Nf
  2. Also see an online video of some of the interviews, including with Michie, used as a source for the Lee and Holtzman paper: https://youtu.be/6p3mhkNgRXs
  3. https://en.wikipedia.org/wiki/Rod_Burstall
  4. https://en.wikipedia.org/wiki/Robin_Popplestone
  5. https://www.researchgate.net/profile/Andrew_Orton
  6. See also discussion of the historical context of the Graph Traverser in Russell and Norvig (1995).
  7. https://en.wikipedia.org/wiki/Iamblichus

References

Bailey, Norman T. J. (1967) The simulation of stochastic epidemics in two dimensions. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 4: Biology and Problems of Health, 237–257, University of California Press, Berkeley, Calif. https://bit.ly/2or7sqp

Castelfranchi, C. (2018) For a Science-oriented AI and not Servant of the Business. Powerpoint file available from the author on request at Research Gate: https://www.researchgate.net/profile/Cristiano_Castelfranchi

Doran, J.E. and Michie, D. (1966) Experiments with the Graph Traverser Program. Proceedings of the Royal Society A, 294(1437): 235–259.

Doran, J.E. (1968) Experiments with a pleasure seeking automaton. In Machine Intelligence 3 (ed. D. Michie) Edinburgh University Press, pp 195-216.

Doran, J.E. (1969) Planning and generalization in an automaton-environment system. In Machine Intelligence 4 (eds. B. Meltzer and D. Michie) Edinburgh University Press. pp 433-454.

Doran, J.E and Hodson, F.R (1975) Mathematics and Computers in Archaeology. Edinburgh University Press, 1975 [and Harvard University Press, 1976]

Doran, J.E. (1997) Foreknowledge in Artificial Societies. In: Conte R., Hegselmann R., Terna P. (eds) Simulating Social Phenomena. Lecture Notes in Economics and Mathematical Systems, vol 456. Springer, Berlin, Heidelberg. https://bit.ly/2Pf5Onv

Doran, J.E. (1998) Simulating Collective Misbelief. Journal of Artificial Societies and Social Simulation vol. 1, no. 1, http://jasss.soc.surrey.ac.uk/1/1/3.html

Doran, J.E. (2002) Agents and MAS in STaMs. In Foundations and Applications of Multi-Agent Systems: UKMAS Workshop 1996-2000, Selected Papers (eds. M d’Inverno, M Luck, M Fisher, C Preist), Springer Verlag, LNCS 2403, July 2002, pp. 131-151. https://bit.ly/2wsrHYG

Doran, J.E. (2018a) MoHAT — a new AI heuristic search based method of DISCOVERING and USING tractable and reliable agent-based computational models of human society. Research Gate Project: https://bit.ly/2lST35a

Doran, J.E. (2018b) An Investigation of Gillian’s HOOP: a speculation in computer games, virtual reality and METAPHYSICS. Research Gate Project: https://bit.ly/2C990zn

Gilbert, N. and Doran, J.E. eds. (2018) Simulating Societies: The Computer Simulation of Social Phenomena. Routledge Library Editions: Artificial Intelligence, Vol 6, Routledge: London and New York.

Gilbert, N. and Heath, C. (1985) Social Action and Artificial Intelligence. London: Gower.

Lee, J. and Holtzman, G. (1995) 50 Years after breaking the codes: interviews with two of the Bletchley Park scientists. IEEE Annals of the History of Computing, vol. 17, no. 1, pp. 32-43. https://ieeexplore.ieee.org/document/366512/

Newell, A.; Shaw, J.C.; Simon, H.A. (1959) Report on a general problem-solving program. Proceedings of the International Conference on Information Processing. pp. 256–264.

Russell, S. and Norvig, P. (1995) Artificial Intelligence: A Modern Approach. Prentice-Hall, First edition, pp. 86, 115-117.

Samuel, Arthur L. (1959) “Some Studies in Machine Learning Using the Game of Checkers”. IBM Journal of Research and Development. doi:10.1147/rd.441.0206.


Hales, D. and Doran, J. (2018) Agent-Based Modelling Pioneers: An Interview with Jim Doran, Review of Artificial Societies and Social Simulation, 4th September 2018. https://rofasss.org/2018/09/04/dh/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)