
Quantum computing in the social sciences

By Emile Chappin and Gary Polhill

The dream

What could quantum computing mean for the computational social sciences? Although quantum computing is at an early stage, this is the right time to dream about precisely that question for two reasons. First, we need to keep the computational social sciences ‘in the conversation’ about use cases for quantum computing to ensure our potential needs are discussed. Second, thinking about how quantum computing could affect the way we work in the computational social sciences could lead to interesting research questions, yield new insights into social systems and their uncertainties, and form the basis of advances in our area of work.

At first glance, quantum computing and the computational social sciences seem unrelated. Computational social science uses computer programs written in high-level languages to explore the consequences of assumptions as macro-level system patterns based on coded rules for micro-level behaviour (e.g., Gilbert, 2007). Quantum computing is in an early phase, with the state of the art on the order of hundreds of qubits [1],[2], and a wide range of applications is envisioned (Hassija et al., 2020), e.g., in the areas of physics (Di Meglio et al., 2024) and drug discovery (Blunt et al., 2022). Hence, the programming of quantum computers is also in an early phase. Major companies (e.g., IBM, Microsoft, Alphabet, Intel, Rigetti Computing) are investing heavily and have raised high expectations – though how much of this is hyperbole to attract investors and how much is backed up by substance remains to be seen. This means it is still hard to comprehend what opportunities may come from scaling up.

Our dream is that quantum computing enables us to represent human decision-making on a much larger scale, do more justice to how decisions come about, and embrace the influences people have on each other. It would respect that people’s actual choices remain undetermined until the moment they act. On a philosophical level, these features are consistent with how quantum computation operates. Applying quantum computing to decision-making with interactions may help us inform or discover behavioural theory and contribute to complex systems science.

The mysticism around quantum computing

There is mysticism around what qubits are. To start thinking about how quantum computing could be relevant for computational social science, there is no direct need to understand the physics of how qubits are realised. However, it is necessary to understand the logic of how quantum computers operate. At the logical level, there are similarities between quantum and traditional computers.

The main similarity is that the building blocks are bits which, when measured, are either 0 or 1. A second similarity is that quantum computers work with ‘instructions’: quantum ‘processors’ alter the state of the bits in a ‘memory’ using programs that comprise sequences of ‘instructions’ (e.g., Sutor, 2019).

There are also differences: 1) qubits are programmed to have probabilities of being a zero or a one, 2) qubits have no determined value until they are measured, and 3) multiple qubits can be entangled, meaning that their measured values depend on each other.
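As a small illustration of these properties, here is a minimal sketch using IBM’s Qiskit (see note [4]; we assume it is installed): a single qubit is put into an equal superposition, its probabilities can be inspected, and a definite 0 or 1 only appears when it is measured. The exact counts will differ per run.

```python
# Minimal sketch, assuming Qiskit is installed (pip install qiskit).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(1)       # a 'memory' of one qubit
qc.h(0)                      # Hadamard instruction: equal probability of 0 and 1

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())       # {'0': 0.5, '1': 0.5} -- probabilities, not values
print(state.sample_counts(shots=100))   # e.g. {'0': 52, '1': 48} -- values appear only on measurement
```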

Operationally speaking, quantum computers are expected to augment conventional computers in a ‘hybrid’ computing environment. This means we can expect to use traditional computer programs for everything around a quantum program, not least to set it up and to analyse the outcomes.

Programming quantum computers

So far, programming languages for quantum computing are low-level, like assembly languages for conventional machines. Quantum programs are therefore written very close to ‘the hardware’. Similarly, in the early days of electronic computers, programs were written directly as instructions for the processor to perform: punched cards contained machine-language instructions. Over time, computers got bigger, more was asked of them, and their use became more widespread and embedded in everyday life. At a practical level, writing ever-larger programs in machine language, for different processors with different instruction sets, became more and more unwieldy. Higher-level languages were developed, and reached a point where modellers could use them to describe and simulate dynamic systems. Our code is still ultimately translated into these lower-level instructions when we compile software, or it is interpreted at run-time. The instructions now being developed for quantum computing are akin to those of the early days of conventional computing, but higher-level programming languages for quantum computers may arrive quickly.

The qubits at your disposal make up the memory; a quantum program can put them into superposition and entangled states (e.g., Sutor, 2019). A quantum computer program is a sequence of instructions that is followed. Each instruction alters the memory, but only by changing the probabilities of qubits being 0 or 1 and their entanglement. Sequences of instructions are packaged into so-called quantum circuits. Although each instruction acts on one or a few qubits, it transforms the state of the memory as a whole (you can think of this in terms of all outcome probabilities needing to add up to 100%). This means the run time of a quantum program does not depend on the scale of the computation in number of qubits, but only on the number of instructions executed. Because qubits can be placed in superposition and entangled, quantum computers can, for some problems, do calculations that would take too long on a normal computer.
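A minimal sketch of such a circuit, again assuming Qiskit: a two-qubit memory, a program of two instructions, and entanglement such that the two measured values always agree.

```python
# Minimal sketch, assuming Qiskit is installed.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)   # a two-qubit memory
qc.h(0)                  # instruction 1: put qubit 0 into superposition
qc.cx(0, 1)              # instruction 2: entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # {'00': 0.5, '11': 0.5} -- the outcomes 01 and 10 never occur
```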

Many quantum instructions are their own inverse: if you execute such an instruction twice, you are back at the state before the first operation. More generally, a quantum program can be reversed by executing the inverses of its instructions in reverse order. The only exception is the so-called ‘read’ (measurement) instruction, by which each qubit takes a determined value of either 0 or 1. This is the natural end of a quantum program.
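The following sketch (Qiskit assumed) illustrates this reversibility: a short program is followed by its inverse, and the memory returns to its initial state.

```python
# Minimal sketch, assuming Qiskit is installed.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

undo = qc.inverse()             # the same instructions, inverted and in reverse order
round_trip = qc.compose(undo)   # run the program, then undo it

print(Statevector.from_instruction(round_trip).probabilities_dict())  # approximately {'00': 1.0}
```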

Recent developments in quantum computing and their roadmaps

Several large companies such as Microsoft, IBM and Alphabet are investing heavily in developing quantum computing. The current route is to scale these computers up with respect to the number of qubits they have and the number of gates (instructions) that can be run. IBM’s roadmap suggests growing to 7,500 instructions as early as 2025 [3]. At the same time, programming languages for quantum computing are being developed on the basis of the types of instructions described above. Researchers can already gain access to actual quantum computers (or run quantum programs on simulated quantum hardware). For example, IBM’s Qiskit [4] is one of the first open-source software development kits for quantum computing.

A quantum computer doing agent-based modelling

The exponential growth in quantum computing capacity (Coccia et al., 2024) warrants considering how it may be used in the computational social sciences. Here is a first sketch. Suppose there is a behavioural theory that says something about how different people decide, in a specific context, on a specific behavioural action. Can we translate observed behaviour into the properties of a quantum program and explore the consequences of what we can observe? Or, conversely, can we unravel the assumptions underlying our observations? Could we look at alternative outcomes that would also have been possible in the same system, under the same conceptualisation? Given what we observe, what other system developments could also have emerged that are possible (and not highly unlikely)? Can we unfold possible pathways without brute-forcing a large experiment? These questions, we believe, take on a different character when approached from a quantum computing perspective. For one, the reversibility of quantum programs (until measurement) may provide unique opportunities. Doing such analyses may also inspire new kinds of social theory, or prompt reflection on the use of existing theory.

One of the early questions is how we may use qubits to represent modelled elements in social simulations. Here we sketch some basic alternative routes. For each route we include a very rudimentary application to both Schelling’s model of segregation and the Traffic Basic model, both present in the NetLogo model library.

Qubits as agents

A basic option is to represent an agent by a qubit. Thinking of one type of stylised behaviour, an action that can be taken, a qubit could represent whether that action is taken or not. Instructions in the quantum program would capture the relations between the actions that can be taken by the different agents, and interventions that may affect specific agents. For Schelling’s model, the measured outcome would have to show whether segregation takes place or not. For Traffic Basic, it would be the probability of traffic jams forming. Scaling up would mean we could represent many interacting agents without the simulation slowing down. This is, by design, abstract and stylised. But it may help to answer whether a dynamic simulation on a quantum computer can be obtained and visualised.
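Purely as an illustration of this route (Qiskit assumed; the rotation angle and the chain of neighbour influences are arbitrary choices of ours, not part of Schelling’s model or Traffic Basic), five agents are represented by five qubits, and each measured snapshot shows which agents take the action:

```python
# Very rudimentary sketch, assuming Qiskit is installed. Each qubit is one agent;
# measuring gives one possible system outcome: which agents act (1) and which do not (0).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n_agents = 5
qc = QuantumCircuit(n_agents)

for agent in range(n_agents):
    qc.ry(0.8, agent)            # hypothetical 'inclination to act' (angle chosen arbitrarily)
for agent in range(n_agents - 1):
    qc.cx(agent, agent + 1)      # hypothetical influence between neighbouring agents

counts = Statevector.from_instruction(qc).sample_counts(shots=10)
print(counts)                    # e.g. {'00000': 4, '00111': 1, ...} -- ten possible snapshots
```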

Decision rules coded in a quantum computer

A second option is for an agent to perform a quantum program as part of its decision rules. The decision-making structure should then match the logic of a quantum computer. This may be an ontologically relevant reference to how brains work and to some of the existing theory on cognition and behaviour. Consider a NetLogo model with agents that have a variety of properties that get translated into a quantum program. A key function would be that the agent performs a quantum calculation on the basis of a set of inputs. The program would then capture how different factors interact and whether the agent performs specific actions, i.e., shows particular behaviour. For Schelling’s segregation model, it would be the decision either to move (and in what direction) or not. For Traffic Basic, it would lead to a unique conceptualisation of heterogeneous agents. For such simple models this would not necessarily benefit from the scale advantage that quantum computers have, because most of the computation still occurs on traditional computers and the decision logic of these models is limited in scope. Rather, it invites the development of much richer and very different representations of how humans make decisions. Different brain functions may all be captured: memory, awareness, attitudes, considerations, etc. If one agent’s decision-making structure fits on a quantum computer, experiments can already be set up by running one agent after the other (just as happens on traditional computers). And if a small but reasonable number of agents fits, one could imagine group-level developments. Instead of humans, the agents could represent companies that function together, either in a value chain or as competitors in a market. Because of this, it may be revolutionary: let us consider this quantum agent-based modelling.
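A hedged sketch of this idea (Qiskit assumed; the function name and the mapping from neighbourhood similarity to a rotation angle are our own illustrative inventions, not an established behavioural theory): a host model such as NetLogo or MESA would pass a classical input to a tiny quantum program that decides whether a Schelling-style agent moves.

```python
# Hedged sketch, assuming Qiskit is installed. The decision rule of one agent is a
# one-qubit quantum program parameterised by a classical input from the host model.
import math
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def decide_to_move(fraction_similar: float) -> bool:
    """Hypothetical Schelling-style rule: fewer similar neighbours, higher chance of moving."""
    qc = QuantumCircuit(1)
    qc.ry((1.0 - fraction_similar) * math.pi, 0)   # map dissatisfaction to a rotation angle
    outcome = Statevector.from_instruction(qc).sample_counts(shots=1)
    return '1' in outcome                          # measuring the qubit settles the decision

print(decide_to_move(0.25))   # mostly dissimilar neighbours: moves with high probability
print(decide_to_move(0.90))   # mostly similar neighbours: rarely moves
```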

Using entanglement

Intuitively, one could consider the entanglement of qubits to represent the connections between different functions in decision-making, the dependencies between agents that would typically interact, or the effects of policy interventions on agent decisions. Entanglement of qubits could also represent the interaction of time steps, capturing path dependencies of choices that limit or determine future options. The reverse of memory is also conceivable: what if the simulation captures some form of anticipation by entangling future options with current choices? Simulations of decisions may then be limited or myopic in their ability to forecast. Thinking through such experiments and doing the work may inspire new heuristics that represent the bounded rationality of human decision-making. For Schelling’s model, this could be local entanglement restricting movement, or movement being restricted because of anticipated future events, which contributes to keeping the status quo. For Traffic Basic, one could forecast traffic jams and discover heuristics to avoid them, which in turn may inspire policy interventions.
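One of these options can be sketched as follows (Qiskit assumed; the framing as two drivers and the chosen angle are illustrative only): a graded dependency between two agents’ choices, in which the second driver is likely, but not certain, to brake when the first one does.

```python
# Hedged sketch, assuming Qiskit is installed: partial entanglement as a stand-in
# for a dependency between two interacting agents (e.g. two drivers in Traffic Basic).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)             # driver 0 is undecided about braking
qc.cry(2.5, 0, 1)   # if driver 0 brakes, driver 1 probably (not certainly) brakes too

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # driver 0 braking while driver 1 does not occurs in only ~5% of cases
```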

Quantum programs representing system-level phenomena

The other end of the spectrum can also be conceived. As well as observing other agents, agents could interact with a system in order to make their observations and decisions, where that system is itself a quantum program. The system could be an environmental or physical system, for example. It would be able to have the stochastic, complex nature that real-world systems show. For some systems, problems could possibly be represented in an innovative way. For Schelling’s model, it could be a natural system with resources that agents benefit from when they are in the surroundings, the resources having their own dynamics depending on usage. For Traffic Basic, it may represent complexities in the road system that agents can account for while adjusting their speed.
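A hedged sketch of this route (Qiskit assumed; the single ‘road conditions’ qubit, the drift angle and the speed rule are illustrative inventions of ours): the environment is a small quantum program whose probability of congestion drifts over time, and classical agents sample it each tick to set their speed.

```python
# Hedged sketch, assuming Qiskit is installed: the environment as a quantum program
# that classical agents query. One qubit stands in for 'road conditions'.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def sample_congestion(tick: int) -> bool:
    """One measured snapshot of the road system at a given tick."""
    env = QuantumCircuit(1)
    for _ in range(tick):
        env.ry(0.3, 0)   # each tick nudges the probability of congestion upwards
    return '1' in Statevector.from_instruction(env).sample_counts(shots=1)

for tick in range(0, 12, 3):
    speed = 0.5 if sample_congestion(tick) else 1.0   # agents slow down if they observe congestion
    print(tick, speed)
```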

Towards a roadmap for quantum computing in the social sciences

What would be needed to use quantum computation in the social sciences? What can we achieve by combining the power of high-performance computing with quantum computers, once the latter scale up? Would it be possible to reinvent how we try to predict the behaviour of humans by embracing the uncertainty that is also essential to how we conceptualise cognition and decision-making? Is quantum agent-based modelling feasible at some point? And how do the potential advantages compare to bringing quantum computing into other methods in the social sciences (e.g., choice models)?

A roadmap would include the following activities:

  • Conceptualise human decision-making and interactions in terms of quantum computing. Which of the ideas presented here, and possibly others, are promising avenues?
  • Develop instruction sets/logical building blocks that are ontologically linked to decision-making in the social sciences. Connect to developments for higher-level programming languages for quantum computing.
  • Develop a first example. One could think of reproducing one of the traditional models: either an agent-based model, such as Schelling’s model of segregation or Traffic Basic, or a cellular automaton model, such as the Game of Life. The latter may be conceptualised with a relatively small number of cells and could be a valuable demonstration of the possibilities.
  • Develop quantum computing software for agent-based modelling, e.g., as a quantum extension for NetLogo, MESA, or for other agent-based modelling packages.

Let us become inspired to develop a more detailed roadmap for quantum computing for the social sciences. Who wants to join in making this dream a reality?

Notes

[1] https://newsroom.ibm.com/2022-11-09-IBM-Unveils-400-Qubit-Plus-Quantum-Processor-and-Next-Generation-IBM-Quantum-System-Two

[2] https://www.fastcompany.com/90992708/ibm-quantum-system-two

[3] https://www.ibm.com/roadmaps/quantum/

[4] https://github.com/Qiskit/qiskit-ibm-runtime

References

Blunt, Nick S., Joan Camps, Ophelia Crawford, Róbert Izsák, Sebastian Leontica, Arjun Mirani, Alexandra E. Moylett, et al. “Perspective on the Current State-of-the-Art of Quantum Computing for Drug Discovery Applications.” Journal of Chemical Theory and Computation 18, no. 12 (December 13, 2022): 7001–23. https://doi.org/10.1021/acs.jctc.2c00574.

Coccia, M., S. Roshani and M. Mosleh, “Evolution of Quantum Computing: Theoretical and Innovation Management Implications for Emerging Quantum Industry,” in IEEE Transactions on Engineering Management, vol. 71, pp. 2270-2280, 2024. https://doi.org/10.1109/TEM.2022.3175633.

Di Meglio, Alberto, Karl Jansen, Ivano Tavernelli, Constantia Alexandrou, Srinivasan Arunachalam, Christian W. Bauer, Kerstin Borras, et al. “Quantum Computing for High-Energy Physics: State of the Art and Challenges.” PRX Quantum 5, no. 3 (August 5, 2024): 037001. https://doi.org/10.1103/PRXQuantum.5.037001.

Gilbert, N., Agent-based models. SAGE Publications Ltd, 2007. ISBN 978-1-4129-4964-4

Hassija, V., Chamola, V., Saxena, V., Chanana, V., Parashari, P., Mumtaz, S. and Guizani, M. (2020), Present landscape of quantum computing. IET Quantum Commun., 1: 42-48. https://doi.org/10.1049/iet-qtc.2020.0027

Sutor, R. S. (2019). Dancing with Qubits: How quantum computing works and how it can change the world. Packt Publishing Ltd.


Chappin, E. & Polhill, G (2024) Quantum computing in the social sciences. Review of Artificial Societies and Social Simulation, 25 Sep 2024. https://rofasss.org/2024/09/24/quant


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Escaping the modelling crisis

By Emile Chappin

Let me explain something I call the ‘modelling crisis’. It is something that many modellers encounter in one way or another. By being aware of it, we may resolve such a crisis, avoid frustration, and, hopefully, save the world from some bad modelling.

Views on modelling

I first present two views on modelling. Bear with me!

[View 1: Model = world] The first view is that models capture things in the real world pretty well and that some models are pretty much representative. And of course this is true. You can add many things to the model, and you may well have. But if you think along this line, you start seeing the model as if it is the world. At some point you may become rather optimistic about modelling. Well, what I really mean to say is that you become naive: the model is fabulous. The model can help anyone with any problem only somewhat related to the original idea behind it. You don’t waste time worrying about the details, you sell the model to everyone listening, and you do so with great conviction. You may come to believe that the model is the truth.

[View 2: Model ≠ world] The second view is that the model can never represent the world adequately enough to really predict what is going on. And of course this is true. But if you think along this line, you can get pretty frustrated: the model is never good enough, because factor A is not in there, mechanism B is biased, etc. At one point you may become quite pessimistic about ‘the model’: will it help anyone anytime soon? You may come to the belief that the model is nonsense (and that modelling itself is nonsense).

As a modeller, you may encounter these views in your modelling journey: in how your model is perceived, in how your model is compared to other models, and in the questions you’re asked about your model. And it may be the case that you get stuck in one of these views yourself. You may not even be aware of it, but still behave accordingly.

Possible consequences

Let’s conceive the idea of having a business doing modelling: we are ambitious and successful! What might happen over time with our business and with our clients?

  • Your clients love your business – Clients can ask us any question and they will get a very precise answer back! Anytime we give a good result, a result that comes true in some sense, we are praised, and our reputation grows. Anytime we give a bad result, something that turns out quite different from what we’d expected, we can blame the particular circumstances which could not have been foreseen or argue that this result is basically out of the original scope. Our modesty makes our reputation grow! And it makes us proud!
  • Assets need protection – Over time, our model/business reputation becomes more and more important. You should ask us for any modelling job because we’ve modelled (this) for decades. Any question goes into our fabulous model that can Answer Any Question In A Minute (AAQIAM). Our models became patchworks because of questions that were not so easy to fit in. But obviously, as a whole, the model is great. More than great: it is the best! The models are our key assets: they need to be protected. In a board meeting we decide that we should not show the insides of our models anymore. We should keep them secret.
  • Modelling schools – Habits emerge of how our models are used, what kind of analysis we do, and which we don’t. Core assumptions that we always make with our model are accepted and forgotten. We get used to those assumptions, we won’t change them anyway and probably we can’t. It is not really needed to think about the consequences of those assumptions anyway. We stick to the basics, represent the results in a way that the client can use them, and mention in footnotes how much detail is underneath, and that some caution is warranted in interpreting the results. Other modelling schools may also emerge, but they really can’t deliver the precision/breadth of what we have been doing for decades, so they are not relevant, not really, anyway.
  • Distrusting all models – Another kind of people, typically not your clients, start distrusting the modelling business completely. They get upset in discussions: why worry about discussing the model details when there is always something missing anyway? And it is impossible to quantify anything, really. They decide that it is better to ignore the model geeks completely and just follow their own reasoning. It doesn’t matter that this reasoning can’t be backed up with facts (such as a modelled reality). They don’t believe it could be done anyway. So the problem is not their reasoning, it is the inability of quantitative science.

Here is the crisis

At this point, people stop debating the crucial elements in our models and the ambition for model innovation goes out of the window. I would say, we end up in a modelling crisis. At some point, decisions have to be made in the real world, and they can either be inspired by good modelling, by bad modelling, or not by modelling at all.

The way out of the modelling crisis

How can such a modelling crisis be resolved? First, we need to accept that the model ≠ world, so we don’t necessarily need to predict. We also need to accept that modelling can certainly be useful, for example when it helps to find clear and explicit reasoning/underpinning of an argument.

  • We should focus more on the problem that we really want to address and argue how modelling can actually contribute to a solution for that problem. This should result in better modelling questions, because modelling is a means, not an end. We should stop trying to outsource the thinking to a model.
  • Following from this point, we should be very explicit about the modelling purpose: in what way does the modelling contribute to solving the problem identified earlier? We have to be aware that different kinds of purposes lead to different styles of reasoning and, consequently, to different strengths and weaknesses in the modelling that we do. Consider the differences between prediction, explanation, theoretical exposition, description and illustration as types of modelling purpose (Edmonds 2017); more types are possible.
  • Following this point, we should accept the importance of creativity and of the process in modelling. Science is about reasoned, reproducible work. But, paradoxically, good science does not come from a linear, step-by-step approach. Accepting this, modelling can help both in the creative process, by exploring possible ideas and explicating an intuition, and in the justification and underpinning of a very particular line of reasoning. It is important not to mix these perspectives up. The modelling process is as relevant as the model outcome. In the end, the reasoning should be standalone and strong (also without the model). But you may have needed the model to find it.
  • We should adhere to better modelling practices and develop the tooling to accommodate them. For ABM, many successful developments are ongoing: we should be explicit and transparent about the assumptions we are making (e.g. the ODD protocol, Polhill et al. 2008). We should develop requirements and procedures for modelling studies with respect to how the analysis is performed, even if clients don’t ask for it (validity, robustness of findings, sensitivity of outcomes, analysis of uncertainties). For some sectors, such requirements have been developed. The discussion around practices and validation is prominent in ABM, where some ‘issues’ may be considered obvious (see for instance Heath, Hill, and Ciarallo 2009, and the efforts through CoMSES), but such questions should be asked for any type of model. In fact, we should share, debate, and work with all types of models that are already out there (again, such as the great efforts through CoMSES), and consider forms of multi-modelling to save time and effort and to benefit from the strengths of different model formalisms.
  • We should start looking for good examples: get inspired and share them. Personally, I like Traffic Basic from the NetLogo library: it does not predict where traffic jams will be, but it clearly shows the worth of slowing down earlier. Another may be the Limits to Growth model, irrespective of its predictive power.
  • We should start doing it better ourselves, so that we show others that it can be done!

References

Heath, B., Hill, R. and Ciarallo, F. (2009). A Survey of Agent-Based Modeling Practices (January 1998 to July 2008). Journal of Artificial Societies and Social Simulation 12(4):9. http://jasss.soc.surrey.ac.uk/12/4/9.html

Polhill, J. Gary, Dawn Parker, Daniel Brown, and Volker Grimm. (2008). Using the ODD Protocol for Describing Three Agent-Based Social Simulation Models of Land-Use Change. Journal of Artificial Societies and Social Simulation 11(2): 3.

Edmonds, B. (2017) Five modelling purposes, Centre for Policy Modelling Discussion Paper CPM-17-238, http://cfpm.org/discussionpapers/192/


Chappin, E.J.L. (2018) Escaping the modelling crisis. Review of Artificial Societies and Social Simulation, 12th October 2018. https://rofasss.org/2018/10/12/ec/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)