By David Hales and Jim Doran
Jim Doran is an ABM pioneer, specifically in applying ABM to social phenomena. He has been working on these ideas since the 1960s, and his work made a major contribution to establishing the area as it exists today.
In fact, Jim has made significant contributions to many areas related to computation, such as Artificial Intelligence (AI), Distributed AI (DAI) and Multi-Agent Systems (MAS).
I know Jim — he was my PhD supervisor (at the University of Essex), so I had regular meetings with him over a period of about four years. It is hard to capture both the depth and breadth of Jim’s approach. Basically, he thinks big. I mean really big! — yet plausibly and precisely. This is a very difficult trick to pull off. Believe me, I’ve tried.
He retired from Essex almost two decades ago but continues to work on a number of very innovative ABM-related projects that are discussed in the interview.
The interview was conducted over e-mail in August 2018. We did a couple of iterations and included references to the work mentioned.
According to your webpage at the University of Essex [1], your background was originally mathematics and then Artificial Intelligence (working with Donald Michie at Edinburgh). In those days AI was a very new area. I wonder if you could say a little about how you came to work with Michie and what kind of things you worked on?
Whilst reading Mathematics at Oxford I joined the University Archaeological Society (inspired by the TV archaeologist of the day, Sir Mortimer Wheeler), becoming a (lowest grade) digger and encountering some real archaeologists such as Dennis Britten, David Clarke and Roy Hodson. Then, at postgraduate level, I was lucky enough to come under the influence of a forward-thinking and quite distinguished biometrist, Norman T. J. Bailey, who at that time was using a small computer (an Elliott 803, I think it was) to simulate epidemics — i.e. a variety of computer simulation of social phenomena (Bailey 1967). One day, Bailey told me of this crazy but energetic Reader at Edinburgh University, Donald Michie, who was trying to program computers to play games and to display AI, and who was recruiting assistants. In due course I got a job as a Research Assistant / Junior Research Fellow in Michie’s group (the EPU, for Experimental Programming Unit). During the war Michie had worked with, and had been inspired by, Alan Turing (see Lee and Holtzman 1995) [2].
Given that this was the very early days of AI, what was it like working at the EPU at that time? Did you meet any other early AI researchers there?
Well, I remember plenty of energy, plenty of parties and visitors from all over including both the USSR (not easy at that time!) and the USA. The people I was working alongside – notably, but not only, Rod Burstall [3], (the late) Robin Popplestone [4], Andrew Ortony [5] – have typically had very successful academic research careers.
I notice that you wrote a paper with Michie in 1966, “Experiments with the graph traverser program”. Am I right that this is a very early implementation of a generalised search algorithm?
When I took up the research job in Edinburgh at the EPU, in 1964 I think, Donald Michie introduced me to the work by Arthur Samuel on a learning Checkers playing program (Samuel 1959) and proposed to me that I attempt to use Samuel’s rather successful ideas and heuristics to build a general problem solving program — as a rival to the existing if somewhat ineffective and pretentious Newell, Shaw and Simon GPS (Newell et al 1959). The Graph Traverser was the result – one of the first standardised heuristic search techniques and a significant contribution to the foundations of that branch of AI (Doran and Michie 1966) [6]. It’s relevant to ABM because cognition involves planning and AI planning systems often use heuristic search to create plans that when executed achieve desired goals.
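The Graph Traverser’s central idea, repeatedly expanding whichever known state a heuristic judges most promising, survives in modern best-first search. A minimal sketch of that idea (illustrative only, not Doran and Michie’s original program; the grid problem and all names here are invented for the example):

```python
import heapq

def best_first_search(start, is_goal, neighbours, heuristic):
    """Greedy best-first search: always expand the state the heuristic
    judges closest to a goal (the Graph Traverser's central idea).
    Returns a path from start to a goal state, or None."""
    frontier = [(heuristic(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None

# Toy problem: walk a grid from (0, 0) to (3, 4), moving only right
# or up, guided by Manhattan distance to the goal.
goal = (3, 4)
path = best_first_search(
    (0, 0),
    is_goal=lambda s: s == goal,
    neighbours=lambda s: [(s[0] + 1, s[1]), (s[0], s[1] + 1)],
    heuristic=lambda s: abs(goal[0] - s[0]) + abs(goal[1] - s[1]),
)
print(len(path) - 1)  # 7 moves
```

The same skeleton underlies AI planning: states become world descriptions, neighbours become applicable actions, and the returned path is a plan.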
Can you recall when you first became aware of and / or began to think about simulating social phenomena using computational agents?
I guess the answer to your question depends on the definition of “computational agent”. My definition of a “computational agent” (today!) is any locus of slightly human like decision-making or behaviour within a computational process. If there is more than one then we have a multi-agent system.
Given the broad context that brought me to the EPU it was inevitable that I would get to think about what is now called agent based modelling (ABM) of social systems – note that archaeology is all about social systems and their long term dynamics! Thus in my (rag bag!) postgraduate dissertation (1964), I briefly discussed how one might simulate on a computer the dynamics of the set of types of pottery (say) characteristic of a particular culture – thus an ABM of a particular type of social dynamics. By 1975 I was writing a critical review of past mathematical modelling and computer simulation in archaeology with prospects (chapter 11 of Doran and Hodson, 1975).
But I didn’t myself use the word “agent” in a publication until, I believe, 1985 in a chapter I contributed to the little book by Gilbert and Heath (1985). Earlier I tended to use the word “actor” with the same meaning. Of course, once Distributed AI emerged as a branch of AI, ABM too was bound to emerge.
Didn’t you write a paper once titled something like “experiments with a pleasure seeking ant in a grid world”? I ask this speculatively because I have some memory of it but can find no references to it on the web.
Yes. The title you are after is “Experiments with a pleasure seeking automaton” published in the volume Machine Intelligence 3 (edited by Michie from the EPU) in 1968. And there was a follow up paper in Machine Intelligence 4 in 1969 (Doran 1968; 1969). These early papers address the combination of heuristic search with planning, plan execution and action within a computational agent but, as you just remarked, they attracted very little attention.
You make an interesting point about how you, today, define a computational agent. Do you have any thoughts on how one would go about trying to identify “agents” in a computational, or other, process? It seems as humans we do this all the time, but could we formalise it in some way?
Yes. I have already had a go at this, in a very limited way. It really boils down to, given the specification of a complex system, searching through it for subsystems that have particular properties, e.g. that demonstrably have memory within their structure of what has happened to them. This is a matter of finding a consistent relationship between the content of the hypothetical agent’s hypothetical memory and the actual input-output history (within the containing complex system) of that hypothetical agent — but the searches get very large. See, for example, my 2002 paper “Agents and MAS in STaMs” (Doran 2002).
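One way to make the “memory” criterion concrete: given observation traces of a candidate subsystem, check that its observed internal state is a consistent function of its input history so far. A toy sketch (the trace format and function name are my own invention, not the formalism of Doran 2002):

```python
def has_consistent_memory(traces):
    """Each trace is a list of (input, state) observations of one
    candidate subsystem inside a larger system. The subsystem "has
    memory" in this toy sense if identical input histories never
    lead to different observed states."""
    state_for_history = {}
    for trace in traces:
        history = ()
        for inp, state in trace:
            history = history + (inp,)
            # setdefault records the first state seen for this history;
            # a later mismatch means the state is not history-determined.
            if state_for_history.setdefault(history, state) != state:
                return False
    return True

# Hypothetical examples: a counter whose state is the number of 1s
# seen so far, versus a subsystem whose state ignores its history.
counter = [[(1, 1), (0, 1), (1, 2)], [(1, 1), (0, 1), (0, 1)]]
amnesiac = [[(1, 1), (0, 0)], [(1, 1), (0, 1)]]
print(has_consistent_memory(counter), has_consistent_memory(amnesiac))
```

The hard part Doran points to is not this check but the search: enumerating candidate subsystems of a large specification blows up combinatorially.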
From your experience what would you say are the main benefits and limitations of working with agent-based models of social phenomena?
The great benefit is, I feel, precision — the same benefit that mathematical models bring to science generally — including the precise handling of cognitive factors. The computer supports the derivation of the precise consequences of precise assumptions way beyond the powers of the human brain. A downside is that precision often implies particularisation. One can state easily enough that “cooperation is usually beneficial in complex environments”, but demonstrating the truth or otherwise of this vague thesis in computational terms requires precise specification of “cooperation”, “complex” and “environment”, and one often ends up trying to prove many different results corresponding to the many different interpretations of the thesis.
You’ve produced a number of works that could be termed “computationally assisted thought experiments”, for example, your work on foreknowledge (Doran 1997) and collective misbelief (1998). What do you think makes for a “good” computational thought experiment?
If an experiment and its results cast light upon the properties of real social systems or of possible social systems (and what social systems are NOT possible?), then that has got to be good, if only by adding to our store of currently useless knowledge!
Perhaps I should clarify: I distinguish sharply between human societies (and other natural societies) and computational societies. The latter may be used as models of the former, but can be conceived, created and studied in their own right. If I build a couple of hundred or so learning and intercommunicating robots and let them play around in my back garden, perhaps they will evolve a type of society that has NEVER existed before… Or can it be proved that this is impossible?
The recently reissued classic book “Simulating Societies” (Gilbert and Doran 1994, 2018) contains contributions from several of the early researchers working in the area. Could you say a little about how this group came together?
Well – better to ask Nigel Gilbert this question – he organised the meeting that gave rise to the book, and although it’s quite likely I was involved in the choice of invitees, I have no memory. But note there were two main types of contributor – the mainstream social science oriented and the archaeologically oriented, corresponding to Nigel and myself respectively.
Looking back, what would you say have been the main successes in the area?
So many projects have been completed and are ongoing — I’m not going to try to pick out one or two as particularly successful. But getting the whole idea of social science ABM established and widely accepted as useful or potentially useful (along with AI, of course) is a massive achievement.
Looking forward, what do you think are the main challenges for the area?
There are many but I can give two broad challenges:
(i) Finding out how best to discover what levels of abstraction are both tractable and effective in particular modelling domains. At present I get the impression that the level of abstraction of a model is usually set by whatever seems natural or for which there is precedent – but that is too simple.
(ii) Stopping the use of AI and social ABM from being dominated by military and business applications that benefit only particular interests. I am quite pessimistic about this. It seems all too clear that when the very survival of nations, or totalitarian regimes, or massive global corporations is at stake, ethical and humanitarian restrictions and prohibitions, even those internationally agreed and promulgated by the UN, will likely be ignored. Compare, for example, the recent talk by Cristiano Castelfranchi entitled “For a Science-oriented AI and not Servant of the Business” (Castelfranchi 2018).
What are you currently thinking about?
Three things. Firstly, my personal retirement project, MoHAT — how best to use AI and ABM to help discover effective methods of achieving much needed global cooperation.
The obvious approach is: collect LOTS of global data, build a theoretically supported and plausible model, try to validate it and then try out different ways of enhancing cooperation. MoHAT, by contrast, emphasises:
(i) Finding a high level of abstraction for modelling which is effective but tractable.
(ii) Finding particular long time span global models by reference to fundamental boundary conditions, not by way of observations at particular times and places. This involves a massive search through possible combinations of basic model elements but computers are good at that — hence AI Heuristic Search is key.
(iii) Trying to overcome the ubiquitous reluctance of global organisational structures, e.g. nation states, fully to cooperate — by exploring, for example, what actions leading to enhanced global cooperation, if any, are available to one particular state.
Of course, any form of globalism is currently politically unpopular — MoHAT is swimming against the tide!
Full details of MoHAT (including some simple computer code) are in the corresponding project entry in my Research Gate profile (Doran 2018a).
Secondly, Gillian’s Hoop and how one assesses its plausibility as a “modern” metaphysical theory. Gillian’s Hoop is a somewhat wild speculation that one of my daughters came up with a few years ago: we are all avatars in a virtual world created by game players in a higher world who in fact are themselves avatars in a virtual world created by players in a yet higher world … with the upward chain of virtual worlds ultimately linking back to form a hoop! Think about that!
More generally I conjecture that metaphysical systems (e.g. the Roman Catholicism that I grew up with, Gillian’s Hoop, Iamblichus’ system [7], Homer’s) all emerge from the properties of our thought processes. The individual comes up with generalised beliefs and possibilities (e.g. Homer’s flying chariot) and these are socially propagated, revised and pulled together into coherent belief systems. This is little to do with what is there, much more to do with the processes that modify beliefs. This is not a new idea, of course, but it would be good to ground it in some computational modelling.
Again, there is a project description on Research Gate (Doran 2018b).
Finally, I’m thinking about planning and imagination and their interactions and consequences. I’ve put together a computational version of our basic subjective stream of thoughts (incorporating both directed and associative thinking) that can be used to address imagination and its uses. This is not as difficult to come up with as might at first appear. And then comes a conjecture — given ANY set of beliefs, concepts, memories etc in a particular representation system (cf. AI Knowledge Representation studies) it will be possible to define a (or a few) modification processes that bring about generalisations and imaginations – all needed for planning — which is all about deploying imaginations usefully.
In fact I am tempted to follow my nose and assert that:
Imagination is required for planning (itself required for survival in complex environments) and necessarily leads to “metaphysical” belief systems.
Might be a good place to stop – any further and I am really into fantasy land…
Notes
1. Archived copy of Jim Doran’s University of Essex homepage: https://bit.ly/2Pdk4Nf
2. Also see an online video of some of the interviews, including with Michie, used as a source for the Lee and Holtzman paper: https://youtu.be/6p3mhkNgRXs
3. https://en.wikipedia.org/wiki/Rod_Burstall
4. https://en.wikipedia.org/wiki/Robin_Popplestone
5. https://www.researchgate.net/profile/Andrew_Orton
6. See also discussion of the historical context of the Graph Traverser in Russell and Norvig (1995).
7. https://en.wikipedia.org/wiki/Iamblichus
References
Bailey, Norman T. J. (1967) The simulation of stochastic epidemics in two dimensions. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 4: Biology and Problems of Health, 237–257, University of California Press, Berkeley, Calif. https://bit.ly/2or7sqp
Castelfranchi, C. (2018) For a Science-oriented AI and not Servant of the Business. Powerpoint file available from the author on request at Research Gate: https://www.researchgate.net/profile/Cristiano_Castelfranchi
Doran, J.E. and Michie, D. (1966) Experiments with the Graph Traverser Program. Proceedings of the Royal Society A 294(1437): 235-259.
Doran, J.E. (1968) Experiments with a pleasure seeking automaton. In Machine Intelligence 3 (ed. D. Michie) Edinburgh University Press, pp 195-216.
Doran, J.E. (1969) Planning and generalization in an automaton-environment system. In Machine Intelligence 4 (eds. B. Meltzer and D. Michie) Edinburgh University Press. pp 433-454.
Doran, J.E. and Hodson, F.R. (1975) Mathematics and Computers in Archaeology. Edinburgh University Press [and Harvard University Press, 1976].
Doran, J.E. (1997) Foreknowledge in Artificial Societies. In: Conte R., Hegselmann R., Terna P. (eds) Simulating Social Phenomena. Lecture Notes in Economics and Mathematical Systems, vol 456. Springer, Berlin, Heidelberg. https://bit.ly/2Pf5Onv
Doran, J.E. (1998) Simulating Collective Misbelief. Journal of Artificial Societies and Social Simulation vol. 1, no. 1, http://jasss.soc.surrey.ac.uk/1/1/3.html
Doran, J.E. (2002) Agents and MAS in STaMs. In Foundations and Applications of Multi-Agent Systems: UKMAS Workshop 1996-2000, Selected Papers (eds. M d’Inverno, M Luck, M Fisher, C Preist), Springer Verlag, LNCS 2403, July 2002, pp. 131-151. https://bit.ly/2wsrHYG
Doran, J.E. (2018a) MoHAT — a new AI heuristic search based method of DISCOVERING and USING tractable and reliable agent-based computational models of human society. Research Gate Project: https://bit.ly/2lST35a
Doran, J.E. (2018b) An Investigation of Gillian’s HOOP: a speculation in computer games, virtual reality and METAPHYSICS. Research Gate Project: https://bit.ly/2C990zn
Gilbert, N. and Doran, J.E. eds. (2018) Simulating Societies: The Computer Simulation of Social Phenomena. Routledge Library Editions: Artificial Intelligence, Vol 6, Routledge: London and New York.
Gilbert, N. and Heath, C. (1985) Social Action and Artificial Intelligence. London: Gower.
Lee, J. and Holtzman, G. (1995) 50 Years after breaking the codes: interviews with two of the Bletchley Park scientists. IEEE Annals of the History of Computing, vol. 17, no. 1, pp. 32-43. https://ieeexplore.ieee.org/document/366512/
Newell, A.; Shaw, J.C.; Simon, H.A. (1959) Report on a general problem-solving program. Proceedings of the International Conference on Information Processing. pp. 256–264.
Russell, S. and Norvig, P. (1995) Artificial Intelligence: A Modern Approach. Prentice-Hall, First edition, pp. 86, 115-117.
Samuel, Arthur L. (1959) “Some Studies in Machine Learning Using the Game of Checkers”. IBM Journal of Research and Development. doi:10.1147/rd.441.0206.
Hales, D. and Doran, J. (2018) Agent-Based Modelling Pioneers: An Interview with Jim Doran. Review of Artificial Societies and Social Simulation, 4th September 2018. https://rofasss.org/2018/09/04/dh/
© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)