
Predicting Social Systems – a Challenge

By Bruce Edmonds, Gary Polhill and David Hales

(Part of the Prediction-Thread)

There is a lot of pressure on social scientists to predict. Not only is an ability to predict implicit in all requests to assess or optimise policy options before they are tried, but prediction is also the “gold standard” of science. However, there is a debate among modellers of complex social systems about whether this is possible to any meaningful extent. In this context, the aim of this paper is to issue the following challenge:

Are there any documented examples of models that predict useful aspects of complex social systems?

To do this the paper will:

  1. define prediction in a way that corresponds to what a wider audience might expect of it
  2. give some illustrative examples of prediction and non-prediction
  3. request examples where the successful prediction of social systems is claimed
  4. and outline the aspects on which these examples will be analysed

About Prediction

We start by defining prediction, taken from (Edmonds et al. 2019). This is a pragmatic definition designed to encapsulate common sense usage – what a wider public (e.g. policy makers or grant givers) might reasonably expect from “a prediction”.

By ‘prediction’, we mean the ability to reliably anticipate well-defined aspects of data that is not currently known to a useful degree of accuracy via computations using the model.

Let us clarify the language in this.

  • It has to be reliable. That is, one can rely upon its predictions when they are made – a model that predicts erratically and only occasionally gets it right is no help, since one does not know whether to believe any particular prediction. This usually means that (a) it has made successful predictions for several independent cases and (b) the conditions under which it works are (roughly) known.
  • What is predicted has to be unknown at the time of prediction. That is, the prediction has to be made before it is verified. Predicting known data (as when a model is checked on out-of-sample data) is not sufficient [1]. Nor is the practice of looking for phenomena that are consistent with the results of a model after they have been generated (since this ignores all the phenomena that are not consistent with the model). A toy sketch of a genuinely forward protocol is given after this list.
  • What is being predicted is well defined. That is, how to use the model to make a prediction about observed data is clear. An abstract model that is merely suggestive – one that appears to predict phenomena, but only in a vague and undefined manner, where one has to invent the mapping between model and data to make it work – may be useful as a way of thinking about phenomena, but this is different from empirical prediction.
  • Which aspects of the data are being predicted is left open. As Watts (2014) points out, prediction is not restricted to point numerical predictions of some measurable value but could concern a wider pattern. Examples include: a probabilistic prediction, a range of values, a negative prediction (this will not happen), or a second-order characteristic (such as the shape of a distribution or a correlation between variables). What is important is that (a) this is a useful characteristic to predict and (b) it can be checked by an independent actor. Thus, for example, when predicting a value, the accuracy required of that prediction depends on its use.
  • The prediction has to use the model in an essential manner. Claiming to predict something obviously inevitable, in a way that does not use the model, is insufficient – the model has to distinguish which of the possible outcomes is being predicted at the time.
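
As a toy illustration of the first two points (this sketch is ours, not taken from any of the models discussed; the stand-in “model” simply predicts the mean of its history), a genuinely forward protocol computes each prediction using only data known at that point, verifies it only once the later value arrives, and judges reliability over several independent cases rather than a single success:

    def predict_next(history):
        # Deliberately trivial stand-in model: predict the mean of what is known so far.
        return sum(history) / len(history)

    def forward_evaluation(series, window, tolerance):
        # Each prediction uses only series[:t], i.e. data available before series[t] is known.
        outcomes = []
        for t in range(window, len(series)):
            prediction = predict_next(series[:t])
            outcomes.append(abs(prediction - series[t]) <= tolerance)
        return outcomes

    series = [2.0, 2.1, 1.9, 2.2, 2.0, 5.0, 2.1]   # toy data
    print(forward_evaluation(series, window=3, tolerance=0.5))   # [True, True, False, True]

Fitting a model to this series after all of it is known, and then reporting the fit as “prediction”, would satisfy none of the criteria above.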

Thus, prediction is different from other kinds of scientific/empirical uses, such as description and explanation (Edmonds et al. 2019). Some modellers use “prediction” to mean any output from a model, regardless of its relationship to any observation of what is being modelled [2]. Others use “prediction” for any empirical fitting of data, regardless of whether that data is known beforehand. However, here we wish to be clearer and avoid any “post-truth” softening of the meaning of the word, for two reasons: (a) distinguishing different kinds of model use is crucial in matters of model checking or validation, and (b) these “softer” kinds of empirical purpose will simply confuse the wider public when we talk to them about “prediction”. One suspects that modellers have accepted these other meanings because it then allows them to claim they can predict (Edmonds 2017).

Some Examples

Nate Silver and his team aim to predict future social phenomena, such as the results of elections and the outcomes of sports competitions. He correctly predicted the result in all 50 states in the 2012 US presidential election (Obama’s re-election) before it happened. This is a data-hungry approach, which involves the long-term development of simulations that carefully see what can be inferred from the available data, with repeated trial and error. The forecasts are probabilistic and repeated many times. As well as making predictions, his unit tries to establish the level of uncertainty in those predictions – being honest about the probability of those predictions coming about given the likely levels of error and bias in the data. These models are not agent-based but mostly statistical in nature, so it is debatable whether the phenomena are being treated as complex systems – the approach certainly does not use any theory from complexity science. His book (Silver 2012) describes his approach. Post hoc analysis of predictions – explaining why a prediction worked or not – is kept distinct from the predictive models themselves; this analysis may inform changes to the predictive model but is not then incorporated into it. The analysis is thus kept independent of the predictive model so it can be an effective check.
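
One standard way of checking probabilistic forecasts of this kind (offered purely as a generic illustration, not as a description of Silver’s actual code) is a proper scoring rule such as the Brier score, which rewards honest, well-calibrated probabilities over confident guessing:

    def brier_score(forecast_probs, outcomes):
        # forecast_probs: predicted probabilities that each event happens (0..1)
        # outcomes: 1 if the event actually happened, 0 if it did not
        return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

    # Toy example: four election-style forecasts and what actually happened.
    print(brier_score([0.9, 0.7, 0.4, 0.8], [1, 1, 0, 1]))   # 0.075 (lower is better)

Repeating such checks over many independent forecasts is what allows the reliability of the method, rather than of any single prediction, to be assessed.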

Many models in economics and ecology claim to “predict” but, on inspection, this only means there is a fit to some empirical data. For example, Meese & Rogoff (1983) looked at 40 econometric models whose authors claimed to be predicting a time series. However, 37 out of the 40 models failed completely when tested on newly available data from the same time series they claimed to predict. Clearly, although presented as predictive models, they could not predict unknown data. Although we do not know for sure, presumably these models had been (explicitly or implicitly) fitted to the out-of-sample data, because the out-of-sample data was already known to the modeller. That is, if a model failed to fit the out-of-sample data when it was tested, it was adjusted until it did, or alternatively, only those models that fitted the out-of-sample data were published.

The Challenge

The challenge is envisioned as happening like this.

  1. We publicise this paper, requesting that people send us examples of prediction or near-prediction of complex social systems, with pointers to the appropriate documentation.
  2. We collect these and analyse them according to the characteristics and questions described below.
  3. We will post some interim results in January 2020 [3], in order to prompt more examples and to stimulate discussion. The final deadline for examples is the end of March 2020.
  4. We will publish the list of all the examples sent to us on the web, and present our summary and conclusions at Social Simulation 2020 in Milan and have a discussion there about the nature and prospects for the prediction of complex social systems. Anyone who contributed an example will be invited to be a co-author if they wish to be so-named.

How suggestions will be judged

For each suggestion, a number of answers will be sought – namely to the following questions:

  • What are the papers or documents that describe the model?
  • Is there an explicit claim that the model can predict (as opposed to a claim that it might do so in the future)?
  • What kind of characteristics are being predicted (number, probabilistic, range…)?
  • Is there evidence of a prediction being made before the prediction was verified?
  • Is there evidence of the model being used for a series of independent predictions?
  • Were any of the predictions verified by a team that is independent of the one that made the prediction?
  • Is there evidence of the same team or similar models making failed predictions?
  • To what extent did the model need extensive calibration/adjustment before the prediction?
  • What role does theory play (if any) in the model?
  • Are the conditions under which predictive ability is claimed described?

Of course, negative answers to any of the above about a particular model do not mean that the model cannot predict. What we are assessing is the evidence that a model can predict something meaningful about complex social systems. Silver (2012) describes the method by which his team attempts prediction, but this method might be different from that described in most theory-based academic papers.

Possible Outcomes

This exercise might shed some light on some interesting questions, such as:

  • What kind of prediction of complex social systems has been attempted?
  • Are there any examples where the reliable prediction of complex social systems has been achieved?
  • Are there certain kinds of social phenomena which seem to be more amenable to prediction than others?
  • Does aiming to predict with a model entail any difference in method compared with projects with other aims?
  • Are there any commonalities among the projects that achieve reliable prediction?
  • Is there anything we could (collectively) do that would encourage or document good prediction?

It might well be that whether prediction is achievable depends on exactly what is meant by the word.

Acknowledgements

This paper resulted from a “lively discussion” after Gary’s talk about prediction (Polhill et al. 2019) at the Social Simulation conference in Mainz. Many thanks to all those who joined in. Of course, prior to this we have had many discussions about prediction. These have included Gary’s previous attempt at a prediction competition (Polhill 2018) and Scott Moss’s arguments about prediction in economics (which have many parallels with the debate here).

Notes

[1] This is sufficient for other empirical purposes, such as explanation (Edmonds et al. 2019)

[2] Confusingly, they sometimes use the word “forecasting” for what we mean by prediction here.

[3] Assuming we have any submitted examples to talk about

References

Edmonds, B. & Adoha, L. (2019) Using agent-based simulation to inform policy – what could possibly go wrong? In Davidsson, P. & Verhagen, H. (Eds.) (2019). Multi-Agent-Based Simulation XIX, 19th International Workshop, MABS 2018, Stockholm, Sweden, July 14, 2018, Revised Selected Papers. Lecture Notes in AI, 11463, Springer, pp. 1-16. DOI: 10.1007/978-3-030-22270-3_1 (see also http://cfpm.org/discussionpapers/236)

Edmonds, B. (2017) The post-truth drift in social simulation. Social Simulation Conference (SSC2017), Dublin, Ireland. (http://cfpm.org/discussionpapers/195)

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H. & Squazzoni, F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Grimm V, Revilla E, Berger U, Jeltsch F, Mooij WM, Railsback SF, Thulke H-H, Weiner J, Wiegand T, DeAngelis DL (2005) Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science 310: 987-991.

Meese, R.A. & Rogoff, K. (1983) Empirical Exchange Rate models of the Seventies – do they fit out of sample? Journal of International Economics, 14:3-24.

Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/

Polhill, G., Hare, H., Anzola, D., Bauermann, T., French, T., Post, H. and Salt, D. (2019) Using ABMs for prediction: Two thought experiments and a workshop. Social Simulation 2019, Mainz.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Thorngate, W. & Edmonds, B. (2013) Measuring simulation-observation fit: An introduction to ordinal pattern analysis. Journal of Artificial Societies and Social Simulation, 16(2):14. http://jasss.soc.surrey.ac.uk/16/2/4.html

Watts, D. J. (2014). Common Sense and Sociological Explanations. American Journal of Sociology, 120(2), 313-351.


Edmonds, B., Polhill, G. and Hales, D. (2019) Predicting Social Systems – a Challenge. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2018/11/04/predicting-social-systems-a-challenge


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Vision for a more rigorous “replication first” modelling journal

By David Hales

A proposal for yet another journal? My first reaction to any such suggestion is to argue that we already have far too many journals. However, hear me out.

My vision is for a modelling journal that is far more rigorous than those we currently have. It would be aimed at work in which a significant aspect of the result is derived, in an empirical way, from the output of a complex-systems-type computer model.

I propose that the journal would incorporate, as part of the reviewing process, at least one replication of the model by an independent reviewer. Hence models would be verified as independently replicated before being published.

In addition the published article would include an appendix detailing the issues raised during the replication process.

Carrying out such an exercise would almost certainly lead to clarifications of the original article, making it easier for others to replicate and giving more confidence in the results. Both readers and authors would gain significantly from this.

I would be much more willing to take modelling articles seriously if I knew they had already been independently replicated.

Here is a question that immediately springs to mind: replicating a model is a time-consuming and costly business requiring significant expertise. Why would a reviewer do this?

One possible solution would be to provide an incentive in the following form. Final articles published in the journal would include the replicators as co-authors of the paper – specifically credited with the independent replication work that they write up in the appendix.

This would mean that good, clear and interesting initial articles would be desirable to replicate since the reviewer / replicator would obtain citations.

This could be a good task for an able graduate student allowing them to gain experience, contacts and citations.

Why would people submit good work to such a journal? This is not so easy to answer. It would almost certainly mean more work from their perspective and a time delay (since replication would almost certainly take longer than traditional review). However, there is the benefit of actually getting a replication of their model and producing a final article that others would be able to engage with more easily.

Also, I think it would be necessary, given the above aspects, to put quite a high bar on what is accepted for review / replication in the first place. Articles reviewed would have to present significant and new results in areas of fairly wide interest; hence incremental or highly specific models would be ruled out. Articles that did not contain enough detail to even attempt a replication would also be rejected on that basis. Hence one can envisage a two-stage review process where the editors decide if the submitted paper is “right” for a full replication review before soliciting replications.

My vision is of a low-output, high-quality, high-initial-rejection journal, perhaps publishing three articles every six months. Ideally this would support a reputation for high quality over time.


Hales, D. (2018) Vision for a more rigorous “replication first” modelling journal. Review of Artificial Societies and Social Simulation, 5th November 2018. https://rofasss.org/2018/11/05/dh/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Agent-Based Modelling Pioneers: An Interview with Jim Doran

By David Hales and Jim Doran

Jim Doran is an ABM pioneer, specifically in applying ABM to social phenomena. He has been working on these ideas since the 1960s. His work made a major contribution to establishing the area as it exists today.

In fact Jim has made significant contributions in many areas related to computation such as Artificial Intelligence (AI), Distributed AI (DAI) and Multi-agent Systems (MAS).

I know Jim — he was my PhD supervisor (at the University of Essex) so I had regular meetings with him over a period of about four years. It is hard to capture both the depth and breadth of Jim’s approach. Basically he thinks big. I mean really big! — yet plausibly and precisely. This is a very difficult trick to pull off. Believe me I’ve tried.

He retired from Essex almost two decades ago but continues to work on a number of very innovative ABM related projects that are discussed in the interview.

The interview was conducted over e-mail in August. We did a couple of iterations and included references to the work mentioned.


According to your webpage at the University of Essex [1], your background was originally mathematics and then Artificial Intelligence (working with Donald Michie at Edinburgh). In those days AI was a very new area. I wonder if you could say a little about how you came to work with Michie and what kind of things you worked on?

Whilst reading Mathematics at Oxford, I both joined the University Archaeological Society (inspired by the TV archaeologist of the day, Sir Mortimer Wheeler), becoming a (lowest grade) digger and encountering some real archaeologists like Dennis Britten, David Clarke and Roy Hodson, and also, at postgraduate level, was lucky enough to come under the influence of a forward-thinking and quite distinguished biometrist, Norman T. J. Bailey, who at that time was using a small computer (an Elliott 803, I think it was) to simulate epidemics — i.e. a variety of computer simulation of social phenomena (Bailey 1967). One day, Bailey told me of this crazy but energetic Reader at Edinburgh University, Donald Michie, who was trying to program computers to play games and to display AI, and who was recruiting assistants. In due course I got a job as a Research Assistant / Junior Research Fellow in Michie’s group (the EPU, for Experimental Programming Unit). During the war Michie had worked with and had been inspired by Alan Turing (see: Lee and Holtzman 1995) [2].

Given this was the very early days of AI, what was it like working at the EPU at that time? Did you meet any other early AI researchers there?

Well, I remember plenty of energy, plenty of parties and visitors from all over including both the USSR (not easy at that time!) and the USA. The people I was working alongside – notably, but not only, Rod Burstall [3], (the late) Robin Popplestone [4], Andrew Ortony [5] – have typically had very successful academic research careers.

I notice that you wrote a paper with Michie in 1966, “Experiments with the graph traverser program”. Am I right that this is a very early implementation of a generalised search algorithm?

When I took up the research job in Edinburgh at the EPU, in 1964 I think, Donald Michie introduced me to the work by Arthur Samuel on a learning Checkers-playing program (Samuel 1959) and proposed to me that I attempt to use Samuel’s rather successful ideas and heuristics to build a general problem-solving program — as a rival to the existing if somewhat ineffective and pretentious Newell, Shaw and Simon GPS (Newell et al. 1959). The Graph Traverser was the result – one of the first standardised heuristic search techniques and a significant contribution to the foundations of that branch of AI (Doran and Michie 1966) [6]. It’s relevant to ABM because cognition involves planning, and AI planning systems often use heuristic search to create plans that, when executed, achieve desired goals.
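
As a small modern illustration of the general idea (a sketch only, not a reconstruction of the 1966 program), a best-first search repeatedly expands whichever frontier node a heuristic scores as most promising:

    import heapq

    def best_first_search(start, is_goal, successors, heuristic):
        # Expand whichever frontier node the heuristic scores as most promising.
        frontier = [(heuristic(start), start, [start])]
        seen = {start}
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if is_goal(node):
                return path
            for nxt in successors(node):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
        return None

    # Toy use: reach 10 from 0 by +1/+2 steps, guided by distance to the goal.
    print(best_first_search(0, lambda n: n == 10,
                            lambda n: [n + 1, n + 2],
                            lambda n: abs(10 - n)))   # [0, 2, 4, 6, 8, 10]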

Can you recall when you first became aware of and / or began to think about simulating social phenomena using computational agents?

I guess the answer to your question depends on the definition of “computational agent”. My definition of a “computational agent” (today!) is any locus of slightly human like decision-making or behaviour within a computational process. If there is more than one then we have a multi-agent system.

Given the broad context that brought me to the EPU it was inevitable that I would get to think about what is now called agent based modelling (ABM) of social systems – note that archaeology is all about social systems and their long term dynamics! Thus in my (rag bag!) postgraduate dissertation (1964), I briefly discussed how one might simulate on a computer the dynamics of the set of types of pottery (say) characteristic of a particular culture – thus an ABM of a particular type of social dynamics. By 1975 I was writing a critical review of past mathematical modelling and computer simulation in archaeology with prospects (chapter 11 of Doran and Hodson, 1975).

But I didn’t myself use the word “agent” in a publication until, I believe, 1985 in a chapter I contributed to the little book by Gilbert and Heath (1985). Earlier I tended to use the word “actor” with the same meaning. Of course, once Distributed AI emerged as a branch of AI, ABM too was bound to emerge.

Didn’t you write a paper once titled something like “experiments with a pleasure seeking ant in a grid world”? I ask this speculatively because I have some memory of it but can find no references to it on the web.

Yes. The title you are after is “Experiments with a pleasure seeking automaton” published in the volume Machine Intelligence 3 (edited by Michie from the EPU) in 1968. And there was a follow up paper in Machine Intelligence 4 in 1969 (Doran 1968; 1969). These early papers address the combination of heuristic search with planning, plan execution and action within a computational agent but, as you just remarked, they attracted very little attention.

You make an interesting point about how you, today, define a computational agent. Do you have any thoughts on how one would go about trying to identify “agents” in a computational, or other, process? It seems as humans we do this all the time, but could we formalise it in some way?

Yes. I have already had a go at this, in a very limited way. It really boils down to, given the specification of a complex system, searching thru it for subsystems that have particular properties e.g. that demonstrably have memory within their structure of what has happened to them. This is a matter of finding a consistent relationship between the content of the hypothetical agent’s hypothetical memory and the actual input-output history (within the containing complex system) of that hypothetical agent – but the searches get very large. See, for example, my 2002 paper “Agents and MAS in STaMs” (Doran 2002).
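
Purely as an illustration of the kind of consistency test described here (a sketch of ours, not the procedure in Doran 2002), one can record a candidate subsystem’s input history and internal state as it runs inside the larger system, and ask whether identical histories ever map to different states:

    def memory_is_consistent(trace):
        # trace: list of (input_history_so_far, internal_state) pairs recorded while
        # the candidate subsystem ran inside the containing complex system.
        # Consistent memory means identical input histories never map to different states.
        seen = {}
        for history, state in trace:
            key = tuple(history)
            if key in seen and seen[key] != state:
                return False   # same past, different "memory": not consistent
            seen[key] = state
        return True

    # Toy check: a candidate whose internal state counts the 1s it has received so far.
    trace = [((1,), 1), ((1, 0), 1), ((1, 0, 1), 2)]
    print(memory_is_consistent(trace))   # True

The search becomes large because, in general, every subset of the system’s components is a candidate subsystem, and every candidate has to be tested against its whole input-output history.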

From your experience what would you say are the main benefits and limitations of working with agent-based models of social phenomena?

The great benefit is, I feel, precision – the same benefit that mathematical models bring to science generally – including the precise handling of cognitive factors. The computer supports the derivation of the precise consequences of precise assumptions way beyond the powers of the human brain. A downside is that precision often implies particularisation. One can state easily enough that “cooperation is usually beneficial in complex environments”, but demonstrating the truth or otherwise of this vague thesis in computational terms requires precise specification of “cooperation”, “complex” and “environment”, and one often ends up trying to prove many different results corresponding to the many different interpretations of the thesis.

You’ve produced a number of works that could be termed “computationally assisted thought experiments”, for example, your work on foreknowledge (Doran 1997) and collective misbelief (Doran 1998). What do you think makes for a “good” computational thought experiment?

If an experiment and its results cast light upon the properties of real social systems or of possible social systems (and what social systems are NOT possible?), then that has got to be good, if only by adding to our store of currently useless knowledge!

Perhaps I should clarify: I distinguish sharply between human societies (and other natural societies) and computational societies. The latter may be used as models of the former, but can be conceived, created and studied in their own right. If I build a couple of hundred or so learning and intercommunicating robots and let them play around in my back garden, perhaps they will evolve a type of society that has NEVER existed before… Or can it be proved that this is impossible?

The recently reissued classic book “Simulating Societies” (Gilbert and Doran 1994, 2018) contains contributions from several of the early researchers working in the area. Could you say a little about how this group came together?

Well – better to ask Nigel Gilbert this question – he organised the meeting that gave rise to the book, and although it’s quite likely I was involved in the choice of invitees, I have no memory. But note there were two main types of contributor – the mainstream social science oriented and the archaeologically oriented, corresponding to Nigel and myself respectively.

Looking back, what would you say have been the main successes in the area?

So many projects have been completed and are ongoing — I’m not going to try to pick out one or two as particularly successful. But getting the whole idea of social science ABM established and widely accepted as useful or potentially useful (along with AI, of course) is a massive achievement.

Looking forward, what do you think are the main challenges for the area?

There are many but I can give two broad challenges:

(i) Finding out how best to discover what levels of abstraction are both tractable and effective in particular modelling domains. At present I get the impression that the level of abstraction of a model is usually set by whatever seems natural or for which there is precedent – but that is too simple.

(ii) Stopping the use of AI and social ABM being dominated by military and business applications that benefit only particular interests. I am quite pessimistic about this. It seems all too clear that when the very survival of nations, or totalitarian regimes, or massive global corporations is at stake, ethical and humanitarian restrictions and prohibitions, even those internationally agreed and promulgated by the UN, will likely be ignored. Compare, for example, the recent talk by Cristiano Castelfranchi entitled “For a Science-oriented AI and not Servant of the Business” (Castelfranchi 2018).

What are you currently thinking about?

Three things. Firstly, my personal retirement project, MoHAT — how best to use AI and ABM to help discover effective methods of achieving much needed global cooperation.

The obvious approach is: collect LOTS of global data, build a theoretically supported and plausible model, try to validate it and then try out different ways of enhancing cooperation. MoHAT, by contrast, emphasises:

(i) Finding a high level of abstraction for modelling which is effective but tractable.

(ii) Finding particular long time span global models by reference to fundamental boundary conditions, not by way of observations at particular times and places. This involves a massive search through possible combinations of basic model elements but computers are good at that — hence AI Heuristic Search is key.

(iii) Trying to overcome the ubiquitous reluctance of global organisational structures, e.g. nation states, fully to cooperate – by exploring, for example what actions leading to enhanced global cooperation, if any, are available to one particular state.

Of course, any form of globalism is currently politically unpopular — MoHAT is swimming against the tide!

Full details of MoHAT (including some simple computer code) are in the corresponding project entry in my Research Gate profile (Doran 2018a).

Secondly, Gillian’s Hoop and how one assesses its plausibility as a “modern” metaphysical theory. Gillian’s Hoop is a somewhat wild speculation that one of my daughters came up with a few years ago: we are all avatars in a virtual world created by game players in a higher world who in fact are themselves avatars in a virtual world created by players in a yet higher world … with the upward chain of virtual worlds ultimately linking back to form a hoop! Think about that!

More generally I conjecture that metaphysical systems (e.g. the Roman Catholicism that I grew up with, Gillian’s Hoop, Iamblichus’ system [7], Homer’s) all emerge from the properties of our thought processes. The individual comes up with generalised beliefs and possibilities (e.g. Homer’s flying chariot) and these are socially propagated, revised and pulled together into coherent belief systems. This has little to do with what is there, and much more to do with the processes that modify beliefs. This is not a new idea, of course, but it would be good to ground it in some computational modelling.

Again, there is a project description on Research Gate (Doran 2018b).

Finally, I’m thinking about planning and imagination and their interactions and consequences. I’ve put together a computational version of our basic subjective stream of thoughts (incorporating both directed and associative thinking) that can be used to address imagination and its uses. This is not as difficult to come up with as might at first appear. And then comes a conjecture — given ANY set of beliefs, concepts, memories etc in a particular representation system (cf. AI Knowledge Representation studies) it will be possible to define a (or a few) modification processes that bring about generalisations and imaginations – all needed for planning — which is all about deploying imaginations usefully.

In fact I am tempted to follow my nose and assert that:

Imagination is required for planning (itself required for survival in complex environments) and necessarily leads to “metaphysical” belief systems

Might be a good place to stop – any further and I am really into fantasy land…

Notes

  1. Archived copy of Jim Doran’s University of Essex homepage: https://bit.ly/2Pdk4Nf
  2. Also see an online video of some of the interviews, including with Michie, used as a source for the Lee and Holtzman paper: https://youtu.be/6p3mhkNgRXs
  3. https://en.wikipedia.org/wiki/Rod_Burstall
  4. https://en.wikipedia.org/wiki/Robin_Popplestone
  5. https://www.researchgate.net/profile/Andrew_Orton
  6. See also discussion of the historical context of the Graph Traverser in Russell and Norvig (1995).
  7. https://en.wikipedia.org/wiki/Iamblichus

References

Bailey, Norman T. J. (1967) The simulation of stochastic epidemics in two dimensions. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 4: Biology and Problems of Health, 237–257, University of California Press, Berkeley, Calif. https://bit.ly/2or7sqp

Castelfranchi, C. (2018) For a Science-oriented AI and not Servant of the Business. Powerpoint file available from the author on request at Research Gate: https://www.researchgate.net/profile/Cristiano_Castelfranchi

Doran, J.E. and Michie, D. (1966) Experiments with the Graph Traverser Program. Proceedings of The Royal Society A, 294(1437):235-259.

Doran, J.E. (1968) Experiments with a pleasure seeking automaton. In Machine Intelligence 3 (ed. D. Michie) Edinburgh University Press, pp 195-216.

Doran, J.E. (1969) Planning and generalization in an automaton-environment system. In Machine Intelligence 4 (eds. B. Meltzer and D. Michie) Edinburgh University Press. pp 433-454.

Doran, J.E. and Hodson, F.R. (1975) Mathematics and Computers in Archaeology. Edinburgh University Press, 1975 [and Harvard University Press, 1976]

Doran, J.E. (1997) Foreknowledge in Artificial Societies. In: Conte R., Hegselmann R., Terna P. (eds) Simulating Social Phenomena. Lecture Notes in Economics and Mathematical Systems, vol 456. Springer, Berlin, Heidelberg. https://bit.ly/2Pf5Onv

Doran, J.E. (1998) Simulating Collective Misbelief. Journal of Artificial Societies and Social Simulation vol. 1, no. 1, http://jasss.soc.surrey.ac.uk/1/1/3.html

Doran, J.E. (2002) Agents and MAS in STaMs. In Foundations and Applications of Multi-Agent Systems: UKMAS Workshop 1996-2000, Selected Papers (eds. M d’Inverno, M Luck, M Fisher, C Preist), Springer Verlag, LNCS 2403, July 2002, pp. 131-151. https://bit.ly/2wsrHYG

Doran, J.E. (2018a) MoHAT — a new AI heuristic search based method of DISCOVERING and USING tractable and reliable agent-based computational models of human society. Research Gate Project: https://bit.ly/2lST35a

Doran, J.E. (2018b) An Investigation of Gillian’s HOOP: a speculation in computer games, virtual reality and METAPHYSICS. Research Gate Project: https://bit.ly/2C990zn

Gilbert, N. and Doran, J.E. eds. (2018) Simulating Societies: The Computer Simulation of Social Phenomena. Routledge Library Editions: Artificial Intelligence, Vol 6, Routledge: London and New York.

Gilbert, N. and Heath, C. (1985) Social Action and Artificial Intelligence. London: Gower.

Lee, J. and Holtzman, G. (1995) 50 Years after breaking the codes: interviews with two of the Bletchley Park scientists. IEEE Annals of the History of Computing, vol. 17, no. 1, pp. 32-43. https://ieeexplore.ieee.org/document/366512/

Newell, A.; Shaw, J.C.; Simon, H.A. (1959) Report on a general problem-solving program. Proceedings of the International Conference on Information Processing. pp. 256–264.

Russell, S. and Norvig, P. (1995) Artificial Intelligence: A Modern Approach. Prentice-Hall, First edition, pp. 86, 115-117.

Samuel, Arthur L. (1959) “Some Studies in Machine Learning Using the Game of Checkers”. IBM Journal of Research and Development. doi:10.1147/rd.441.0206.


Hales, D. and Doran, J. (2018) Agent-Based Modelling Pioneers: An Interview with Jim Doran. Review of Artificial Societies and Social Simulation, 4th September 2018. https://rofasss.org/2018/09/04/dh/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)