Tag Archives: ABM

Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM

By Sebastian Achter, Melania Borit, Edmund Chattoe-Brown, Christiane Palaretti and Peer-Olaf Siebers

The initiative presented below arose from a Lorentz Center workshop on Integrating Qualitative and Quantitative Evidence using Social Simulation (8-12 April 2019, Leiden, the Netherlands). At the beginning of this workshop, the attendees divided themselves into teams aiming to work on specific challenges within the broad domain of the workshop topic. Our team took up the challenge of looking at “Rigour, Transparency, and Reuse”. The aim that emerged from our initial discussions was to create a framework for augmenting rigour and transparency (RAT) of data use in ABM when designing, analysing, and publishing such models.

One element of the framework that the group worked on was a roadmap of the modelling process in ABM, with particular reference to the use of different kinds of data. This roadmap was used to generate the second element of the framework: a protocol consisting of a set of questions which, if answered by the modeller, would ensure that the published model is as rigorous and transparent in its data use as it needs to be for the reader to understand and reproduce it.

The group (which had diverse modelling approaches and spanned a number of disciplines) recognised the challenges of this approach and much of the week was spent examining cases and defining terms so that the approach did not assume one particular kind of theory, one particular aim of modelling, and so on. To this end, we intend that the framework should be thoroughly tested against real research to ensure its general applicability and ease of use.

The team was also very keen not to “reinvent the wheel”, but to try to develop the RAT approach (in connection with data use) to augment and “join up” existing protocols or documentation standards for specific parts of the modelling process. For example, the ODD protocol (Grimm et al. 2010) and its variants are generally accepted as the established way of documenting ABM, but they do not require rigorous documentation or justification of the data used in the modelling process.

The plan to move forward with the development of the framework is organised around three journal articles and associated dissemination activities:

  • A literature review of best (data use) documentation and practice in other disciplines and research methods (e.g. PRISMA – Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
  • A literature review of available documentation tools in ABM (e.g. ODD and its variants, DOE, the “Info” pane of NetLogo, EABSS)
  • An initial statement of the goals of RAT, the roadmap, the protocol and the process of testing these resources for usability and effectiveness
  • A presentation, poster, and round table at SSC 2019 (Mainz)

We would appreciate suggestions for items that should be included in the literature reviews, “beta testers” and critical readers for the roadmap and protocol (from as many disciplines and modelling approaches as possible), reactions (whether positive or negative) to the initiative itself (including joining it!) and participation in the various activities we plan at Mainz. If you are interested in any of these roles, please email Melania Borit (melania.borit@uit.no).

Acknowledgements

Chattoe-Brown’s contribution to this research is funded by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1) funded by ESRC via ORA Round 5 (PI: Professor Bruce Edmonds, Centre for Policy Modelling, Manchester Metropolitan University: https://gtr.ukri.org/projects?ref=ES%2FS015159%2F1).

References

Grimm, V., Berger, U., DeAngelis, D. L., Polhill, J. G., Giske, J. and Railsback, S. F. (2010) ‘The ODD Protocol: A Review and First Update’, Ecological Modelling, 221(23):2760–2768. doi:10.1016/j.ecolmodel.2010.08.019


Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. and Siebers, P.-O. (2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Escaping the modelling crisis

By Emile Chappin

Let me explain something I call the ‘modelling crisis’. It is something that many modellers encounter in one way or another. By being aware of it we may resolve such a crisis, avoid frustration, and, hopefully, save the world from some bad modelling.

Views on modelling

I first present two views on modelling. Bear with me!

[View 1: Model = world] The first view is that models capture things in the real world pretty well and some models are pretty much representative. And of course this is true. You can add many things to the model, and you may well have done so. But if you think along this line, you start seeing the model as if it were the world. At some point you may become rather optimistic about modelling. Well, I really mean to say, you become naive: the model is fabulous. The model can help anyone with any problem only somewhat related to the original idea behind this model. You don’t waste time worrying about the details, you sell the model to everyone listening, and you’re quite convincing in the way you do this. You may come to believe that the model is the truth.

[View 2: Model ≠ world] The second view is that the model can never represent the world adequately enough to really predict what is going on. And of course this is true. But if you think along this line, you can get pretty frustrated: the model is never good enough, because factor A is not in there, mechanism B is biased, etc. At one point you may become quite pessimistic about ‘the model’: will it help anyone anytime soon? You may come to the belief that the model is nonsense (and that modelling itself is nonsense).

As a modeller, you may encounter these views in your modelling journey: in how your model is perceived, in how your model is compared to other models, and in the questions you’re asked about your model. It may also be the case that you get stuck in one of these views yourself. You may not even be aware of it, but you might still behave accordingly.

Possible consequences

Let’s entertain the idea of running a modelling business: we are ambitious and successful! What might happen over time to our business and to our clients?

  • Your clients love your business – Clients can ask us any question and they will get a very precise answer back! Anytime we give a good result, a result that comes true in some sense, we are praised, and our reputation grows. Anytime we give a bad result, something that turns out quite different from what we’d expected, we can blame the particular circumstances which could not have been foreseen or argue that this result is basically out of the original scope. Our modesty makes our reputation grow! And it makes us proud!
  • Assets need protection – Over time, our model/business reputation becomes more and more important. You should ask us for any modelling job because we’ve modelled (this) for decades. Any question goes into our fabulous model that can Answer Any Question In A Minute (AAQIAM). Our models became patchworks because of questions that were not so easy to fit in. But obviously, as a whole, the model is great. More than great: it is the best! The models are our key assets: they need to be protected. In a board meeting we decide that we should not show the insides of our models anymore. We should keep them secret.
  • Modelling schools – Habits emerge of how our models are used, what kind of analysis we do, and which we don’t. Core assumptions that we always make with our model are accepted and forgotten. We get used to those assumptions; we won’t change them anyway and probably we can’t. There is no real need to think about the consequences of those assumptions anyway. We stick to the basics, present the results in a way that the client can use, and mention in footnotes how much detail is underneath and that some caution is warranted in interpreting the results. Other modelling schools may also emerge, but they really can’t deliver the precision/breadth of what we have been doing for decades, so they are not relevant, not really, anyway.
  • Distrusting all models – Another kind of people, typically not your clients, start distrusting the modelling business completely. They get upset in discussions: why worry about discussing the model details when there is always something missing anyway? And it is impossible to quantify anything, really. They decide that it is better to ignore model geeks completely and just follow their own reasoning. It doesn’t matter that this reasoning can’t be backed up with facts (such as a modelled reality). They don’t believe that it could be done anyway. So the problem is not their reasoning, it is the inability of quantitative science.

Here is the crisis

At this point, people stop debating the crucial elements in our models and the ambition for model innovation goes out of the window. I would say, we end up in a modelling crisis. At some point, decisions have to be made in the real world, and they can either be inspired by good modelling, by bad modelling, or not by modelling at all.

The way out of the modelling crisis

How can such a modelling crisis be resolved? First, we need to accept that the model ≠ world, so we don’t necessarily need to predict. We also need to accept that modelling can certainly be useful, for example when it helps to find clear and explicit reasoning/underpinning of an argument.

  • We should focus more on the problem that we really want to address, and for that problem, argue how modelling can actually contribute to a solution for that problem. This should result in better modelling questions, because modelling is a means, not an end. We should stop trying to outsource the thinking to a model.
  • Following from this point, we should be very explicit about the modelling purpose: in what way does the modelling contribute to solving the problem identified earlier? We have to be aware that different kinds of purposes lead to different styles of reasoning and, consequently, to different strengths and weaknesses in the modelling that we do. Consider the differences between prediction, explanation, theoretical exposition, description and illustration as types of modelling purpose (see Edmonds 2017; more types are possible).
  • Following this point, we should accept the importance of creativity and of process in modelling. Science is about reasoned, reproducible work. But, paradoxically, good science does not come from a linear, step-by-step approach. Accepting this, modelling can help both in the creative process, by exploring possible ideas and explicating an intuition, and in the justification and underpinning of a very particular line of reasoning. It is important to avoid mixing these perspectives up. The modelling process is as relevant as the model outcome. In the end, the reasoning should stand alone and be strong (also without the model). But you may have needed the model to find it.
  • We should adhere to better modelling practices and develop the tooling to accommodate them. For ABM, many successful developments are ongoing: we should be explicit and transparent about the assumptions we are making (e.g. the ODD protocol, Polhill et al. 2008). We should develop requirements and procedures for modelling studies with respect to how the analysis is performed, even if clients don’t ask for it (validity, robustness of findings, sensitivity of outcomes, analysis of uncertainties). For some sectors, such requirements have been developed. The discussion around practices and validation is prominent in ABM, where some ‘issues’ may be considered obvious (see for instance Heath, Hill, and Ciarallo 2009, or the efforts through CoMSES), but such questions should be asked of any type of model. In fact, we should share, debate on, and work with all types of models that are already out there (again, such as through the great efforts of CoMSES), and consider forms of multi-modelling to save time and effort and to benefit from the strengths of different model formalisms.
  • We should start looking for good examples: get inspired and share them. Personally I like Basic Traffic from the NetLogo model library: it does not predict where traffic jams will be, but it clearly shows the worth of slowing down earlier (a minimal sketch of this kind of model follows this list). Another example is Limits to Growth, irrespective of its predictive power.
  • We should start doing it better ourselves, so that we show others that it can be done!
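The point about traffic can be made runnable in a few lines. Below is a minimal Nagel-Schreckenberg-style ring-road sketch in Python – an illustrative toy, not the NetLogo Basic Traffic model itself, and the parameter names (p_slow, density, vmax) are simply choices made for this sketch. It does not predict where a jam will appear, but comparing the mean speed with and without abrupt random slowdowns shows how smoother, more anticipatory driving improves flow, which is the kind of insight the list above has in mind.

```python
import random

def step(pos, vel, length, vmax=5, p_slow=0.3):
    """One update of a Nagel-Schreckenberg-style single-lane ring road.
    pos and vel are parallel lists, sorted by position."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % length  # empty cells to the car ahead
        v = min(vel[i] + 1, vmax)                       # accelerate if possible
        v = min(v, gap)                                 # brake to avoid collision
        if v > 0 and random.random() < p_slow:
            v -= 1                                      # abrupt random slowdown
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % length for i in range(n)]
    order = sorted(range(n), key=lambda i: new_pos[i])
    return [new_pos[i] for i in order], [new_vel[i] for i in order]

def mean_speed(density=0.2, length=200, steps=1000, p_slow=0.3):
    """Average speed per car, measured after a warm-up period."""
    n_cars = max(1, int(density * length))
    pos = sorted(random.sample(range(length), n_cars))
    vel = [0] * n_cars
    total, measured = 0.0, 0
    for t in range(steps):
        pos, vel = step(pos, vel, length, p_slow=p_slow)
        if t >= steps // 2:
            total += sum(vel) / n_cars
            measured += 1
    return total / measured

if __name__ == "__main__":
    for p in (0.0, 0.3):  # smooth driving vs. frequent abrupt slowdowns
        print(f"p_slow={p}: mean speed ~ {mean_speed(p_slow=p):.2f}")
```

Running it at the same density with p_slow = 0.0 and p_slow = 0.3 typically shows a clear drop in mean speed once abrupt slowdowns are present: jams emerge from individual braking behaviour, not from anything imposed at the system level.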

References

Heath, B., Hill, R. and Ciarallo, F. (2009). A Survey of Agent-Based Modeling Practices (January 1998 to July 2008). Journal of Artificial Societies and Social Simulation 12(4):9. http://jasss.soc.surrey.ac.uk/12/4/9.html

Polhill, J.G., Parker, D., Brown, D. and Grimm, V. (2008). Using the ODD Protocol for Describing Three Agent-Based Social Simulation Models of Land-Use Change. Journal of Artificial Societies and Social Simulation 11(2):3.

Edmonds, B. (2017) Five modelling purposes, Centre for Policy Modelling Discussion Paper CPM-17-238, http://cfpm.org/discussionpapers/192/


Chappin, E.J.L. (2018) Escaping the modelling crisis. Review of Artificial Societies and Social Simulation, 12th October 2018. https://rofasss.org/2018/10/12/ec/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Social Simulation as a meeting place: report of SSC 2018 Stockholm

By Gert Jan Hofstede

Last month, August 20-24, Stockholm provided the glorious scenery for the 14th annual conference of ESSA, the European Social Simulation Association. 14 years is an age at which we start to look in the mirror, and that was indeed the conference theme. What is Social Simulation, and where is it headed? I’ll briefly present my observations and reflections.

First, I was happy to see that there is life in this community. Some of the founders are no longer with us, or not in Stockholm, but their legacy is alive, and a multifarious, clever lot of researchers are ready to continue the journey. Actually, rather suddenly I find myself an older man in the community, and I consider that very good news.

Second, there is quality. During the whole week I did not have to endure a single poor, boring talk. There is real content and it is being delivered very well. The organizers also did their bit by providing lively formats.

Third, there is a sense of direction. Our community aims for relevance, and rigour only in so far as it serves relevance, not for its own sake. In the mirror, we see a responsible teenager discipline, ready to take on the world.

It would be infeasible for me to try and summarise all the sessions that I have attended. Instead, let me share the meta-model of our discipline that presented itself to me on Friday (see figure 1).

Figure 1: Social Simulation as a meeting place

The main message of this figure is that social simulation, as it appeared at SSC Stockholm, is a meeting place of three very different worlds:

  • The world of models. There are always two levels: the agents doing their things, and the resulting system-level patterns of behaviour. Agent-based models connect these two levels through their mechanisms.
  • The world of theories and data (e.g. from surveys and experiments). These could also be either about agents or about system-level patterns.
  • The world of real life. Once more, there are agents here, called in this case ‘stakeholders’, and there are those who might know about system behaviour, called ‘experts’.

Not all researchers, nor all disciplines, have equal experience with all three worlds. You, reader, are probably drawn more to some than to others. But it is my conviction that we need all three to keep our discipline fruitful, and that we social simulators have an essential contribution to make in creating our models. Notably, what we do is:

  • We define the focus and scope of models
  • We select or think up mechanisms for agents
  • We select from among the six possible actions in figure 1, to create a convincing message to our target community.

This last point deserves some thought, because we need an audience. Target communities tend to prefer one of the three worlds, so ‘one story does not fit all’. In their seminal 1995 book ‘Artificial Societies’ (which keeps being reprinted), Nigel Gilbert and Rosaria Conte deplore the demise of theory development in sociology, and state that ‘a wide gap continues to exist between empirical research and theorizing’ (p. 5). In my view these worlds are still wide apart. Perhaps the emergence of the Web has, since the appearance of that book, tilted the balance even further in favour of data. Yet without theory, data is meaningless. In 2012, Flaminio Squazzoni, in his book ‘Agent-Based Computational Sociology’, put this into words nicely. He concluded (p. 172): “Tighter links between observations and theory are productive, if and only if they are mediated by formalized models.” Quite so. It would appear that in this quote Flaminio assumes that these observations are taken from real life. After SSC 2018, however, I have become convinced that there is not necessarily unity between the world of stakeholders and experts on the one hand, and the world of surveys and experiments on the other. In fact, agent-based models can help to reconcile the two.

Perhaps I can attempt a brief discussion of each day’s keynote with the help of figure 1. I found the mix of keynotes inspiring.

  • On Tuesday, it was real-world time. Bruce Edmonds argued for putting as much real life as possible into agent-based models, calling it “context”. My hunch is that what Bruce calls the “social context” can be understood in generic terms and could bring this field a lot further. A big question is what I’d like to call the zooming factor: the more you zoom in, the more you need to know about context; the more you zoom out, the more you need theory to achieve generic validity.
  • On Wednesday, the model ruled. Milena Tsvetkova described what I call ‘Procrustes experiments’, where experiments follow the design of an ABM. She granted a place for real life: “the comparison between ABM and experiment is not perfect, because it depends on how you frame the situation”.
  • Thursday was theory day. Julie Zahle made us think and talk about our method and the worldview behind it.

To avoid boring you all stiff with more detail, let me just summarize the general atmosphere by giving you a brief list of tongue breakers from SSC 2018: gregariousness, heteroskedasticity, methodological, Valdemars Udde, Vapnik–Chervonenkis. See you in Mainz (23-27 September 2019)!


Hofstede, G.J. (2018) Social Simulation as a meeting place: report of SSC 2018 Stockholm. Review of Artificial Societies and Social Simulation, 18th September 2018. https://rofasss.org/2018/09/19/gh


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Agent-Based Modelling Pioneers: An Interview with Jim Doran

By David Hales and Jim Doran

Jim Doran is an ABM pioneer, specifically in applying ABM to social phenomena. He has been working on these ideas since the 1960s. His work made a major contribution to establishing the area as it exists today.

In fact Jim has made significant contributions in many areas related to computation such as Artificial Intelligence (AI), Distributed AI (DAI) and Multi-agent Systems (MAS).

I know Jim — he was my PhD supervisor (at the University of Essex) so I had regular meetings with him over a period of about four years. It is hard to capture both the depth and breadth of Jim’s approach. Basically he thinks big. I mean really big! — yet plausibly and precisely. This is a very difficult trick to pull off. Believe me I’ve tried.

He retired from Essex almost two decades ago but continues to work on a number of very innovative ABM related projects that are discussed in the interview.

The interview was conducted over e-mail in August. We did a couple of iterations and included references to the work mentioned.


According to your webpage at the University of Essex [1], your background was originally in mathematics and then Artificial Intelligence (working with Donald Michie at Edinburgh). In those days AI was a very new area. I wonder if you could say a little about how you came to work with Michie and what kind of things you worked on?

Whilst reading Mathematics at Oxford, I both joined the University Archaeological Society (inspired by the TV archaeologist of the day, Sir Mortimer Wheeler), becoming a (lowest grade) digger and encountering some real archaeologists like Dennis Britten, David Clarke and Roy Hodson, and also, at postgraduate level, was lucky enough to come under the influence of a forward-thinking and quite distinguished biometrist, Norman T. J. Bailey, who at that time was using a small computer (an Elliott 803, I think it was) to simulate epidemics — i.e. a variety of computer simulation of social phenomena (Bailey 1967). One day, Bailey told me of this crazy but energetic Reader at Edinburgh University, Donald Michie, who was trying to program computers to play games and to display AI, and who was recruiting assistants. In due course I got a job as a Research Assistant / Junior Research Fellow in Michie’s group (the EPU, for Experimental Programming Unit). During the war Michie had worked with and had been inspired by Alan Turing (see: Lee and Holtzman 1995) [2].

Given these were the very early days of AI, what was it like working at the EPU at that time? Did you meet any other early AI researchers there?

Well, I remember plenty of energy, plenty of parties and visitors from all over including both the USSR (not easy at that time!) and the USA. The people I was working alongside – notably, but not only, Rod Burstall [3], (the late) Robin Popplestone [4], Andrew Ortony [5] – have typically had very successful academic research careers.

I notice that you wrote a paper with Michie in 1966 “Experiments with the graph traverser program”. Am I right, that this is a very early implementation of a generalised search algorithm?

When I took up the research job in Edinburgh at the EPU, in 1964 I think, Donald Michie introduced me to the work by Arthur Samuel on a learning Checkers playing program (Samuel 1959) and proposed to me that I attempt to use Samuel’s rather successful ideas and heuristics to build a general problem solving program — as a rival to the existing if somewhat ineffective and pretentious Newell, Shaw and Simon GPS (Newell et al 1959). The Graph Traverser was the result – one of the first standardised heuristic search techniques and a significant contribution to the foundations of that branch of AI (Doran and Michie 1966) [6]. It’s relevant to ABM because cognition involves planning and AI planning systems often use heuristic search to create plans that when executed achieve desired goals.
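For readers who have not met heuristic search, the kernel of the idea can be sketched in a few lines of Python. This is a generic best-first search over an explicit state graph, not a reconstruction of the Graph Traverser itself: candidate states are expanded in the order suggested by a heuristic estimate of their distance from the goal, which is also how the planning systems mentioned above guide their search for plans. The function names and the toy grid problem are illustrative choices, not anything from the original program.

```python
import heapq

def best_first_search(start, goal, neighbours, heuristic):
    """Generic best-first search: expand states in order of a heuristic
    estimate of their distance to the goal.
    neighbours(state) -> iterable of successor states
    heuristic(state)  -> estimated distance from state to the goal
    Returns a list of states from start to goal, or None if none is found."""
    frontier = [(heuristic(start), start)]
    came_from = {start: None}          # also acts as the set of seen states
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            path = []
            while state is not None:   # walk back through predecessors
                path.append(state)
                state = came_from[state]
            return list(reversed(path))
        for nxt in neighbours(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None

if __name__ == "__main__":
    # Toy problem: move on a 4x4 grid from (0, 0) to (3, 3),
    # guided by the Manhattan distance to the goal.
    goal = (3, 3)
    def neighbours(p):
        x, y = p
        return [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx <= 3 and 0 <= y + dy <= 3]
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    print(best_first_search((0, 0), goal, neighbours, heuristic))
```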

Can you recall when you first became aware of and / or began to think about simulating social phenomena using computational agents?

I guess the answer to your question depends on the definition of “computational agent”. My definition of a “computational agent” (today!) is any locus of slightly human like decision-making or behaviour within a computational process. If there is more than one then we have a multi-agent system.

Given the broad context that brought me to the EPU it was inevitable that I would get to think about what is now called agent based modelling (ABM) of social systems – note that archaeology is all about social systems and their long term dynamics! Thus in my (rag bag!) postgraduate dissertation (1964), I briefly discussed how one might simulate on a computer the dynamics of the set of types of pottery (say) characteristic of a particular culture – thus an ABM of a particular type of social dynamics. By 1975 I was writing a critical review of past mathematical modelling and computer simulation in archaeology with prospects (chapter 11 of Doran and Hodson, 1975).

But I didn’t myself use the word “agent” in a publication until, I believe, 1985 in a chapter I contributed to the little book by Gilbert and Heath (1985). Earlier I tended to use the word “actor” with the same meaning. Of course, once Distributed AI emerged as a branch of AI, ABM too was bound to emerge.

Didn’t you write a paper once titled something like “experiments with a pleasure seeking ant in a grid world”? I ask this speculatively because I have some memory of it but can find no references to it on the web.

Yes. The title you are after is “Experiments with a pleasure seeking automaton” published in the volume Machine Intelligence 3 (edited by Michie from the EPU) in 1968. And there was a follow up paper in Machine Intelligence 4 in 1969 (Doran 1968; 1969). These early papers address the combination of heuristic search with planning, plan execution and action within a computational agent but, as you just remarked, they attracted very little attention.

You make an interesting point about how you, today, define a computational agent. Do you have any thoughts on how one would go about trying to identify “agents” in a computational, or other, process? It seems as humans we do this all the time, but could we formalise it in some way?

Yes. I have already had a go at this, in a very limited way. It really boils down to, given the specification of a complex system, searching thru it for subsystems that have particular properties e.g. that demonstrably have memory within their structure of what has happened to them. This is a matter of finding a consistent relationship between the content of the hypothetical agent’s hypothetical memory and the actual input-output history (within the containing complex system) of that hypothetical agent – but the searches get very large. See, for example, my 2002 paper “Agents and MAS in STaMs” (Doran 2002).
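Purely as an illustration of what a “consistent relationship” between hypothetical memory and input-output history could mean in code (my gloss on the answer above, not Doran’s actual procedure), the toy check below takes a logged trace of a candidate subsystem and tests whether its output, and the way its hypothetical memory changes, are both functions of the current memory content and input. The function and variable names are invented for this sketch.

```python
def is_memory_consistent(trace):
    """trace: a list of (input, memory_state, output) tuples logged over time
    for a candidate subsystem. Check two functional-consistency conditions:
      1. the output is a function of (memory_state, input)
      2. the next memory_state is a function of (memory_state, input)
    If both hold, the hypothetical memory is consistent with the subsystem's
    observed input-output history."""
    output_map, update_map = {}, {}
    for t, (inp, mem, out) in enumerate(trace):
        key = (mem, inp)
        if output_map.setdefault(key, out) != out:
            return False               # same memory and input, different output
        if t + 1 < len(trace):
            next_mem = trace[t + 1][1]
            if update_map.setdefault(key, next_mem) != next_mem:
                return False           # same memory and input, different update
    return True

if __name__ == "__main__":
    # A 'memory' tracking the parity of the 1-inputs seen so far is consistent...
    parity_trace = [(1, 0, 0), (1, 1, 1), (0, 0, 0), (1, 0, 0), (0, 1, 1)]
    print(is_memory_consistent(parity_trace))   # True
    # ...whereas one where the same memory and input yield different outputs is not.
    noise_trace = [(1, 0, 0), (1, 0, 1)]
    print(is_memory_consistent(noise_trace))    # False
```

In a real system the hypothetical memory states are not given, so the search is over candidate subsystems and candidate memory encodings, which is why, as noted above, the searches get very large.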

From your experience what would you say are the main benefits and limitations of working with agent-based models of social phenomena?

The great benefit is, I feel, precision – the same benefit that mathematical models bring to science generally – including the precise handling of cognitive factors. The computer supports the derivation of the precise consequences of precise assumptions way beyond the powers of the human brain. A downside is that precision often implies particularisation. One can state easily enough that “cooperation is usually beneficial in complex environments”, but demonstrating the truth or otherwise of this vague thesis in computational terms requires precise specification of “cooperation”, “complex” and “environment”, and one often ends up trying to prove many different results corresponding to the many different interpretations of the thesis.

You’ve produced a number of works that could be termed “computationally assisted thought experiments”, for example, your work on foreknowledge (Doran 1997) and collective misbelief (1998). What do you think makes for a “good” computational thought experiment?

If an experiment and its results cast light upon the properties of real social systems or of possible social systems (and what social systems are NOT possible?), then that has got to be good, if only by adding to our store of currently useless knowledge!

Perhaps I should clarify: I distinguish sharply between human societies (and other natural societies) and computational societies. The latter may be used as models of the former, but can be conceived, created and studied in their own right. If I build a couple of hundred or so learning and intercommunicating robots and let them play around in my back garden, perhaps they will evolve a type of society that has NEVER existed before… Or can it be proved that this is impossible?

The recently reissued classic book “Simulating Societies” (Gilbert and Doran 1994, 2018) contains contributions from several of the early researchers working in the area. Could you say a little about how this group came together?

Well – better to ask Nigel Gilbert this question – he organised the meeting that gave rise to the book, and although it’s quite likely I was involved in the choice of invitees, I have no memory. But note there were two main types of contributor – the mainstream social science oriented and the archaeologically oriented, corresponding to Nigel and myself respectively.

Looking back, what would you say have been the main successes in the area?

So many projects have been completed and are ongoing — I’m not going to try to pick out one or two as particularly successful. But getting the whole idea of social science ABM established and widely accepted as useful or potentially useful (along with AI, of course) is a massive achievement.

Looking forward, what do you think are the main challenges for the area?

There are many but I can give two broad challenges:

(i) Finding out how best to discover what levels of abstraction are both tractable and effective in particular modelling domains. At present I get the impression that the level of abstraction of a model is usually set by whatever seems natural or for which there is precedent – but that is too simple.

(ii) Stopping the use of AI and social ABM being dominated by military and business applications that benefit only particular interests. I am quite pessimistic about this. It seems all too clear that when the very survival of nations, or totalitarian regimes, or massive global corporations is at stake, ethical and humanitarian restrictions and prohibitions, even those internationally agreed and promulgated by the UN, will likely be ignored. Compare, for example, the recent talk by Cristiano Castelfranchi entitled “For a Science-oriented AI and not Servant of the Business” (Castelfranchi 2018).

What are you currently thinking about?

Three things. Firstly, my personal retirement project, MoHAT — how best to use AI and ABM to help discover effective methods of achieving much needed global cooperation.

The obvious approach is: collect LOTS of global data, build a theoretically supported and plausible model, try to validate it and then try out different ways of enhancing cooperation. MoHAT, by contrast, emphasises:

(i) Finding a high level of abstraction for modelling which is effective but tractable.

(ii) Finding particular long time span global models by reference to fundamental boundary conditions, not by way of observations at particular times and places. This involves a massive search through possible combinations of basic model elements but computers are good at that — hence AI Heuristic Search is key.

(iii) Trying to overcome the ubiquitous reluctance of global organisational structures, e.g. nation states, to cooperate fully – by exploring, for example, what actions leading to enhanced global cooperation, if any, are available to one particular state.

Of course, any form of globalism is currently politically unpopular — MoHAT is swimming against the tide!

Full details of MoHAT (including some simple computer code) are in the corresponding project entry in my Research Gate profile (Doran 2018a).

Secondly, Gillian’s Hoop and how one assesses its plausibility as a “modern” metaphysical theory. Gillian’s Hoop is a somewhat wild speculation that one of my daughters came up with a few years ago: we are all avatars in a virtual world created by game players in a higher world who in fact are themselves avatars in a virtual world created by players in a yet higher world … with the upward chain of virtual worlds ultimately linking back to form a hoop! Think about that!

More generally I conjecture that metaphysical systems (e.g. the Roman Catholicism that I grew up with, Gillian’s Hoop, Iamblichus’ system [7], Homer’s) all emerge from the properties of our thought processes. The individual comes up with generalised beliefs and possibilities (e.g. Homer’s flying chariot) and these are socially propagated, revised and pulled together into coherent belief systems. This is little to do with what is there, much more to do with the processes that modify beliefs. This is not a new idea, of course, but it would be good to ground it in some computational modelling.

Again, there is a project description on Research Gate (Doran 2018b).

Finally, I’m thinking about planning and imagination and their interactions and consequences. I’ve put together a computational version of our basic subjective stream of thoughts (incorporating both directed and associative thinking) that can be used to address imagination and its uses. This is not as difficult to come up with as might at first appear. And then comes a conjecture — given ANY set of beliefs, concepts, memories etc in a particular representation system (cf. AI Knowledge Representation studies) it will be possible to define a (or a few) modification processes that bring about generalisations and imaginations – all needed for planning — which is all about deploying imaginations usefully.

In fact I am tempted to follow my nose and assert that:

Imagination is required for planning (itself required for survival in complex environments) and necessarily leads to “metaphysical” belief systems

Might be a good place to stop – any further and I am really into fantasy land…

Notes

  1. Archived copy of Jim Doran’s University of Essex homepage: https://bit.ly/2Pdk4Nf
  2. Also see an online video of some of the interviews, including with Michie, used as a source for the Lee and Holtzman paper: https://youtu.be/6p3mhkNgRXs
  3. https://en.wikipedia.org/wiki/Rod_Burstall
  4. https://en.wikipedia.org/wiki/Robin_Popplestone
  5. https://www.researchgate.net/profile/Andrew_Orton
  6. See also discussion of the historical context of the Graph Traverser in Russell and Norvig (1995).
  7. https://en.wikipedia.org/wiki/Iamblichus

References

Bailey, Norman T. J. (1967) The simulation of stochastic epidemics in two dimensions. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 4: Biology and Problems of Health, 237–257, University of California Press, Berkeley, Calif. https://bit.ly/2or7sqp

Castelfranchi, C. (2018) For a Science-oriented AI and not Servant of the Business. Powerpoint file available from the author on request at Research Gate: https://www.researchgate.net/profile/Cristiano_Castelfranchi

Doran, J.E. and Michie, D. (1966) Experiments with the Graph Traverser Program. September 1966. Proceedings of The Royal Society A 294(1437):235-259.

Doran, J.E. (1968) Experiments with a pleasure seeking automaton. In Machine Intelligence 3 (ed. D. Michie) Edinburgh University Press, pp 195-216.

Doran, J.E. (1969) Planning and generalization in an automaton-environment system. In Machine Intelligence 4 (eds. B. Meltzer and D. Michie) Edinburgh University Press. pp 433-454.

Doran, J.E. and Hodson, F.R. (1975) Mathematics and Computers in Archaeology. Edinburgh University Press, 1975 [and Harvard University Press, 1976]

Doran, J.E. (1997) Foreknowledge in Artificial Societies. In: Conte R., Hegselmann R., Terna P. (eds) Simulating Social Phenomena. Lecture Notes in Economics and Mathematical Systems, vol 456. Springer, Berlin, Heidelberg. https://bit.ly/2Pf5Onv

Doran, J.E. (1998) Simulating Collective Misbelief. Journal of Artificial Societies and Social Simulation vol. 1, no. 1, http://jasss.soc.surrey.ac.uk/1/1/3.html

Doran, J.E. (2002) Agents and MAS in STaMs. In Foundations and Applications of Multi-Agent Systems: UKMAS Workshop 1996-2000, Selected Papers (eds. M d’Inverno, M Luck, M Fisher, C Preist), Springer Verlag, LNCS 2403, July 2002, pp. 131-151. https://bit.ly/2wsrHYG

Doran, J.E. (2018a) MoHAT — a new AI heuristic search based method of DISCOVERING and USING tractable and reliable agent-based computational models of human society. Research Gate Project: https://bit.ly/2lST35a

Doran, J.E. (2018b) An Investigation of Gillian’s HOOP: a speculation in computer games, virtual reality and METAPHYSICS. Research Gate Project: https://bit.ly/2C990zn

Gilbert, N. and Doran, J.E. eds. (2018) Simulating Societies: The Computer Simulation of Social Phenomena. Routledge Library Editions: Artificial Intelligence, Vol 6, Routledge: London and New York.

Gilbert, N. and Heath, C. (1985) Social Action and Artificial Intelligence. London: Gower.

Lee, J. and Holtzman, G. (1995) 50 Years after breaking the codes: interviews with two of the Bletchley Park scientists. IEEE Annals of the History of Computing, vol. 17, no. 1, pp. 32-43. https://ieeexplore.ieee.org/document/366512/

Newell, A.; Shaw, J.C.; Simon, H.A. (1959) Report on a general problem-solving program. Proceedings of the International Conference on Information Processing. pp. 256–264.

Russell, S. and Norvig, P. (1995) Artificial Intelligence: A Modern Approach. Prentice-Hall, First edition, pp. 86, 115-117.

Samuel, Arthur L. (1959) “Some Studies in Machine Learning Using the Game of Checkers”. IBM Journal of Research and Development. doi:10.1147/rd.441.0206.


Hales, D. and Doran, J. (2018) Agent-Based Modelling Pioneers: An Interview with Jim Doran. Review of Artificial Societies and Social Simulation, 4th September 2018. https://rofasss.org/2018/09/04/dh/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

A bad assumption: a simpler model is more general

By Bruce Edmonds

If one adds some extra detail to a general model it can become more specific — that is, it then only applies to those cases where that particular detail holds. However, the reverse is not true: simplifying a model will not make it more general – it is just that you can imagine it would be more general.

To see why this is, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it will now be true at only one point (and only approximately right in a small region around that point) – it is much less general than the original, because it is true for far fewer cases.
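As a concrete worked instance of that argument (the coefficients 2 and 3 are arbitrary, chosen only for illustration):

```latex
\[
\begin{aligned}
\text{accurate model:}\quad & y = 2x + 3,\\
\text{``simplified'' model:}\quad & y = 3,\\
\text{points of agreement:}\quad & 2x + 3 = 3 \;\Longleftrightarrow\; x = 0,\\
\text{error of the simpler model:}\quad & \lvert (2x + 3) - 3 \rvert = 2\lvert x \rvert .
\end{aligned}
\]
\]
```

The constant model is exactly right at one point, approximately right near it, and arbitrarily wrong far away from it – precisely the loss of generality described above.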

This is not very surprising – a claim that a model has general validity is a very strong claim – it is unlikely to be achieved by arm-chair reflection or by merely leaving out most of the observed processes.

Only under some special conditions does simplification result in greater generality:

  • When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when there is some averaging process over a lot of random deviations)
  • When what is simplified away happens to be constant for all the situations considered (e.g. gravity is always 9.8m/s^2 downwards)
  • When you hugely loosen your criteria for being approximately right as you simplify (e.g. move from a requirement that results match some concrete data to using the model as a vague analogy for what is happening)

In other cases, where you compare like with like (i.e. you don’t move the goalposts, as in the third point above), simplification only works if you happen to know what can be safely simplified away.

Why people think that simplification might lead to generality is somewhat of a mystery. Maybe they assume that the universe has to obey ultimately simple laws, so that simplification is the right direction (but of course, even if this were true, we would not know which way to safely simplify). Maybe they are really thinking about the other direction, slowly becoming more accurate by making the model mirror the target more. Maybe this is just a justification for laziness, an excuse for avoiding messy complicated models. Maybe they just associate simple models with physics. Maybe they just hope their simple model is more general.

References

Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822.

Edmonds, B. (2007) Simplicity is Not Truth-Indicative. In Gershenson, C.et al. (2007) Philosophy and Complexity. World Scientific, 65-80.

Edmonds, B. (2017) Different Modelling Purposes. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 39-58.

Edmonds, B. and Moss, S. (2005) From KISS to KIDS – an ‘anti-simplistic’ modelling approach. In P. Davidsson et al. (Eds.): Multi Agent Based Simulation 2004. Springer, Lecture Notes in Artificial Intelligence, 3415:130–144.


Edmonds, B. (2018) A bad assumption: a simpler model is more general. Review of Artificial Societies and Social Simulation, 28th August 2018. https://rofasss.org/2018/08/28/be-2/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Query: What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table?

By Edmund Chattoe-Brown

On one level this is a straightforward request. The earliest convincing example I have found is Hägerstrand (1965, p. 381), an article that seems to be undeservedly neglected because it is also the earliest example of a simulation I have been able to identify that demonstrates independent calibration and validation (Gilbert and Troitzsch 2005, p. 17) [1].

However, my attempts to find the earliest examples are motivated by two more substantive issues (which may help to focus the search for earlier candidates). Firstly, what is the value of a canon (and of giving due intellectual credit) for the success of ABM? The Schelling model is widely known and taught, but it is not calibrated and validated. If a calibrated and validated model already existed in 1965, should it not be more widely cited? If we mostly cite a non-empirical model, might we give the impression that this is all that ABM can do? Also, failing to cite an article means that it cannot form the basis for debate. Is the Hägerstrand model in some sense “better” or “more important” than the Schelling model? This is a discussion we cannot have without awareness of the Hägerstrand model in the first place.

The second (and related) point regards the progress made by ABM and how those outside the community might judge it. Looking at ABM research now, the great majority of models appear to be non-empirical (Angus and Hassani-Mahmooei 2015, Table 5 in section 4.5). Without citations of articles like Hägerstrand (and even Clarkson and Meltzer), the non-expert reader of ABM might be led to conclude that it is too early (or too difficult) to produce such calibrated and validated models. (But if this was done 50 years ago, and is not being much publicised, might we be using up our credibility as a “new” field still finding its feet?) (If there are reasons for not doing, or not wanting to do, what Hägerstrand managed, let us be obliged to be clear what they are and not simply hide behind widespread neglect of such examples [2].)

Notes

  1. I have excluded an even earlier example of considerable interest (Clarkson and Meltzer 1960, which also includes an attempt at calibration and validation but has never been cited in JASSS) for two reasons. Firstly, it deals with the modelling of a single agent and therefore involves no interaction. Secondly, it appears that the validation may effectively be using the “same” data as the calibration, in that protocols elicited from an investment officer regarding portfolio selection are then tested against choices made by that same investment officer.
  2. And, of course, this is a vicious circle because in our increasingly pressurised academic world, people only tend to read and cite what is already cited.

References

Angus, Simon D. and Hassani-Mahmooei, Behrooz (2015) ‘“Anarchy” Reigns: A Quantitative Analysis of Agent-Based Modelling Publication Practices in JASSS, 2001-2012’, Journal of Artificial Societies and Social Simulation, 18(4), October, article 16.

Clarkson, Geoffrey P. and Meltzer, Allan H. (1960) ‘Portfolio Selection: A Heuristic Approach’, The Journal of Finance, 15(4), December, pp. 465-480.

Gilbert, Nigel and Troitzsch, Klaus G. (2005) Simulation for the Social Scientist, 2nd edition (Buckingham: Open University Press).

Hägerstrand, Torsten (1965) ‘A Monte Carlo Approach to Diffusion’, Archives Européennes de Sociologie, 6(1), May, Special Issue on Simulation in Sociology, pp. 43-67.


Chattoe-Brown, E. (2018) What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table? Review of Artificial Societies and Social Simulation, 11th June 2018. https://rofasss.org/2018/06/11/ecb/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)