Some Philosophical Viewpoints on Social Simulation

By Bruce Edmonds

How one thinks about knowledge can have a significant impact on how one develops models as well as how one might judge a good model.

  • Pragmatism. Under this view a simulation is a tool for a particular purpose. Different purposes will imply different tests for a good model. What is useful for one purpose might well not be good for another – different kinds of models and modelling processes might be good for each purpose. A simulation whose purpose is to explore the theoretical implications of some assumptions might well be very different from one aiming to explain some observed data. An example of this approach is Edmonds et al. (2019).
  • Social Constructivism. Here knowledge about social phenomena (including simulation models) is collectively constructed. There is no other kind of knowledge than this. Each simulation is a way of thinking about social reality and plays a part in constructing it. What counts as a suitable construction may vary over time, between cultures, etc. What a group of people construct is not necessarily limited to simulations that are related to empirical data. Ahrweiler and Gilbert (2005) seem to take this view, but it is more explicit in some of the participatory modelling work, where the aim is to construct a simulation that is acceptable to a group of people, e.g. Etienne (2014).
  • Relativism. There are no bad models, only different ways of mediating between your thought and reality (Morgan & Morrison 1999). If you work hard on developing your model, you do not get a better model, only a different one. This might be a consequence of holding to an Epistemological Constructivist position.
  • Descriptive Realism. A simulation is a picture of some aspect of reality (albeit at a much lower ‘resolution’ and imperfectly). If one obtains a faithful representation of some aspect of reality as a model, one can use it for many different purposes. This could imply very complicated models (depending on what one observes and decides is relevant), which might themselves be difficult to understand. I suspect that many people have this in mind as they develop models, but few explicitly take this approach. Fieldhouse et al. (2016) is perhaps an example.
  • Classic Positivism. Here, the empirical fit and the analytic understanding of the simulation are all that matter, nothing else. Models should be tested against data and discarded if inadequate (or they compete and one is currently ahead empirically). They should also be simple enough that they can be thoroughly understood. There is no obligation to be descriptively realistic. Many physics approaches to social phenomena follow this path (e.g. Helbing 2010; Galam 2012).

Of course, few authors make their philosophical position explicit – usually one has to infer it from their text and modelling style.

References

Ahrweiler, P. and Gilbert, N. (2005). Caffè Nero: the Evaluation of Social Simulation. Journal of Artificial Societies and Social Simulation 8(4):14. http://jasss.soc.surrey.ac.uk/8/4/14.html

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H. and Squazzoni, F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html

Etienne, M. (ed.) (2014) Companion Modelling: A Participatory Approach to Support Sustainable Development. Springer.

Fieldhouse, E., Lessard-Phillips, L. and Edmonds, B. (2016) Cascade or echo chamber? A complex agent-based simulation of voter turnout. Party Politics. 22(2):241-256. DOI:10.1177/1354068815605671

Galam, S. (2012) Sociophysics: A Physicist’s modeling of psycho-political phenomena. Springer.

Helbing, D. (2010). Quantitative sociodynamics: stochastic methods and models of social interaction processes. Springer.

Morgan, M. S. and Morrison, M. (eds.) (1999). Models as Mediators: Perspectives on Natural and Social Science. Cambridge University Press.


Edmonds, B. (2019) Some Philosophical Viewpoints on Social Simulation. Review of Artificial Societies and Social Simulation, 2nd July 2019. https://rofasss.org/2019/07/02/phil-view/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM

By Sebastian Achter, Melania Borit, Edmund Chattoe-Brown, Christiane Palaretti and Peer-Olaf Siebers

The initiative presented below arose from a Lorentz Center workshop on Integrating Qualitative and Quantitative Evidence using Social Simulation (8-12 April 2019, Leiden, the Netherlands). At the beginning of this workshop, the attendees divided themselves into teams aiming to work on specific challenges within the broad domain of the workshop topic. Our team took up the challenge of looking at “Rigour, Transparency, and Reuse”. The aim that emerged from our initial discussions was to create a framework for augmenting rigour and transparency (RAT) of data use in ABM when designing, analysing and publishing such models.

One element of the framework that the group worked on was a roadmap of the modelling process in ABM, with particular reference to the use of different kinds of data. This roadmap was used to generate the second element of the framework: a protocol consisting of a set of questions which, if answered by the modeller, would ensure that the published model was as rigorous and transparent, in terms of data use, as it needs to be for the reader to understand and reproduce it.

The group (whose members used diverse modelling approaches and spanned a number of disciplines) recognised the challenges of this approach, and much of the week was spent examining cases and defining terms so that the approach did not assume one particular kind of theory, one particular aim of modelling, and so on. To this end, we intend that the framework should be thoroughly tested against real research to ensure its general applicability and ease of use.

The team was also very keen not to “reinvent the wheel”, but to try to develop the RAT approach (in connection with data use) to augment and “join up” existing protocols or documentation standards for specific parts of the modelling process. For example, the ODD protocol (Grimm et al. 2010) and its variants are generally accepted as the established way of documenting ABM, but they do not request rigorous documentation or justification of the data used in the modelling process.

The plan to move forward with the development of the framework is organised around three journal articles and associated dissemination activities:

  • A literature review of best (data use) documentation and practice in other disciplines and research methods (e.g. PRISMA – Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
  • A literature review of available documentation tools in ABM (e.g. ODD and its variants, DOE, the “Info” pane of NetLogo, EABSS)
  • An initial statement of the goals of RAT, the roadmap, the protocol and the process of testing these resources for usability and effectiveness
  • A presentation, poster, and round table at SSC 2019 (Mainz)

We would appreciate suggestions for items that should be included in the literature reviews, “beta testers” and critical readers for the roadmap and protocol (from as many disciplines and modelling approaches as possible), reactions (whether positive or negative) to the initiative itself (including joining it!) and participation in the various activities we plan at Mainz. If you are interested in any of these roles, please email Melania Borit (melania.borit@uit.no).

Acknowledgements

Chattoe-Brown’s contribution to this research is funded by the project “Towards Realistic Computational Models Of Social Influence Dynamics” (ES/S015159/1) funded by ESRC via ORA Round 5 (PI: Professor Bruce Edmonds, Centre for Policy Modelling, Manchester Metropolitan University: https://gtr.ukri.org/projects?ref=ES%2FS015159%2F1).

References

Grimm, V., Berger, U., DeAngelis, D. L., Polhill, J. G., Giske, J. and Railsback, S. F. (2010) ‘The ODD Protocol: A Review and First Update’, Ecological Modelling, 221(23):2760–2768. doi:10.1016/j.ecolmodel.2010.08.019


Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. and Siebers, P.-O. (2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Vision for a more rigorous “replication first” modelling journal

By David Hales

A proposal for yet another journal? My first reaction to any such suggestion is to argue that we already have far too many journals. However, hear me out.

My vision is for a modelling journal that is far more rigorous than what we currently have. It would be aimed at work in which a significant aspect of the result is derived from the output of a complex system type computer model in an empirical way.

I propose that the journal would incorporate, as part of the reviewing process, at least one replication of the model by an independent reviewer. Hence models would be verified as independently replicated before being published.

In addition, the published article would include an appendix detailing the issues raised during the replication process.

Carrying out such an exercise would almost certainly lead to clarifications of the original article, such that it would be easier for others to replicate, and it would give more confidence in the results. Both readers and authors would gain significantly from this.

I would be much more willing to take modelling articles seriously if I knew they had already been independently replicated.

Here is a question that immediately springs to mind: replicating a model is a time-consuming and costly business requiring significant expertise. Why would a reviewer do this?

One possible solution would be to provide an incentive in the following form. Final articles published in the journal would include the replicators as co-authors of the paper – specifically credited with the independent replication work that they write up in the appendix.

This would mean that good, clear and interesting initial articles would be desirable to replicate since the reviewer / replicator would obtain citations.

This could be a good task for an able graduate student, allowing them to gain experience, contacts and citations.

Why would people submit good work to such a journal? This is not as easy to answer. It would almost certainly mean more work from their perspective, and a time delay (since replication would almost certainly take more time than traditional review). However, there is the benefit of actually getting a replication of their model, and of producing a final article that others would be able to engage with more easily.

Also, I think it would be necessary, given the above aspects, to set quite a high bar on what is accepted for review / replication in the first place. Articles reviewed would have to present significant and new results in areas of fairly wide interest. Hence incremental or highly specific models would be ruled out. Also, articles that did not contain enough detail to even attempt a replication would be rejected on that basis. Hence one can envisage a two-stage review process in which the editors decide whether the submitted paper is “right” for a full replication review before soliciting replications.

My vision is of a low output, high quality, high initial rejection journal. Perhaps publishing 3 articles every 6 months. Ideally this would support a reputation for high quality over time.


Hales, D. (2018) Vision for a more rigorous “replication first” modelling journal. Review of Artificial Societies and Social Simulation, 5th November 2018. https://rofasss.org/2018/11/05/dh/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Escaping the modelling crisis

By Emile Chappin

Let me explain something I call the ‘modelling crisis’. It is something that many modellers encounter in one way or another. By being aware of it we may resolve such a crisis, avoid frustration and, hopefully, save the world from some bad modelling.

Views on modelling

I first present two views on modelling. Bear with me!

[View 1: Model = world] The first view is that models capture things in the real world pretty well and that some models are pretty much representative. And of course this is true. You can add many things to the model, and you may well have done. But if you think along this line, you start seeing the model as if it were the world. At some point you may become rather optimistic about modelling. Well, I really mean to say, you become naive: the model is fabulous. The model can help anyone with any problem even only somewhat related to the original idea behind the model. You don’t waste time worrying about the details, you sell the model to everyone who will listen, and you’re quite convinced in the way you do this. You may come to believe that the model is the truth.

[View 2: Model ≠ world] The second view is that the model can never represent the world adequately enough to really predict what is going on. And of course this is true. But if you think along this line, you can get pretty frustrated: the model is never good enough, because factor A is not in there, mechanism B is biased, etc. At some point you may become quite pessimistic about ‘the model’: will it help anyone anytime soon? You may come to the belief that the model is nonsense (and that modelling itself is nonsense).

As a modeller, you may encounter these views in your modelling journey: in how your model is perceived, in how your model is compared to other models, and in the questions you’re asked about your model. It may also be the case that you get stuck in one of the views yourself. You may not be aware of it, but you might still behave accordingly.

Possible consequences

Let’s entertain the idea of running a modelling business: we are ambitious and successful! What might happen over time with our business and with our clients?

  • Your clients love your business – Clients can ask us any question and they will get a very precise answer back! Anytime we give a good result, a result that comes true in some sense, we are praised, and our reputation grows. Anytime we give a bad result, something that turns out quite different from what we’d expected, we can blame the particular circumstances which could not have been foreseen or argue that this result is basically out of the original scope. Our modesty makes our reputation grow! And it makes us proud!
  • Assets need protection – Over time, our model/business reputation becomes more and more important. You should ask us for any modelling job because we’ve modelled (this) for decades. Any question goes into our fabulous model that can Answer Any Question In A Minute (AAQIAM). Our models become patchworks because of questions that did not fit in so easily. But obviously, as a whole, the model is great. More than great: it is the best! The models are our key assets: they need to be protected. In a board meeting we decide that we should not show the insides of our models anymore. We should keep them secret.
  • Modelling schools – Habits emerge of how our models are used, what kind of analysis we do, and which we don’t. Core assumptions that we always make with our model are accepted and forgotten. We get used to those assumptions, we won’t change them anyway and probably we can’t. It is not really needed to think about the consequences of those assumptions anyway. We stick to the basics, represent the results in the way that the client can use it, and mention in footnotes how much detail is underneath, and that some caution is warranted in interpretation of the results. Other modelling schools may also emerge, but they really can’t deliver the precision/breadth of what we have been doing for decades, so they are not relevant, not really, anyway.
  • Distrusting all models – Another kind of people, typically not your clients, start distrusting the modelling business completely. They get upset in discussions: why worry about discussing the model details when there is always something missing anyway? And it is impossible to quantify anything, really. They decide that it is better to ignore model geeks completely and just follow their own reasoning. It doesn’t matter that this reasoning can’t be backed up with facts (such as a modelled reality). They don’t believe it could be done anyway. So the problem is not their reasoning, it is the inability of quantitative science.

Here is the crisis

At this point, people stop debating the crucial elements in our models and the ambition for model innovation goes out of the window. I would say, we end up in a modelling crisis. At some point, decisions have to be made in the real world, and they can either be inspired by good modelling, by bad modelling, or not by modelling at all.

The way out of the modelling crisis

How can such a modelling crisis be resolved? First, we need to accept that the model ≠ world, so we don’t necessarily need to predict. We also need to accept that modelling can certainly be useful, for example when it helps to find clear and explicit reasoning/underpinning of an argument.

  • We should focus more on the problem that we really want to address, and for that problem, argue how modelling can actually contribute to a solution for that problem. This should result in better modelling questions, because modelling is a means, not an end. We should stop trying to outsource the thinking to a model.
  • Following from this point, we should be very explicit about the modelling purpose: in what way does the modelling contribute to solving the problem identified earlier? We have to be aware that different kinds of purposes lead to different styles of reasoning and, consequently, to different strengths and weaknesses in the modelling that we do. Consider the differences between prediction, explanation, theoretical exposition, description and illustration as types of modelling purpose (see Edmonds 2017; more types are possible).
  • Following this point, we should accept the importance of creativity and of the process in modelling. Science is about reasoned, reproducible work. But, paradoxically, good science does not come from a linear, step-by-step approach. Accepting this, modelling can help both in the creative process (exploring possible ideas, explicating an intuition) and in justifying and underpinning a very particular line of reasoning. It is important to avoid mixing these perspectives up. The modelling process is as relevant as the model outcome. In the end, the reasoning should stand alone and be strong (also without the model). But you may have needed the model to find it.
  • We should adhere to better modelling practices and develop the tooling to accommodate them. For ABM, many successful developments are ongoing: we should be explicit and transparent about the assumptions we are making (e.g. the ODD protocol, Polhill et al. 2008). We should develop requirements and procedures for modelling studies, with respect to how the analysis is performed, even if clients don’t ask for it (validity, robustness of findings, sensitivity of outcomes, analysis of uncertainties). For some sectors, such requirements have been developed. The discussion around practices and validation is prominent in ABM, where some ‘issues’ may be considered obvious (see for instance Heath, Hill, and Ciarallo 2009, and the effort through CoMSES), but these questions should be asked of any type of model. In fact, we should share, debate on, and work with all types of models that are already out there (again, such as the great efforts through CoMSES), and consider forms of multi-modelling to save time and effort and to benefit from the strengths of different model formalisms.
  • We should start looking for good examples: get inspired and share them. Personally, I like the Basic Traffic model from the NetLogo library: it does not predict where traffic jams will occur, but it clearly shows the worth of slowing down earlier. Another may be the Limits to Growth model, irrespective of its predictive power.
  • We should start doing it better ourselves, so that we show others that it can be done!

References

Heath, B., Hill, R. and Ciarallo, F. (2009). A Survey of Agent-Based Modeling Practices (January 1998 to July 2008). Journal of Artificial Societies and Social Simulation 12(4):9. http://jasss.soc.surrey.ac.uk/12/4/9.html

Polhill, J. G., Parker, D., Brown, D. and Grimm, V. (2008). Using the ODD Protocol for Describing Three Agent-Based Social Simulation Models of Land-Use Change. Journal of Artificial Societies and Social Simulation 11(2):3.

Edmonds, B. (2017) Five modelling purposes, Centre for Policy Modelling Discussion Paper CPM-17-238, http://cfpm.org/discussionpapers/192/


Chappin, E.J.L. (2018) Escaping the modelling crisis. Review of Artificial Societies and Social Simulation, 12th October 2018. https://rofasss.org/2018/10/12/ec/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Social Simulation as a meeting place: report of SSC 2018 Stockholm

By Gert Jan Hofstede

Last month, August 20-24, Stockholm provided the glorious scenery for the 14th annual conference of ESSA, the European Social Simulation Association. 14 years is an age at which we start to look in the mirror, and that was indeed the conference theme. What is Social Simulation, and where is it headed? I’ll briefly present my observations and reflections.

First, I was happy to see that there is life in this community. Some of the founders are no longer with us, or not in Stockholm, but their legacy is alive, and a multifarious, clever lot of researchers are ready to continue the journey. Actually, rather suddenly I find myself an older man in the community, and I consider that very good news.

Second, there is quality. Throughout the week, I did not have to endure a single poor, boring talk. There is real content and it is being delivered very well. The organizers also did their bit by providing lively formats.

Third, there is a sense of direction. Our community aims for relevance, and rigour only in so far as it serves relevance, not for its own sake. In the mirror, we see a responsible teenager discipline, ready to take on the world.

It would be infeasible for me to try and summarise all the sessions that I have attended. Instead, let me share the meta-model of our discipline that presented itself to me on Friday (see figure 1).

Figure 1: Social Simulation as a meeting place

The main message of this figure is that social simulation, as it appeared at SSC Stockholm, is a meeting place of three very different worlds:

  • Models. There are always two levels: the agents doing their things, and the resulting system-level patterns of behaviour. Agent-based models connect these two levels through their mechanisms.
  • Data. These could likewise be either about agents, or about system-level patterns.
  • Real life. Once more, there are agents here, called in this case ‘stakeholders’, and there are those who might know about system behaviour, called ‘experts’.

Not all researchers, nor all disciplines, have equal experience with all three worlds. You, reader, are probably drawn more to some than to others. But it is my conviction that we need all three to keep our discipline fruitful, and that we social simulators have an essential contribution to make in creating our models. Notably, what we do is:

  • We define the focus and scope of models
  • We select or think up mechanisms for agents
  • We select from among the six possible actions in figure 1, to create a convincing message to our target community.

This last point deserves some thought, because we need an audience. Target communities tend to prefer one of the three worlds, so ‘one story does not fit all’. In their 1995 seminal book ‘Artificial Societies’ (which keeps being reprinted), Nigel Gilbert and Rosaria Conte deplore the demise of theory development in sociology, and state that ‘a wide gap continues to exist between empirical research and theorizing’ (p. 5). In my view these worlds are still wide apart. Perhaps the emergence of the Web has, since the appearance of that book, tilted the balance even further in favour of data. Yet without theory, data is meaningless. In 2012, Flaminio Squazzoni, in his book ‘Agent-Based Computational Sociology’, put this into words nicely. He concluded (p. 172): “Tighter links between observations and theory are productive, if and only if they are mediated by formalized models.” Quite so. It would appear that in this quote, Flaminio assumes that these observations are taken from real life. After SSC 2018, however, I have become convinced that there is not necessarily unity between the world of stakeholders and experts on the one hand, and the world of surveys and experiments on the other. In fact, agent-based models can help to reconcile the two.

Perhaps I can attempt a brief discussion of each day’s keynote, with the help of figure 1. I found the mix of keynotes inspiring.

  • On Tuesday, it was real-world time. Bruce Edmonds argued for putting as much real life as possible into agent-based models, calling it “context”. My hunch is that what Bruce calls the “social context” can be understood in generic terms and could bring this field a lot further. A big question is what I’d like to call the zooming factor: the more you zoom in, the more you need to know about context; the more you zoom out, the more you need theory for achieving generic validity.
  • On Wednesday, the model ruled. Milena Tsvetkova described what I call ‘Procrustes experiments’, where experiments follow the design of an ABM. She granted a place for real life: “the comparison between ABM and experiment is not perfect, because it depends on how you frame the situation”.
  • Thursday was theory day. Julie Zahle made us think and talk about our method and the worldview behind it.

To avoid boring you all stiff with more detail, let me just summarize the general atmosphere by giving you a brief list of tongue-breakers from SSC 2018: gregariousness, heteroskedasticity, methodological, Valdemars Udde, Vapnik Chervonenkis. See you in Mainz (23-27 September 2019)!


Hofstede, G.J. (2018) Social Simulation as a meeting place: report of SSC 2018 Stockholm. Review of Artificial Societies and Social Simulation, 18th September 2018. https://rofasss.org/2018/09/19/gh


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Agent-Based Modelling Pioneers: An Interview with Jim Doran

By David Hales and Jim Doran

Jim Doran is an ABM pioneer, specifically in applying ABM to social phenomena. He has been working on these ideas since the 1960s. His work made a major contribution to establishing the area as it exists today.

In fact Jim has made significant contributions in many areas related to computation such as Artificial Intelligence (AI), Distributed AI (DAI) and Multi-agent Systems (MAS).

I know Jim — he was my PhD supervisor (at the University of Essex) so I had regular meetings with him over a period of about four years. It is hard to capture both the depth and breadth of Jim’s approach. Basically he thinks big. I mean really big! — yet plausibly and precisely. This is a very difficult trick to pull off. Believe me I’ve tried.

He retired from Essex almost two decades ago but continues to work on a number of very innovative ABM related projects that are discussed in the interview.

The interview was conducted over e-mail in August. We did a couple of iterations and included references to the work mentioned.


According to your webpage at the University of Essex [1], your background was originally mathematics and then Artificial Intelligence (working with Donald Michie at Edinburgh). In those days AI was a very new area. I wonder if you could say a little about how you came to work with Michie and what kind of things you worked on?

Whilst reading Mathematics at Oxford, I both joined the University Archaeological Society (inspired by the TV archaeologist of the day, Sir Mortimer Wheeler) becoming a (lowest grade) digger and encountering some real archaeologists like Dennis Britten, David Clarke and Roy Hodson, and also, at postgraduate level, was lucky enough to come under the influence of a forward-thinking and quite distinguished biometrist, Norman T. J. Bailey, who at that time was using a small computer (an Elliot 803, I think it was) to simulate epidemics — i.e. a variety of computer simulation of social phenomena (Bailey 1967). One day, Bailey told me of this crazy but energetic Reader at Edinburgh University, Donald Michie, who was trying to program computers to play games and to display AI, and who was recruiting assistants. In due course I got a job as a Research Assistant / Junior Research Fellow in Michie’s group (the EPU, for Experimental Programming Unit). During the war Michie had worked with and had been inspired by Alan Turing (see Lee and Holtzman 1995) [2].

Given this was the very early days of AI, What was it like working at the EPU at that time? Did you meet any other early AI researchers there?

Well, I remember plenty of energy, plenty of parties and visitors from all over including both the USSR (not easy at that time!) and the USA. The people I was working alongside – notably, but not only, Rod Burstall [3], (the late) Robin Popplestone [4], Andrew Ortony [5] – have typically had very successful academic research careers.

I notice that you wrote a paper with Michie in 1966, “Experiments with the graph traverser program”. Am I right that this is a very early implementation of a generalised search algorithm?

When I took up the research job in Edinburgh at the EPU, in 1964 I think, Donald Michie introduced me to the work by Arthur Samuel on a learning Checkers playing program (Samuel 1959) and proposed to me that I attempt to use Samuel’s rather successful ideas and heuristics to build a general problem solving program — as a rival to the existing if somewhat ineffective and pretentious Newell, Shaw and Simon GPS (Newell et al 1959). The Graph Traverser was the result – one of the first standardised heuristic search techniques and a significant contribution to the foundations of that branch of AI (Doran and Michie 1966) [6]. It’s relevant to ABM because cognition involves planning and AI planning systems often use heuristic search to create plans that when executed achieve desired goals.

Can you recall when you first became aware of and / or began to think about simulating social phenomena using computational agents?

I guess the answer to your question depends on the definition of “computational agent”. My definition of a “computational agent” (today!) is any locus of slightly human-like decision-making or behaviour within a computational process. If there is more than one, then we have a multi-agent system.

Given the broad context that brought me to the EPU it was inevitable that I would get to think about what is now called agent based modelling (ABM) of social systems – note that archaeology is all about social systems and their long term dynamics! Thus in my (rag bag!) postgraduate dissertation (1964), I briefly discussed how one might simulate on a computer the dynamics of the set of types of pottery (say) characteristic of a particular culture – thus an ABM of a particular type of social dynamics. By 1975 I was writing a critical review of past mathematical modelling and computer simulation in archaeology with prospects (chapter 11 of Doran and Hodson, 1975).

But I didn’t myself use the word “agent” in a publication until, I believe, 1985 in a chapter I contributed to the little book by Gilbert and Heath (1985). Earlier I tended to use the word “actor” with the same meaning. Of course, once Distributed AI emerged as a branch of AI, ABM too was bound to emerge.

Didn’t you write a paper once titled something like “experiments with a pleasure seeking ant in a grid world”? I ask this speculatively because I have some memory of it but can find no references to it on the web.

Yes. The title you are after is “Experiments with a pleasure seeking automaton” published in the volume Machine Intelligence 3 (edited by Michie from the EPU) in 1968. And there was a follow up paper in Machine Intelligence 4 in 1969 (Doran 1968; 1969). These early papers address the combination of heuristic search with planning, plan execution and action within a computational agent but, as you just remarked, they attracted very little attention.

You make an interesting point about how you, today, define a computational agent. Do you have any thoughts on how one would go about trying to identify “agents” in a computational, or other, process? It seems as humans we do this all the time, but could we formalise it in some way?

Yes. I have already had a go at this, in a very limited way. It really boils down to, given the specification of a complex system, searching through it for subsystems that have particular properties, e.g. that demonstrably have memory within their structure of what has happened to them. This is a matter of finding a consistent relationship between the content of the hypothetical agent’s hypothetical memory and the actual input-output history (within the containing complex system) of that hypothetical agent – but the searches get very large. See, for example, my 2002 paper “Agents and MAS in STaMs” (Doran 2002).
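As a minimal illustration of the kind of search described here (a sketch of the general idea only; the code and its toy trace are not Doran’s – see Doran 2002 for the real treatment): treat any proper subset of a system’s variables as a hypothetical agent, and check whether that subsystem’s next state is always the same function of its current state and its current input from the rest of the system – a crude operational test for “having memory”.

    from itertools import combinations

    # Toy trace of a complex system: the value of every variable at each step.
    # (Invented data for illustration; a real trace would be far longer.)
    trace = [
        {"a": 0, "b": 0, "c": 1},
        {"a": 1, "b": 0, "c": 0},
        {"a": 1, "b": 1, "c": 1},
        {"a": 0, "b": 1, "c": 1},
    ]
    variables = list(trace[0])

    def is_consistent(subsystem):
        """Is the subsystem's next state always the same function of
        (its current state, its current input from the rest of the
        system)? If so, its state can serve as a memory of its history."""
        seen = {}
        for before, after in zip(trace, trace[1:]):
            state = tuple(before[v] for v in subsystem)
            inputs = tuple(before[v] for v in variables if v not in subsystem)
            nxt = tuple(after[v] for v in subsystem)
            if seen.setdefault((state, inputs), nxt) != nxt:
                return False  # same state and input, different next state
        return True

    # Every proper subset of variables is a candidate subsystem, so the
    # search grows combinatorially with system size -- hence "the searches
    # get very large". A long trace makes the test correspondingly stringent.
    for size in range(1, len(variables)):
        for subsystem in combinations(variables, size):
            if is_consistent(subsystem):
                print("candidate agent-like subsystem:", subsystem)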

From your experience what would you say are the main benefits and limitations of working with agent-based models of social phenomena?

The great benefit is, I feel, precision – the same benefit that mathematical models bring to science generally – including the precise handling of cognitive factors. The computer supports the derivation of the precise consequences of precise assumptions way beyond the powers of the human brain. A downside is that precision often implies particularisation. One can state easily enough that “cooperation is usually beneficial in complex environments”, but demonstrating the truth or otherwise of this vague thesis in computational terms requires precise specification of “cooperation, “complex” and “environment” and one often ends up trying to prove many different results corresponding to the many different interpretations of the thesis.

You’ve produced a number of works that could be termed “computationally assisted thought experiments”, for example, your work on foreknowledge (Doran 1997) and collective misbelief (Doran 1998). What do you think makes for a “good” computational thought experiment?

If an experiment and its results cast light upon the properties of real social systems or of possible social systems (and what social systems are NOT possible?), then that has got to be good, if only by adding to our store of currently useless knowledge!

Perhaps I should clarify: I distinguish sharply between human societies (and other natural societies) and computational societies. The latter may be used as models of the former, but can be conceived, created and studied in their own right. If I build a couple of hundred or so learning and intercommunicating robots and let them play around in my back garden, perhaps they will evolve a type of society that has NEVER existed before… Or can it be proved that this is impossible?

The recently reissued classic book “Simulating Societies” (Gilbert and Doran 1994, 2018) contains contributions from several of the early researchers working in the area. Could you say a little about how this group came together?

Well – better to ask Nigel Gilbert this question – he organised the meeting that gave rise to the book, and although it’s quite likely I was involved in the choice of invitees, I have no memory. But note there were two main types of contributor – the mainstream social science oriented and the archaeologically oriented, corresponding to Nigel and myself respectively.

Looking back, what would you say have been the main successes in the area?

So many projects have been completed and are ongoing — I’m not going to try to pick out one or two as particularly successful. But getting the whole idea of social science ABM established and widely accepted as useful or potentially useful (along with AI, of course) is a massive achievement.

Looking forward, what do you think are the main challenges for the area?

There are many but I can give two broad challenges:

(i) Finding out how best to discover what levels of abstraction are both tractable and effective in particular modelling domains. At present I get the impression that the level of abstraction of a model is usually set by whatever seems natural or for which there is precedent – but that is too simple.

(ii) Stopping the use of AI and social ABM being dominated by military and business applications that benefit only particular interests. I am quite pessimistic about this. It seems all too clear that when the very survival of nations, or totalitarian regimes, or massive global corporations is at stake, ethical and humanitarian restrictions and prohibitions, even those internationally agreed and promulgated by the UN, will likely be ignored. Compare, for example, the recent talk by Cristiano Castelfranchi entitled “For a Science-oriented AI and not Servant of the Business” (Castelfranchi 2018).

What are you currently thinking about?

Three things. Firstly, my personal retirement project, MoHAT — how best to use AI and ABM to help discover effective methods of achieving much needed global cooperation.

The obvious approach is: collect LOTS of global data, build a theoretically supported and plausible model, try to validate it and then try out different ways of enhancing cooperation. MoHAT, by contrast, emphasises:

(i) Finding a high level of abstraction for modelling which is effective but tractable.

(ii) Finding particular long time span global models by reference to fundamental boundary conditions, not by way of observations at particular times and places. This involves a massive search through possible combinations of basic model elements but computers are good at that — hence AI Heuristic Search is key.

(iii) Trying to overcome the ubiquitous reluctance of global organisational structures, e.g. nation states, fully to cooperate – by exploring, for example, what actions leading to enhanced global cooperation, if any, are available to one particular state.

Of course, any form of globalism is currently politically unpopular — MoHAT is swimming against the tide!

Full details of MoHAT (including some simple computer code) are in the corresponding project entry in my Research Gate profile (Doran 2018a).

Secondly, Gillian’s Hoop and how one assesses its plausibility as a “modern” metaphysical theory. Gillian’s Hoop is a somewhat wild speculation that one of my daughters came up with a few years ago: we are all avatars in a virtual world created by game players in a higher world who in fact are themselves avatars in a virtual world created by players in a yet higher world … with the upward chain of virtual worlds ultimately linking back to form a hoop! Think about that!

More generally I conjecture that metaphysical systems (e.g. the Roman Catholicism that I grew up with, Gillian’s Hoop, Iamblichus’ system [7], Homer’s) all emerge from the properties of our thought processes. The individual comes up with generalised beliefs and possibilities (e.g. Homer’s flying chariot) and these are socially propagated, revised and pulled together into coherent belief systems. This has little to do with what is there, and much more to do with the processes that modify beliefs. This is not a new idea, of course, but it would be good to ground it in some computational modelling.

Again, there is a project description on Research Gate (Doran 2018b).

Finally, I’m thinking about planning and imagination and their interactions and consequences. I’ve put together a computational version of our basic subjective stream of thoughts (incorporating both directed and associative thinking) that can be used to address imagination and its uses. This is not as difficult to come up with as might at first appear. And then comes a conjecture — given ANY set of beliefs, concepts, memories etc in a particular representation system (cf. AI Knowledge Representation studies) it will be possible to define a (or a few) modification processes that bring about generalisations and imaginations – all needed for planning — which is all about deploying imaginations usefully.

In fact I am tempted to follow my nose and assert that:

Imagination is required for planning (itself required for survival in complex environments) and necessarily leads to “metaphysical” belief systems

Might be a good place to stop – any further and I am really into fantasy land…

Notes

  1. Archived copy of Jim Doran’s University of Essex homepage: https://bit.ly/2Pdk4Nf
  2. Also see an online video of some of the interviews, including with Michie, used as a source for the Lee and Holtzman paper: https://youtu.be/6p3mhkNgRXs
  3. https://en.wikipedia.org/wiki/Rod_Burstall
  4. https://en.wikipedia.org/wiki/Robin_Popplestone
  5. https://www.researchgate.net/profile/Andrew_Orton
  6. See also discussion of the historical context of the Graph Traverser in Russell and Norvig (1995).
  7. https://en.wikipedia.org/wiki/Iamblichus

References

Bailey, Norman T. J. (1967) The simulation of stochastic epidemics in two dimensions. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 4: Biology and Problems of Health, 237–257, University of California Press, Berkeley, Calif. https://bit.ly/2or7sqp

Castelfranchi, C. (2018) For a Science-oriented AI and not Servant of the Business. Powerpoint file available from the author on request at Research Gate: https://www.researchgate.net/profile/Cristiano_Castelfranchi

Doran, J.E. and Michie, D. (1966) Experiments with the Graph Traverser Program. Proceedings of the Royal Society A, 294(1437):235-259.

Doran, J.E. (1968) Experiments with a pleasure seeking automaton. In Machine Intelligence 3 (ed. D. Michie) Edinburgh University Press, pp 195-216.

Doran, J.E. (1969) Planning and generalization in an automaton-environment system. In Machine Intelligence 4 (eds. B. Meltzer and D. Michie) Edinburgh University Press. pp 433-454.

Doran, J.E and Hodson, F.R (1975) Mathematics and Computers in Archaeology. Edinburgh University Press, 1975 [and Harvard University Press, 1976]

Doran, J.E. (1997) Foreknowledge in Artificial Societies. In: Conte R., Hegselmann R., Terna P. (eds) Simulating Social Phenomena. Lecture Notes in Economics and Mathematical Systems, vol 456. Springer, Berlin, Heidelberg. https://bit.ly/2Pf5Onv

Doran, J.E. (1998) Simulating Collective Misbelief. Journal of Artificial Societies and Social Simulation vol. 1, no. 1, http://jasss.soc.surrey.ac.uk/1/1/3.html

Doran, J.E. (2002) Agents and MAS in STaMs. In Foundations and Applications of Multi-Agent Systems: UKMAS Workshop 1996-2000, Selected Papers (eds. M d’Inverno, M Luck, M Fisher, C Preist), Springer Verlag, LNCS 2403, July 2002, pp. 131-151. https://bit.ly/2wsrHYG

Doran, J.E. (2018a) MoHAT — a new AI heuristic search based method of DISCOVERING and USING tractable and reliable agent-based computational models of human society. Research Gate Project: https://bit.ly/2lST35a

Doran, J.E. (2018b) An Investigation of Gillian’s HOOP: a speculation in computer games, virtual reality and METAPHYSICS. Research Gate Project: https://bit.ly/2C990zn

Gilbert, N. and Doran, J.E. eds. (2018) Simulating Societies: The Computer Simulation of Social Phenomena. Routledge Library Editions: Artificial Intelligence, Vol 6, Routledge: London and New York.

Gilbert, N. and Heath, C. (1985) Social Action and Artificial Intelligence. London: Gower.

Lee, J. and Holtzman, G. (1995) 50 Years after breaking the codes: interviews with two of the Bletchley Park scientists. IEEE Annals of the History of Computing, vol. 17, no. 1, pp. 32-43. https://ieeexplore.ieee.org/document/366512/

Newell, A.; Shaw, J.C.; Simon, H.A. (1959) Report on a general problem-solving program. Proceedings of the International Conference on Information Processing. pp. 256–264.

Russell, S. and Norvig, P. (1995) Artificial Intelligence: A Modern Approach. Prentice-Hall, First edition, pp. 86, 115-117.

Samuel, Arthur L. (1959) “Some Studies in Machine Learning Using the Game of Checkers”. IBM Journal of Research and Development. doi:10.1147/rd.441.0206.


Hales, D. and Doran, J. (2018) Agent-Based Modelling Pioneers: An Interview with Jim Doran. Review of Artificial Societies and Social Simulation, 4th September 2018. https://rofasss.org/2018/09/04/dh/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

A bad assumption: a simpler model is more general

By Bruce Edmonds

If one adds some extra detail to a general model it can become more specific – that is, it then only applies to those cases where that particular detail holds. However, the reverse is not true: simplifying a model will not make it more general – it is just that you can imagine it being more general.

To see why this is, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it will now be true at only one point (and only approximately right in a small region around that point) – it is much less general than the original, because it is true for far fewer cases.
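As a worked version of that argument (the particular equation is an invented illustration): suppose the accurate model is

    y = 2x + 3

and we “simplify” it by dropping the variable term, leaving the constant model

    y = 3

The simplified model agrees with the accurate one only where 2x = 0, i.e. at the single point x = 0, and it is approximately right only in a small neighbourhood of that point. Judged against the same set of cases, the simpler model holds far less often: it is less general, not more.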

This is not very surprising – a claim that a model has general validity is a very strong claim – it is unlikely to be achieved by arm-chair reflection or by merely leaving out most of the observed processes.

Only under some special conditions does simplification result in greater generality:

  1. When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when there is some averaging process over a lot of random deviations – see the sketch after this list)
  2. When what is simplified away happens to be constant for all the situations considered (e.g. gravity is always 9.8 m/s^2 downwards)
  3. When you loosen your criteria for being approximately right hugely as you simplify (e.g. move from a requirement that results match some concrete data to using the model as a vague analogy for what is happening)
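To illustrate the first of these conditions, here is a minimal Python sketch (the drift-plus-noise model and all its numbers are invented for illustration). An agent-level model with independent random deviations is compared with the simplified model that leaves the deviations out entirely:

    import random

    N_AGENTS = 10_000
    STEPS = 50
    DRIFT = 0.1

    # "Accurate" agent-level model: each agent's state changes by a common
    # drift plus an independent random deviation at every step.
    agents = [0.0] * N_AGENTS
    for _ in range(STEPS):
        agents = [x + DRIFT + random.gauss(0.0, 1.0) for x in agents]

    population_mean = sum(agents) / N_AGENTS

    # Simplified model: the deviations are averaged away, leaving pure drift.
    simplified = DRIFT * STEPS

    print(f"agent-level population mean: {population_mean:.2f}")  # close to 5.00
    print(f"simplified (drift-only) model: {simplified:.2f}")     # exactly 5.00

For the population mean, the deviations really are irrelevant and the drift-only simplification is safe; for any individual agent’s trajectory, the same simplification would be badly wrong. That is the point: the simplification is only valid relative to particular outcomes of interest, and one has to know which those are.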

In other cases, where you compare like with like (i.e. you don’t move the goalposts, as in (3) above), simplification only works if you happen to know what can safely be simplified away.

Why people think that simplification might lead to generality is somewhat of a mystery. Maybe they assume that the universe has to obey ultimately simple laws, so that simplification is the right direction (but of course, even if this were true, we would not know which way to safely simplify). Maybe they are really thinking about the other direction, slowly becoming more accurate by making the model mirror the target more closely. Maybe this is just a justification for laziness, an excuse for avoiding messy complicated models. Maybe they just associate simple models with physics. Maybe they just hope their simple model is more general.

References

Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 801-822.

Edmonds, B. (2007) Simplicity is Not Truth-Indicative. In Gershenson, C.et al. (2007) Philosophy and Complexity. World Scientific, 65-80.

Edmonds, B. (2017) Different Modelling Purposes. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity – a handbook, 2nd edition. Springer, 39-58.

Edmonds, B. and Moss, S. (2005) From KISS to KIDS – an ‘anti-simplistic’ modelling approach. In P. Davidsson et al. (Eds.): Multi Agent Based Simulation 2004. Springer, Lecture Notes in Artificial Intelligence, 3415:130–144.


Edmonds, B. (2018) A bad assumption: a simpler model is more general. Review of Artificial Societies and Social Simulation, 28th August 2018. https://rofasss.org/2018/08/28/be-2/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Continuous model development: a plea for persistent virtual worlds

By Mike Bithell

Consider the following scenario:-

A policy maker has a new idea and wishes to know what might be the effect of implementing the associated policy or regulation. They ask an agent-based modeller for help. The modeller replies that the situation looks interesting. They will start a project to develop a new model from scratch and it will take three years. The policy maker replies they want the results tomorrow afternoon. On being informed that this is not possible (or that the model will of necessity be bad) the policy maker looks elsewhere.

Clearly this will not do. Yet it seems, at present, that every new problem leads to the development of a new “hero” ABM built from the ground up. I would like to argue that for practical policy problems we need a different approach, one in which persistent models are developed that outlast the life of individual research projects, and are continuously developed, updated and challenged against the kinds of multiple data streams that are now becoming available in the social realm.

By way of comparison, consider the case of global weather and climate models. These are large models developed over many years. They are typically hundreds of thousands of lines of code, and are difficult for any single individual to fully comprehend. Their history goes back to the early 20th century, when Richardson made the first numerical weather forecast for Europe, doing all the calculations by hand. Despite the forecast being incorrect (a better understanding of how to set up initial conditions was needed) he was not deterred: his vision of future forecasts involved a large room full of “computers” (i.e. people), each calculating the numerics for their part of the globe and pooling the results to enable forecasting in real time (Richardson 1922). With the advent of digital computing in the 1950s these models began to be developed systematically, and their skill at representing the weather and climate has undergone continuous improvement (see e.g. Lynch 2006). At the present time there are perhaps a few tens of such models that operate globally, with various strengths and weaknesses. Their development is very far from complete: the systems they represent are complex, and the models very complicated, but they gain their effectiveness through being run continually, tested and re-tested against data, with new components being repeatedly improved and developed by multiple teams over the last 50 years. They are not simple to set up and run, but they persist over time and remain close to the state of the art and to the research community.

I suggest that we need something like this in agent-based modelling: a suite of communally developed models that are not abstract, but that represent substantial real systems, such as large cities, countries or regions; that are persistent and continually developed, on a code base that is largely stable; and, more importantly, that undergo continual testing and validation. At the moment this last part of the loop is not typically closed: models are developed and scenarios proposed, but the model is not then updated in the light of new evidence, and then re-used and extended; the PhD has finished, or the project ended, and the next new problem leads to a new model. Persistent models, being repeatedly run by many, would gradually have bugs and inconsistencies discovered and corrected (although new ones would also inevitably be introduced), could be very complicated because continually tested, would be continually available for interpretation and development of understanding, and would become steadily better documented. Accumulated sets of results would show their strengths and weaknesses for particular kinds of issues, and where more work was most urgently needed.

In this way when, say, the Mayor of London wanted to know the effect of a given policy, a set of state-of-the-art models of London would already exist which could be used to test out the policy given the best available current knowledge. The city model would be embedded in a larger model or models of the UK, or even the EU, so as to be sure that boundary conditions would not be a problem, and to see what the wider unanticipated consequences might be. The output from such models might be very uncertain: “forecasts” (saying what will happen, as opposed to what kind of things might happen) would not be the goal, but the history of repeated testing and output would demonstrate what level of confidence was warranted in the types of behaviour displayed by the results: preferably this would at least be better than random guesswork. Nor would such a set of models rule out or substitute for other kinds of model: idealised, theoretical, abstract and applied case studies would still be needed to develop understanding and new ideas.

This kind of development of models for policy is already taking place to an extent (see e.g. Waldrop 2018), but is currently very limited. However, in the face of current urgent and pressing problems, such as climate change, ecosystem destruction, global financial insecurity, continuing widespread poverty and failure to approach sustainable development goals in any meaningful way, the ad-hoc make-a-new-model-every-time approach is inadequate. To build confidence in ABM as a tool that can be relied on for real-world policy, we need persistent virtual worlds.

References

Lynch, P. (2006). The Emergence of Numerical Weather Prediction: Richardson’s Dream. Cambridge: Cambridge University Press.

Richardson, L. F. (1922). Weather Prediction by Numerical Process (reprinted 2007). Cambridge: Cambridge University Press.

Waldrop, M. (2018). Free Agents. Science, 360(6385), 144-147. DOI: 10.1126/science.360.6385.144


Bithell, M. (2018) Continuous model development: a plea for persistent virtual worlds, Review of Artificial Societies and Social Simulation, 22nd August 2018. https://rofasss.org/2018/08/22/mb


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

Why the social simulation community should tackle prediction

By Gary Polhill

(Part of the Prediction-Thread)

On 4 May 2002, Scott Moss (2002) reported in the Proceedings of the National Academy of Sciences of the United States of America that he had recently approached the e-mail discussion list of the International Institute of Forecasters to ask whether anyone had an example of a correct econometric forecast of an extreme event. None of the respondents were able to provide a satisfactory answer.

As reported by Hassan et al. (2013), on 28 April 2009, Scott Moss asked a similar question of the members of the SIMSOC mailing list: “Does anyone know of a correct, real-time, model-based policy-impact forecast?” [1] No-one responded with such an example, and Hassan et al. note that the ensuing discussion questioned why we are bothering with agent-based models (ABMs). Papers such as Epstein’s (2008) suggest this is not an uncommon conversation.

On 23 March 2018, I wrote an email [2] to the SIMSOC mailing list asking for expressions of interest in a prediction competition to be held at the Social Simulation Conference in Stockholm in 2018. I received two such expressions, and consequently announced on 10 May 2018 that the competition would go ahead. [3] By 22 May 2018, however, one of the two had pulled out because of lack of data, and I contacted the list to say the competition would be replaced with a workshop. [4]

Why the problem with prediction? As Edmonds (2017), discussing different modelling purposes, says, prediction is extremely challenging in the type of complex social system in which an agent-based model would justifiably be applied. He doesn’t go as far as stating that prediction is impossible; but with Aodha (2017, p. 819) he says, in the final chapter of the same book, that modellers should “stop using the word predict” and policymakers should “stop expecting the word predict”. At a minimum, this suggests a strong aversion to prediction within the social simulation community.

Nagel (1979) gives attention to why prediction is hard in the social sciences. Not least amongst the reasons offered is the fact that social systems may adapt according to predictions made – whether those predictions are right or wrong. Nagel gives two examples of this: suicidal predictions are those in which a predicted event does not happen because steps are taken to avert the predicted event; self-fulfilling prophecies are events that occur largely because they have been predicted, but arguably would not have occurred otherwise.

The advent of empirical ABM, as hailed by Janssen and Ostrom’s (2006) editorial introduction to a special issue of Ecology and Society on the subject, naturally raises the question of using ABMs to make predictions, at least insofar as “predict” in this context means using an ABM to generate new knowledge about the empirical world that can be tested by observing it. There are various reasons why developing ABMs with the purpose of prediction is a goal worth pursuing. Three of them are:

  • Developing predictions, Edmonds (2017) notes, is an iterative process, requiring testing and adapting a model against various data. Engaging with such a process with ABMs offers vital opportunities to learn and develop methodology, not least on the collection and use of data in ABMs, but also in areas such as model design, calibration, validation and sensitivity analysis. We should expect, or at least be prepared for, our predictions to fail often. Then, the value is in what we learn from these failures, both about the systems we are modelling, and about the approach taken.
  • There is undeniably a demand for predictions in complex social systems. That demand will not go away just because a small group of people claim that prediction is impossible. A key question is how we want that demand to be met. Presumably at least some of the people engaged in empirical ABM have chosen an agent-based approach over simpler, more established alternatives because they believe ABMs to be sufficiently better to be worth the extra effort of their development. We don’t know whether ABMs can be better at prediction, but such knowledge would at least be useful.
  • Edmonds (2017) says that predictions should be reliable and useful. Reliability pertains both to having a reasonable comprehension of the conditions of application of the model, and to the predictions being consistently right when the conditions apply. Usefulness means that the knowledge the prediction supplies is of value with respect to its accuracy. For example, a weather forecast stating that tomorrow the mean temperature on the Earth’s surface will be between –100 and +100 Celsius is not especially useful (at least to its inhabitants). However, a more general point is that we are accustomed to predictions being phrased in particular ways because of the methods used to generate them. Attempting prediction using ABM may lead to a situation in which we develop different language around prediction, which in turn could have added benefits: (a) gaining a better understanding of what ABM offers that other approaches do not; (b) managing the expectations of those who demand predictions regarding what predictions should look like.

Prediction is not the only reason to engage in a modelling exercise. However, in future if the social simulation community is asked for an example of a correct prediction of an ABM, it would be desirable to be able to point to a body of research and methodology that has been developed as a result of trying to achieve this aim, and ideally to be able to supply a number of examples of success. This would be better than a fraught conversation about the point of modelling, and consequent attempts to divert attention to all of the other reasons to build an ABM that aren’t to do with prediction. To this end, it would be good if the social simulation community embraced the challenge, and provided a supportive environment to those with the courage to take it on.

Notes

  1. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;fb704db4.0904 (Cited in Hassan et al. (2013))
  2. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;14ecabbf.1803
  3. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=SIMSOC;1802c445.1805
  4. https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=simsoc;ffe62b05.1805

References

Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 801-822.

Edmonds, B. (2017) Different modelling purposes. In Edmonds, B. and Meyer, R. (eds.) Simulating Social Complexity. Second Edition. Springer. pp. 39-58.

Epstein, J. (2008) Why model? Journal of Artificial Societies and Social Simulation 11 (4), 12. http://jasss.soc.surrey.ac.uk/11/4/12.html

Hassan, S., Arroyo, J., Galán, J. M., Antunes, L. and Pavón, J. (2013) Asking the oracle: introducing forecasting principles into agent-based modelling. Journal of Artificial Societies and Social Simulation 16 (3), 13. http://jasss.soc.surrey.ac.uk/16/3/13.html

Janssen, M. A. and Ostrom, E. (2006) Empirically based, agent-based models. Ecology and Society 11 (2), 37. http://www.ecologyandsociety.org/vol11/iss2/art37/

Moss, S. (2002) Policy analysis from first principles. Proceedings of the National Academy of Sciences of the United States of America 99 (suppl. 3), 7267-7274. http://doi.org/10.1073/pnas.092080699

Nagel, E. (1979) The Structure of Science: Problems in the Logic of Scientific Explanation. Hackett Publishing Company.


Polhill, G. (2018) Why the social simulation community should tackle prediction, Review of Artificial Societies and Social Simulation, 6th August 2018. https://rofasss.org/2018/08/06/gp/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0) 

RofASSS is also for discussion about RofASSS!

Comments concerning the philosophy, working, structure or editorial decisions of RofASSS may be discussed on the site. Comments intended for this purpose should be submitted using the normal submission page, by using the “Comment on RofASSS” option there. These are permanent and public, but not intended to be cited.

There is a paper describing RofASSS, its motivation, structure etc. here: The proposal for RofASSS. Comments about this paper are welcome.

