
The Challenge of Validation

By Martin Neumann

Introduction

In November 2021, Chattoe-Brown initiated a discussion on validation on the SimSoc mailing list which generated considerable traffic. The interest in this topic revealed that empirical validation poses a notorious challenge for agent-based modelling. The discussion raised many important points and questions, and even motivated a “smalltalk about big things” at the Social Simulation Fest 2022. Many contributors highlighted that validation cannot be reduced to a comparison of numbers between simulated and empirical data. Without attempting a comprehensive review of this insightful discussion, one point that was emphasized is that different kinds of science call for different kinds of quality criteria. Prediction may be one criterion that is particularly important in statistics, but it is not sufficient for agent-based social simulation. For instance, agent-based modelling is specifically suited to studying complex systems and turbulent phenomena. Modelling also enables the study of alternative and counterfactual scenarios, a use that departs from the paradigm of prediction as the quality criterion. Besides output validation, other quality criteria for agent-based models include, for instance, input validation and process validation, reflecting the realism of the initialization and of the mechanisms implemented in the model.
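
To make concrete what a purely numerical output validation amounts to (the kind of comparison the discussion regards as insufficient on its own), the following minimal Python sketch compares summary statistics of a simulated output series with empirical observations. All names and data here are hypothetical illustrations, not taken from any model discussed in the text.

```python
import numpy as np

def output_validation_distance(simulated, empirical):
    """Naive output validation: distance between summary statistics
    (mean and standard deviation) of simulated and empirical series.
    A small distance only shows agreement in these aggregates; it says
    nothing about input or process validity."""
    sim_stats = np.array([np.mean(simulated), np.std(simulated)])
    emp_stats = np.array([np.mean(empirical), np.std(empirical)])
    return float(np.linalg.norm(sim_stats - emp_stats))

# Hypothetical example data
rng = np.random.default_rng(seed=42)
simulated_series = rng.normal(loc=10.0, scale=2.0, size=200)
empirical_series = rng.normal(loc=10.4, scale=2.3, size=200)

print(output_validation_distance(simulated_series, empirical_series))
```

Even a distance of zero in such a comparison would establish only output agreement, leaving the realism of the initialization and of the implemented mechanisms untouched.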

Qualitative validation procedures

This brief introduction is by no means an exhaustive summary of the broad discussion on validation. Even the measurement of empirical data can be called into question. Less discussed, however, has been the role that qualitative methods could potentially play in this endeavor. In fact, there has been a long debate on this issue in the community of qualitative social research as well. Like agent-based social simulation, qualitative methods are also challenged by the notion of validation. It has been noted that even the vocabulary used in attempts to ensure scientific rigor has its background in a positivist understanding of science, whereas qualitative researchers often take up constructivist or poststructuralist positions (Cho and Trent 2006). For this reason, qualitative research sometimes prefers the notion of trustworthiness (Lincoln and Guba 1985) to that of validation. In an influential article (cited more than 17,000 times on Google Scholar as of May 2023), Creswell and Miller (2000) distinguish between a postpositivist, a constructivist, and a critical paradigm, as well as between the lens of the researcher, the lens of the study participants, and the lens of external people, and assign different validity procedures for qualitative research to the combinations of these paradigms and lenses (Table 1).

Paradigm / lens            | Postpositivist   | Constructivist           | Critical
Lens of the researcher     | Triangulation    | Disconfirming evidence   | Reflexivity
Lens of study participants | Member checking  | Engagement in the field  | Collaboration
Lens of external people    | Audit trail      | Thick description        | Peer debriefing

Table 1. Validity procedures according to Creswell and Miller (2000).

While it remains contested whether the validation procedure depends on the research design, this is at least one source of the diversity of accounts. Others differentiate between transactional and transformational validity (Cho and Trent 2006). The former concentrates on formal techniques in the research process for avoiding misunderstandings; such procedures include, for instance, member checking. The latter perceives research as an emancipatory process on behalf of the research subjects. This goes along with questioning the notion of absolute truth in the human sciences, which calls for alternative sources of scientific legitimacy, such as the emancipation of the researched subjects. This concept of emancipatory research resonates with participatory modelling approaches. In fact, some of these procedures are well known in participatory modelling, even though the terminology differs. The participatory approach originates from research on resource management (Pahl-Wostl 2002). For this purpose, integrated assessment models have been developed, inspired by the concept of post-normal science (Funtowicz and Ravetz 1993). Post-normal science emphasizes the communication of uncertainty, the justification of practice, and complexity. This approach recognizes the legitimacy of multiple perspectives on an issue, with respect both to multiple scientific disciplines and to laypeople involved in the issue. For instance, Wynne (1992) analyzed the knowledge claims of sheep farmers in their interaction with scientists and authorities. In such an extended peer community of a citizen science (Stilgoe 2009), laypeople from the affected communities play an active role in knowledge production, not only because of moral principles of fairness but also to increase the quality of science (Fjelland 2016). One of the most well-known participatory approaches is so-called companion modelling (ComMod), developed at CIRAD, a French agricultural research centre for international development. The term companion modelling was originally coined by Barreteau et al. (2003) and has since been developed into a research paradigm for decision making in complex situations in support of sustainable development (Étienne 2014). In fact, these approaches have a strong emancipatory component and rely on collaboration and member checking to ensure the resonance and practicality of the models (Tesfatsion 2021).

An interpretive validation procedure

While the participatory approaches show a convergence of methods between modelling and qualitative research, even though the terminology differs, in the following a further approach to examining the trustworthiness of simulation scenarios is introduced that has not been considered so far: interpretive methods from qualitative research. A strong feature of agent-based modelling is that it allows for studying “what-if” questions. The ex-ante investigation of possible alternative futures enables the identification of options for action, but also the detection of early warning signals of undesired developments. For this purpose, counterfactual scenarios are an important feature of agent-based modelling. It is important to note in this context that counterfactuals, by definition, do not match empirical data. In the following, it is suggested that the trustworthiness of counterfactual scenarios be examined using a method from objective hermeneutics (Oevermann 2002), so-called sequence analysis (Kurt and Herbrik 2014). In the terms of Creswell and Miller (2000), this examination of trustworthiness operates from the lens of the researcher within a constructivist paradigm. For this purpose, simulation results have to be transformed into narrative scenarios, a method described in Lotzmann and Neumann (2017).

In the social sciences, sequence analysis is regarded as the central instrument of the hermeneutic interpretation of meaning. It is “a method of interpretation that attempts to reconstruct the meaning of any kind of human action sequence by sequence, i.e. sense unit by sense unit […]. Sequence analysis is guided by the assumption that in the succession of actions […] contexts of meaning are realized …” (Kurt and Herbrik 2014: 281). A first important rule is the sequential procedure: the interpretation takes place in the sequence that the protocol to be analyzed itself specifies. It is assumed that each sequence point closes some possibilities and opens up others. Practically, this is done by sketching a series of stories in which the respective sequence passage would make sense. The basic question that can be asked of each sequence passage can be summarized as: “Consider who might have addressed this utterance to whom, under what conditions, with what justification, and for what purpose?” (Schneider 1995: 140). The answers to these questions are the thought-experimentally designed stories. These stories are examined for commonalities and differences and condensed into readings. Through the generation of readings, certain possibilities of connection to the interpreted sequence passage become visible. In this sense, each step of interpretation sequentially makes some spaces of possibility visible and at the same time closes others.

In the following, it is argued that this method enables an examination of the trustworthiness of counterfactual scenarios, using the example of a simulated counterfactual scenario of successful non-violent conflict regulation within a criminal group: ‘They had a meeting at their lawyer’s office to assess the value of his investment, and Achim complied with the request. Thus, trust was restored, and the group continued their criminal activities’ (names are fictitious). Following Dickel and Neumann (2021), it is argued that this is a meaningful story. It is an example of how the linking of the algorithmic rules generates something new from the individual parts of the empirical material. It also shows how the individual pieces of the puzzle provided by the empirical material are put together to form a collage that tells a story that makes sense. A sequence is produced that can be interpreted in a meaningful way. It should be noted, however, that this is a counterfactual sequence. In fact, a significantly different sequence is found in the empirical data: ‘Achim was ordered to his lawyer’s office. Instead of his lawyer, however, Toby and three thugs were waiting for him. They forced him to his knees and pointed a machine gun at his stomach’. This was by no means a non-violent form of conflict regulation. However, after Achim (in the real case) was forced to his knees by three thugs and threatened with a machine gun, the way to non-violent conflict regulation was hardly open anymore. The sequence generated by the simulation, on the other hand, shows a way in which the violence could have been avoided – a way that was not taken in reality. Is this a programming error in the modelling? On the contrary, it is argued that it demonstrates the trustworthiness of the counterfactual scenario. From a methodological point of view, a comparison of the factual with the counterfactual is instructive: factually, Achim had a machine gun pointed at his stomach; counterfactually, Achim agreed on a settlement. From a sequence-analytic perspective, this is a logical conclusion to a story, even if it does not apply to the factual course of events. Thus, the sequence analysis shows that the simulation has decided between two possibilities – a branching of paths in which certain possibilities open and others close.

The trustworthiness of a counterfactual narrative is shown by whether 1) a meaningful case structure can be generated at all, or whether the narrative reveals itself as an absurd series of sequence passages from which no rules of action can be reconstructed; and 2) the case structure withstands a confrontation with the ‘external context’ and can be interpreted as a plausible structural variation. If both conditions are met, scenarios can be read as explorations of a space of cultural possibilities, or of a cultural horizon (in this case, a specific criminal milieu). The interpretation of the counterfactual scenario thereby provides a means of assessing the trustworthiness of the simulation.

References

Barreteau, O., et al. (2003). Our companion modelling approach. Journal of Artificial Societies and Social Simulation 6(2): 1. https://www.jasss.org/6/2/1.html

Cho, J., Trent, A. (2006). Validity in qualitative research revisited. Qualitative Research 6(3), 319-340. https://doi.org/10.1177/1468794106065006

Creswell, J., Miller, D. (2000). Determining validity in qualitative research. Theory into Practice 39(3), 124-130. https://doi.org/10.1207/s15430421tip3903_2

Dickel, S., Neumann, M. (2021). Hermeneutik sozialer Simulationen: Zur Interpretation digital erzeugter Narrative. Sozialer Sinn 22(2): 252-287. https://doi.org/10.1515/sosi-2021-0013

Étienne, M. (Ed.) (2014). Companion Modelling: A Participatory Approach to Support Sustainable Development. Springer, Dordrecht. https://link.springer.com/book/10.1007/978-94-017-8557-0

Fjelland, R. (2016). When Laypeople are Right and Experts are Wrong: Lessons from Love Canal. International Journal for Philosophy of Chemistry 22(1): 105–125. https://www.hyle.org/journal/issues/22-1/fjelland.pdf

Funtowicz, S., Ravetz, J. (1993). Science for the post-normal age. Futures 25(7): 739-755. https://doi.org/10.1016/0016-3287(93)90022-L

Kurt, R.; Herbrik, R. (2014). Sozialwissenschaftliche Hermeneutik und hermeneutische Wissenssoziologie. In: Baur, N.; Blasius, J. (eds.): Handbuch Methoden der empirischen Sozialforschung, pp. 473–491. Springer VS, Wiesbaden. https://link.springer.com/chapter/10.1007/978-3-658-21308-4_37

Lincoln, Y.S., Guba, E.G. (1985). Naturalistic Inquiry. Sage, Beverly Hills.

Lotzmann, U., Neumann, M. (2017). Simulation for interpretation. A methodology for growing virtual cultures. Journal of Artificial Societies and Social Simulation 20(3): 13. https://www.jasss.org/20/3/13.html

Oevermann, U. (2002). Klinische Soziologie auf der Basis der Methodologie der objektiven Hermeneutik. Manifest der objektiv hermeneutischen Sozialforschung. http://www.ihsk.de/publikationen/Ulrich_Oevermann-Manifest_der_objektiv_hermeneutischen_Sozialforschung.pdf (accessed 1 March 2020).

Pahl-Wostl, C. (2002). Participative and Stakeholder-Based Policy Design, Evaluation and Modeling Processes. Integrated Assessment 3(1): 3-14. https://doi.org/10.1076/iaij.3.1.3.7409

Schneider, W. L. (1995). Objektive Hermeneutik als Forschungsmethode der Systemtheorie. Soziale Systeme 1(1): 135–158.

Stilgoe, J. (2009). Citizen Scientists: Reconnecting Science with Civil Society. Demos, London.

Tesfatsion, L. (2021). “Agent-Based Modeling: The Right Mathematics for Social Science?,” Keynote address, 16th Annual Social Simulation Conference (virtual), sponsored by the European Social Simulation Association (ESSA), September 20-24, 2021.

Wynne, B. (1992). Misunderstood misunderstanding: social identities and public uptake of science. Public Understanding of Science 1(3): 281–304.


Neumann, M. (2023) The Challenge of Validation. Review of Artificial Societies and Social Simulation, 18th Apr 2023. https://rofasss.org/2023/04/18/ChallengeValidation


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)

The Poverty of Suggestivism – the dangers of “suggests that” modelling

By Bruce Edmonds

Vagueness and refutation

A model[1] is basically composed of two parts (Zeigler 1976, Wartofsky 1979):

  1. A set of entities (such as mathematical equations, logical rules, computer code etc.) which can be used to make some inferences as to the consequences of that set (usually in conjunction with some data and parameter values)
  2. A mapping from this set to what it aims to represent – what the bits mean

Whilst a lot of attention has been paid to the internal rigour of the set of entities and the inferences that are made from them (1), the mapping to what they represent (2) has often been left implicit or incompletely described – sometimes only indicated by the labels given to its parts. The result is a model that relates only vaguely to its target, suggesting its properties analogically. There is no well-defined way in which the model is to be applied to anything observed; instead, a new mapping is invented each time it is used to think about a particular case. I call this way of modelling “Suggestivism”, because the model “suggests” things about what is being modelled.
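
As a minimal sketch of this two-part definition (using hypothetical names and a toy inference rule that are not from the cited sources), the following Python fragment makes the mapping (2) an explicit, documented part of the model rather than something left to labels:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Model:
    # (1) Entities from which inferences can be made, here reduced to a
    #     single function from parameter values to consequences.
    infer: Callable[[Dict[str, float]], Dict[str, float]]
    # (2) An explicit mapping saying what each part of (1) represents;
    #     without it, the object does not represent anything.
    mapping: Dict[str, str]

# Hypothetical toy example: a one-equation adoption "model".
toy = Model(
    infer=lambda p: {"adopters": p["population"] * p["uptake_rate"]},
    mapping={
        "population": "number of households in the observed region",
        "uptake_rate": "empirically estimated yearly uptake fraction",
        "adopters": "predicted number of adopting households",
    },
)

print(toy.infer({"population": 1000.0, "uptake_rate": 0.1}))
```

In these terms, a suggestivist model is one whose mapping is left vague or is re-invented anew for each case it is applied to.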

This is partly a recapitulation of Popper’s critique of vague theories in his book “The Poverty of Historicism” (1957). He characterised such theories as “irrefutable”, because whatever the facts, these theories could be made to fit them. Irrefutability is an indicator of a lack of precise mapping to reality – such vagueness makes refutation very hard. However, it is only an indicator; there may be reasons other than vagueness why a theory cannot be tested – it is the disconnection from a well-defined empirical reference that is the issue here.

Some might go as far as suggesting that any model or theory that is not refutable is “unscientific”, but this goes too far, implying a very restricted definition of what ‘science’ is. We need analogies to think about what we are doing and to gain insight into what we are studying, e.g. (Hartmann 1997) – for humans they are unavoidable, ‘baked’ into the way language works (Lakoff 1987). A model might make a set of ideas clear and help map out the consequences of a set of assumptions/structures/processes. Many of these suggestivist models relate to a set of ideas, and it is the ideas that relate to what is observed (albeit informally) (Edmonds 2001). However, such models do not capture anything reliable about what they refer to, and in that sense are not part of the set of established statements and theories that forms the core of science (Arnold 2014).

The dangers of suggestivist modelling

As above, there are valid uses of abstract or theoretical modelling where this is explicitly acknowledged and where no conclusions about observed phenomena are made. So what are the dangers of suggestivist modelling – why am I making such a fuss about it?

Firstly, people often seem to confuse a model as an analogy – a way of thinking about stuff – with a model that tells us something reliable about what we are studying. Thus they give undue weight to the analyses of abstract models that are, in fact, just thought experiments. Making models is a very intimate way of theorising – one spends an extended period of time interacting with one’s model: developing, checking, analysing etc. The result is a particularly strong version of “Kuhnian Spectacles” (Kuhn 1962), causing us to see the world through our model for weeks afterwards. Under this strong influence it is natural to confuse what we can reliably infer about the world with how we are currently perceiving/thinking about it. Good scientists should then pause and wait for this effect to wear off so that they can effectively critique what they have done, its limitations and its implications. However, in the rush to get their work out, modellers often do not do this, resulting in a sloppy set of suggestive interpretations of their modelling.

Secondly, empirical modelling is hard. It is far easier (and, frankly, more fun) to play with non-empirical models. A scientific culture that treats suggestivist modelling as substantial progress and significantly rewards modellers who do it will effectively divert a lot of modelling effort in this direction. Chattoe-Brown (2018) displayed evidence of this in his survey of opinion dynamics models – abstract, suggestivist models received far more reward (in terms of citations) than those that tried to relate to empirical data in a direct manner. Abstract modelling has a role in science, but if it is easier and more rewarding then the field will become unbalanced. It may give the impression of progress but not deliver on this impression. In a more mature science, researchers working on measurement methods (steps from observation to models) and collecting good data are as important as the theorists (Moss 1998).

Thirdly, it is hard to judge suggestivist models. Given that their connection to the modelling target is vague, there cannot be any decisive test of their success. Good modellers should declare the exact purpose of their model, e.g. that it is analogical or merely exploring the consequences of theory (Edmonds et al. 2019), but then accept the consequences of this choice – namely, that it excludes drawing conclusions about the observed world. If it is a theoretical exploration then the comprehensiveness and scope of the exploration and the applicability of the model can be judged, but if the model is analogical or illustrative then this is harder. Whilst one model may suggest X, another may suggest the opposite. It is quite easy to fix a model to get the outcomes one wants. Clearly, if a model makes startling suggestions – illustrating totally new ideas or providing a counter-example to widely held assumptions – then this helps science by widening the pool of theories or hypotheses that are considered. However, most suggestivist modelling does not do this.

Fourthly, their sheer flexibility of application causes problems – if one works hard enough, one can invent mappings to a wide range of cases; the limits are only those of our imagination. In effect, having a vague mapping from model to what it models adds huge flexibility, in a similar way to having a large number of free (non-empirical) parameters. This flexibility gives an impression of generality, and many desire simple and general models for complex phenomena. However, this generality is illusory because a different mapping is needed to make the model apply to each case. Given the above (1)+(2) definition of a model, this means that it is in fact a different model for each case – what a model refers to is part of the model. The same flexibility makes such models impossible to refute, since one can just adjust the mapping to save them. The apparent generality and lack of refutation mean that such models hang around in the literature, due to their surface attractiveness.
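
The analogy with free parameters can be illustrated with a small, hypothetical Python sketch (not from the original text): a sufficiently flexible polynomial can be made to fit pure noise almost perfectly, giving an illusory impression of explanatory success in much the same way that a freely re-invented mapping does.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(seed=1)
x = np.linspace(0.0, 1.0, 20)
y = rng.normal(size=20)  # pure noise: there is nothing to "explain"

# A model with many free parameters (degree-15 polynomial) fits the noise
# far better than a constrained one (degree-1), but the apparent fit tells
# us nothing reliable about the process that generated y.
for degree in (1, 15):
    fitted = Polynomial.fit(x, y, deg=degree)
    sum_sq_residuals = float(np.sum((y - fitted(x)) ** 2))
    print(degree, round(sum_sq_residuals, 4))
```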

Finally, these kinds of model are hugely influential beyond the community of modellers, reaching the wider public including policy actors. Narratives that start in abstract models make their way out and can be very influential (Vranckx 1999). Despite the lack of a rigorous mapping from model to reality, suggestivist models look impressive and scientific. For example, very abstract models from the Neo-Classical ‘Chicago School’ of economists supported narratives about the optimal efficiency of markets, leading to a reluctance to regulate them (Krugman 2009). A lack of regulation seemed to be one of the factors behind the 2007/8 economic crash (Baily et al. 2008). Modellers may understand that other modellers get over-enthusiastic and over-interpret their models, but others may not. It is the duty of modellers to give an accurate impression of the reliability of any modelling results and not to over-hype them.

How to recognise a suggestivist model

It can be hard to untangle how empirically vague a model is, because many descriptions of modelling work do not focus on making the mapping to what is represented precise. The reasons for this are various, for example: the modeller might be conflating reality with what is in the model in their mind; the researcher might be new to modelling and not yet have decided what the purpose of their model is; the modeller might be over-keen to establish the importance of their work and so hypes the motivation and conclusions; they might simply not have got around to thinking enough about the relationship between their model and what it might represent; or they might not have bothered to make the relationship explicit in their description. Whatever the reason, the reader of any description of such work is often left with an archaeological problem: trying to unearth what the relationship might be, based on indirect clues only. The only way to know for certain is to take a case one knows about and try to apply the model to it, but this is a time-consuming process and relies upon having a case with suitable data available. However, there are some indicators, albeit fallible ones, including the following.

  • A relatively simple model is interpreted as explaining a wide range of observed, complex phenomena
  • No data from an observed case study is compared to data from the model (often no data is brought in at all, merely abstract observations) – despite this, conclusions about some observed phenomena are made
  • The purpose of the model is not explicitly declared
  • The language of the paper seems to conflate talking about the model with what is being modelled
  • In the paper there are sudden abstraction ‘jumps’ between the motivation and the description of the model and back again to the interpretation of the results in terms of that motivation. The abstraction jumps involved are large and justified by some a priori theory or modelling precedents rather than evidence.

How to avoid suggestivist modelling

How to avoid the dangers of suggestivist modelling should be clear from the above discussion, but I will make the main recommendations explicit here.

  • Be clear about the model purpose – that is, what the model aims to achieve – which indicates how it should be judged by others (Edmonds et al. 2019)
  • Do not make any conclusions about the real world if you have not related the model to any data
  • Do not make any policy conclusions – things that might affect other people’s lives – without at least some independent validation of the model outcomes
  • Document how a model relates (or should relate) to data, the nature of that data and maybe even the process whereby that data should be obtained (Achter et al 2019)
  • Be as explicit as possible about what kinds of phenomena the model applies to – the limits of its scope
  • Keep the language about the model and what is being modelled distinct – for any statement it should be clear whether it is talking about the model or what it models (Edmonds 2020)
  • Highlight any bold assumptions in the specification of the model or describe what empirical foundation there is for them – be honest about these

Conclusion

Models can serve many different purposes (Epstein 2008). This is fine as long as the purpose of a model is always made clear, and model results are not interpreted further than their established purpose allows. Research which gives the impression that analogical, illustrative or theoretical modelling can tell us anything reliable about observed complex phenomena is not only sloppy science, but can have a deleterious impact – giving an impression of progress whilst diverting attention from empirically reliable work. Like a bad investment: if it looks too good and too easy to be true, it probably isn’t.

Notes

[1] We often use the word “model” in a lazy way to indicate (1) rather than (1)+(2) in this definition, but a set of entities without any meaning or mapping to anything else is not a model, as it does not represent anything. For example, a random set of equations or program instructions does not make a model.

Acknowledgements

Bruce Edmonds is supported as part of the ESRC-funded, UK part of the “ToRealSim” project, grant number ES/S015159/1.

References

Achter, S., Borit, M., Chattoe-Brown, E., Palaretti, C. & Siebers, P.-O. (2019) Cherchez Le RAT: A Proposed Plan for Augmenting Rigour and Transparency of Data Use in ABM. Review of Artificial Societies and Social Simulation, 4th June 2019. https://rofasss.org/2019/06/04/rat/

Arnold, E. (2014). What’s wrong with social simulations?. The Monist, 97(3), 359-377. DOI:10.5840/monist201497323

Baily, M. N., Litan, R. E., & Johnson, M. S. (2008). The origins of the financial crisis. Fixing Finance Series – Paper 3, The Brookings Institution. https://www.brookings.edu/wp-content/uploads/2016/06/11_origins_crisis_baily_litan.pdf

Chattoe-Brown, E. (2018) What is the earliest example of a social science simulation (that is nonetheless arguably an ABM) and shows real and simulated data in the same figure or table? Review of Artificial Societies and Social Simulation, 11th June 2018. https://rofasss.org/2018/06/11/ecb/

Edmonds, B. (2001) The Use of Models – making MABS actually work. In. Moss, S. and Davidsson, P. (eds.), Multi Agent Based Simulation, Lecture Notes in Artificial Intelligence, 1979:15-32. http://cfpm.org/cpmrep74.html

Edmonds, B. (2020) Basic Modelling Hygiene – keep descriptions about models and what they model clearly distinct. Review of Artificial Societies and Social Simulation, 22nd May 2020. https://rofasss.org/2020/05/22/modelling-hygiene/

Edmonds, B., le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root H. & Squazzoni. F. (2019) Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3):6. http://jasss.soc.surrey.ac.uk/22/3/6.html.

Epstein, J. M. (2008). Why model? Journal of Artificial Societies and Social Simulation, 11(4), 12. https://jasss.soc.surrey.ac.uk/11/4/12.html

Hartmann, S. (1997): Modelling and the Aims of Science. In: Weingartner, P. et al (ed.) : The Role of Pragmatics in Contemporary Philosophy: Contributions of the Austrian Ludwig Wittgenstein Society. Vol. 5. Wien und Kirchberg: Digi-Buch. pp. 380-385. https://epub.ub.uni-muenchen.de/25393/

Krugman, P. (2009) How Did Economists Get It So Wrong? New York Times, Sept. 2nd 2009. https://www.nytimes.com/2009/09/06/magazine/06Economic-t.html

Kuhn, T.S. (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Lakoff, G. (1987) Women, fire, and dangerous things. University of Chicago Press, Chicago.

Morgan, M. S., & Morrison, M. (1999). Models as mediators. Cambridge: Cambridge University Press.

Moss, S. (1998) Social Simulation Models and Reality: Three Approaches. Centre for Policy Modelling  Discussion Paper: CPM-98-35, http://cfpm.org/cpmrep35.html

Popper, K. (1957). The poverty of historicism. Routledge.

Vranckx, An. (1999) Science, Fiction & the Appeal of Complexity. In Aerts, Diederik, Serge Gutwirth, Sonja Smets, and Luk Van Langehove, (eds.) Science, Technology, and Social Change: The Orange Book of “Einstein Meets Magritte.” Brussels: Vrije Universiteit Brussel; Dordrecht: Kluwer., pp. 283–301.

Wartofsky, M. W. (1979). The model muddle: Proposals for an immodest realism. In Models (pp. 1-11). Springer, Dordrecht.

Zeigler, B. P. (1976). Theory of Modeling and Simulation. Wiley Interscience, New York.


Edmonds, B. (2022) The Poverty of Suggestivism – the dangers of "suggests that" modelling. Review of Artificial Societies and Social Simulation, 28th Feb 2022. https://rofasss.org/2022/02/28/poverty-suggestivism


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)