
The Danger of too much Compassion – how modellers can easily deceive themselves

By Andreas Tolk

(A contribution to the JASSS-Covid19-Thread)

In 2017, Shermer observed that in cases where moral and epistemological considerations are deeply intertwined, it is human nature to cherry-pick the results and data that support the current world view (Shermer 2017). In other words, we tend to look for data justifying our moral convictions. Simulations face the same inherent challenge: we tend to favour our underlying assumptions and biases – often unconsciously – when we implement our simulation systems. If others then use such a simulation system in support of predictive analysis, we are in danger of philosophical regress: a series of statements in which a logical procedure is continually reapplied to its own result without ever approaching a useful conclusion. As I stated in an earlier paper (Tolk 2017):

The danger of the simulationist’s regress is that such predictions are made by the theory, and then the implementation of the theory in form of the simulation system is used to conduct a simulation experiment that is then used as supporting evidence. This, however, is exactly the regress we wanted to avoid: we test a hypothesis by implementing it as a simulation, and then use the simulated data in lieu of empirical data as supporting evidence justifying the propositions: we create a series of statements – the theory, the simulation, and the resulting simulated data – in which a logical procedure is continually reapplied to its own result….

In particular in cases where moral and epistemological considerations are deeply intertwined, it is human nature to cherry-pick the results and data that support the current world view (Shermer 2017). Simulationists are not immune to this, and as they can implement their beliefs into a complex simulation system that now can be used by others to gain quasi-empirical numerical insight into the behavior of the described complex system, their implemented world view can easily be confused with a surrogate for real world experiments.

I am afraid that we may have fallen into such a fallacy in some of our efforts to use simulation to better understand the Covid-19 crisis and what we can do about it. This is without doubt a moral problem, as our recommendations are ultimately about human lives! We assumed that the recommendations of the medical community for social distancing and other non-pharmaceutical interventions (NPIs) are the best we can do, as they save many lives. So we built our models to clearly demonstrate the benefits of social distancing and other NPIs, which leads to the danger of regress: we assume that NPIs are the best action, so we write a simulation to show that NPIs are the best action, and then we use these simulations to prove that NPIs are the best action. But can we actually use empirical data to support these assumptions? Looking closely at the data, the correlation between success – measured as flattening of the curves – and the number and strictness of the NPIs is not always observable. So we may have missed something, as our model-based predictions are not supported as well as we hoped, which raises a problem: are we simply collecting the wrong data and should use something else to validate the models, or are the models insufficient to explain the data? And how do we ensure that our compassion does not interfere with our scientific objectivity?
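To make the circularity concrete, consider the following minimal Python sketch – not any model actually used in the Covid-19 debate, just a hypothetical illustration. The function `sir_epidemic` and its parameter `npi_reduction` are invented for this example; the assumed benefit of NPIs enters the model directly through that parameter, so any output "confirming" the benefit is merely the assumption handed back to us:

```python
import numpy as np

def sir_epidemic(beta, gamma=0.1, npi_reduction=0.0, days=180, i0=1e-4):
    """Minimal discrete-time SIR model (hypothetical illustration).
    The assumed NPI effect enters solely through `npi_reduction`,
    which scales down the contact rate."""
    s, i, r = 1.0 - i0, i0, 0.0
    infected = []
    for _ in range(days):
        eff_beta = beta * (1.0 - npi_reduction)  # the assumption, baked in
        new_inf = eff_beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected.append(i)
    return np.array(infected)

baseline = sir_epidemic(beta=0.3)
with_npi = sir_epidemic(beta=0.3, npi_reduction=0.5)

# The 'flattened curve' is guaranteed by construction, not discovered:
print(f"Peak infected fraction without NPIs: {baseline.max():.3f}")
print(f"Peak infected fraction with NPIs:    {with_npi.max():.3f}")
```

Using the second print-out as evidence that NPIs flatten the curve is exactly the regress described above; only a comparison against independent empirical data can break the circle.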

One way to address this issue is diversity of opinion, implemented as an orchestrated set of models: using a multitude of models instead of just one. In another comment, the idea of using exploratory analysis to support decision making under deep uncertainty is mentioned. I highly recommend having a look at Decision Making under Deep Uncertainty: From Theory to Practice (Marchau, Walker, Bloemen & Popper 2019). I am optimistic that if we are inclusive of a diversity of ideas – even if we don't like them – and allow for computational evaluation of ALL options using exploratory analysis, we may find a way to better support the community.
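As a rough sketch of what such an exploratory analysis could look like – reusing the hypothetical `sir_epidemic` function from the sketch above, with invented parameter ranges – one can sweep the deeply uncertain assumptions instead of committing to a single parameterisation, and then ask in which regions of the assumption space a recommendation remains robust:

```python
import numpy as np

rng = np.random.default_rng(42)

# Deep uncertainty: sample plausible ranges for transmissibility and for
# the true (unknown) effectiveness of NPIs, rather than fixing one value.
scenarios = [
    {"beta": rng.uniform(0.15, 0.45), "npi_reduction": rng.uniform(0.0, 0.7)}
    for _ in range(1000)
]

results = []
for sc in scenarios:
    peak_no_npi = sir_epidemic(beta=sc["beta"]).max()
    peak_npi = sir_epidemic(beta=sc["beta"],
                            npi_reduction=sc["npi_reduction"]).max()
    results.append((sc, peak_npi / peak_no_npi))

# Explore rather than predict: in how much of the assumption space do NPIs
# at least halve the peak, and where do they barely matter?
robust = sum(1 for _, ratio in results if ratio < 0.5)
print(f"NPIs halve the peak in {robust} of {len(results)} scenarios")
```

The point is not the numbers, which follow from the invented ranges, but the workflow: instead of one model defending one world view, an ensemble of assumptions is evaluated and a decision is judged by its robustness across them. Open-source tooling such as the EMA Workbench supports this kind of analysis at scale.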

References

Marchau, V. A., Walker, W. E., Bloemen, P. J., & Popper, S. W. (Eds.) (2019). Decision Making under Deep Uncertainty: From Theory to Practice. Springer. doi:10.1007/978-3-030-05252-2

Tolk, A. (2017). Bias ex silico: Observations on simulationist's regress. In ANSS '17: Proceedings of the 50th Annual Simulation Symposium, Article No. 15, pp. 1–9. Society for Computer Simulation International. https://dl.acm.org/citation.cfm?id=3106403

Shermer, M. (2017). How to Convince Someone When Facts Fail – Why worldview threats undermine evidence. Scientific American, 316(1), 69 (January 2017). doi:10.1038/scientificamerican0117-69


Tolk, A. (2020) The Danger of too much Compassion - how modellers can easily deceive themselves. Review of Artificial Societies and Social Simulation, 28th April 2020. https://rofasss.org/2020/04/28/self-deception/


© The authors under the Creative Commons’ Attribution-NoDerivs (CC BY-ND) Licence (v4.0)