1. Introduction
This Special Issue, titled “Severe Uncertainty in Science, Medicine and Technology,” aims to shed new light on the understanding of severe uncertainty and its multifaceted implications. The idea uniting the papers in this collection is that severe forms of uncertainty persist in science, medicine, and technology, however sophisticated our statistical judgments about future risks may be.
While ignorance is usually understood as a total absence of knowledge, uncertainty refers to the incompleteness of knowledge or information. In its extreme form, uncertainty is called “severe,” but it is also known as “fundamental,” “radical,” “deep,” “great,” or “genuine” uncertainty. What these notions have in common is that the uncertainties in question may be difficult to conceptualize meaningfully in probabilistic terms (Knight 1921; Ellsberg 1961; Shackle 1961; Keynes 1973; Langlois 1994; Chiffi and Pietarinen 2017; Kay and King 2020). When uncertainties are shaped mainly by normative facets, they are referred to as normative uncertainties (Taebi et al. 2020).
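To make the contrast with probabilistic risk concrete, consider a minimal sketch of Ellsberg’s (1961) famous urn: 30 balls are known to be red, while the remaining 60 are black or yellow in an unknown proportion. The code below is purely illustrative, and the payoff value and the bet are our own hypothetical choices; the point is that, under such ambiguity, no single probability (and hence no single expected value) is licensed, only bounds.

```python
# Ellsberg-style urn (Ellsberg 1961): 30 red balls, 60 balls that are
# black or yellow in an unknown proportion. Hypothetical bet: 100 if black.

def expected_payoff(n_black: int, payoff: float = 100.0) -> float:
    """Expected payoff of 'bet on black' if exactly n_black of the 60
    unlabelled balls are black (the rest being yellow)."""
    p_black = n_black / 90.0  # 90 balls in total
    return p_black * payoff

# Every composition consistent with what is actually known:
values = [expected_payoff(n) for n in range(0, 61)]
print(f"expected payoff lies in [{min(values):.1f}, {max(values):.1f}]")
# -> [0.0, 66.7]: severe (Knightian) uncertainty yields an interval,
#    not the single number that a genuine 'risk' would license.
```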
With severe uncertainty, in this Special Issue we refer to situations in which one or more of the following are unknown, unclear, or undefined:
- (i) the adequate models to describe the relations between a system’s variables;
- (ii) the probability distributions to represent uncertainty about relevant parameters and variables;
- (iii) the correct theory of rational choice and the correct theory of epistemology to handle uncertainty; and/or
- (iv) the ethical dimensions to which situations of uncertainty give rise.
Classically, those forms of uncertainty that can be probabilistically quantified, as they are in many medical and engineering fields, are labeled “risks” (Royal Society 1983). Admittedly, the distinction between risk and uncertainty is not always sharp, and the two terms are often used interchangeably by experts and laypeople alike. While probabilistic risks are fairly well investigated in theories of risk, methodological tools and strategies for identifying and dealing with severe uncertainty have received less attention. Although emerging research has contributed to reshaping the field, many scientific and technological decisions about future events are made under conditions of severe uncertainty rather than probabilistic risk. Thus far, a family of mathematical and argumentative methodologies has been proposed to provide rational (though not strictly probabilistic) approaches to decisions under fundamental uncertainty; the most relevant of these are potential surprise theory (Shackle 1961), scenario planning (van der Heijden 1996), possibility theory (Zadeh 1978), the Dempster-Shafer theory of belief functions (Shafer 1976), and hypothetical retrospection (Hansson 2007). A comprehensive introduction to the different methodologies for coping with uncertainty is provided by Hansson (2018).
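To give a flavor of how such non-probabilistic formalisms work, here is a minimal sketch of the Dempster-Shafer theory of belief functions (Shafer 1976) in Python. The outcome space and mass assignment are invented for illustration; the point is that mass can attach to sets of outcomes without being split among their members, so belief and plausibility bound, rather than fix, the weight of an event.

```python
# Minimal illustration of Dempster-Shafer belief functions (Shafer 1976).
# The outcomes and mass values below are invented for this example.
mass = {
    frozenset({"red"}): 1/3,              # evidence that settles on red
    frozenset({"black", "yellow"}): 2/3,  # evidence unresolved between the two
}

def belief(event):
    """Bel(A): total mass of focal sets wholly contained in the event A."""
    return sum(m for s, m in mass.items() if s <= event)

def plausibility(event):
    """Pl(A): total mass of focal sets compatible with (overlapping) A."""
    return sum(m for s, m in mass.items() if s & event)

for ev in [{"black"}, {"red"}, {"black", "yellow"}]:
    print(sorted(ev), round(belief(ev), 3), round(plausibility(ev), 3))
# ['black']            0.0    0.667
# ['red']              0.333  0.333
# ['black', 'yellow']  0.667  0.667
# The gap Pl - Bel measures unresolved uncertainty; for an ordinary
# (additive) probability measure it is always zero.
```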
Let us expand on several aspects of severe uncertainty by reviewing an extract from a 2002 news briefing given by then US Secretary of Defense Donald Rumsfeld in the run-up to the invasion of Iraq.
Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know. (U.S. Department of Defense 2002)
Rumsfeld’s “unknown unknowns” have a well-known antecedent in Keynes, who sharply distinguished measurable risk from genuine uncertainty:

By ‘uncertain’ knowledge I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty […]. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence […] About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. (Keynes 1973, pp. 113–114)
Related to potential surprise is the family of approaches that exploit the idea of scenario building, also known as scenario planning. Scenarios are typically not forecasts; rather, these approaches rely on imaginative narratives that indicate how various possible futures might bear on our present decisions. Once such narratives have been composed, they are used to aid conjectures about future scenarios (van der Heijden 1996; Martelli 2014). When structures do not repeat and we have little or no experience of potentially hazardous situations (Hansson 1996), narratives become an especially powerful tool for our imaginative capacities. Moreover, when probabilities are low, it becomes all the more urgent to attend to less likely possibilities with grave impacts. Scenarios can and should be up and running well before dramatic decisions need to be made; then, when the possible consequences of a decision are being weighed, the narratives that have been constructed are ready to hand.
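One deliberately schematic way to put such narratives to work is to screen candidate decisions for robustness across them. In the sketch below, the scenarios, decisions, and harm judgments are all invented placeholders with no probabilities attached; in practice the table would be filled in by the narrative work itself, not computed. Nothing here forecasts which scenario will occur: the narratives only discipline the comparison of present options.

```python
# Hypothetical scenario-screening sketch: find decisions that avoid
# unacceptable harm in every narrative considered. All entries invented.
scenarios = ["rapid adoption", "public backlash", "supply disruption"]

# harm[decision][scenario]: does the decision lead to unacceptable harm
# in that imagined future? (Judgments come from the narratives themselves.)
harm = {
    "deploy now":   {"rapid adoption": False, "public backlash": True,  "supply disruption": True},
    "pilot first":  {"rapid adoption": False, "public backlash": False, "supply disruption": False},
    "wait and see": {"rapid adoption": True,  "public backlash": False, "supply disruption": False},
}

robust = [d for d, outcome in harm.items()
          if not any(outcome[s] for s in scenarios)]
print(robust)  # -> ['pilot first']: acceptable across every narrative considered
```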
It is worth noting that plausible narrative scenarios are not unlike scientific hypotheses: even hypotheses that are, strictly speaking, false may remain fruitful in many senses of the term. Importantly, narrative scenarios are not devoid of evidential value, and this matters because the weighing of evidence becomes all the more important when probabilities are small and events rarely repeat themselves. Nor are imaginative scenarios mere fictions; quite the contrary. They can point out what does not work, which decisions are good in the sense of avoiding future harm, and which alternative actions may help lead towards the desired goals. This indicates a close connection between uncertainty and value-based decisions.
As is commonly acknowledged for decision-making under risk (Hempel 1965), decision-making under severe uncertainty is a value-laden activity. Here, the non-epistemic facets of values (ethical, social, economic, political) especially come to the fore. Recently, philosophers have begun to provide rational analyses of forms of uncertainty in which normativity is a key component, termed “moral uncertainty” or “normative uncertainty” (Lockhart 2000; Sepielli 2013, 2014; Bykvist 2017; MacAskill et al. 2020). The latter is a much broader concept than the former: moral uncertainty is “uncertainty about what we all-things-considered morally ought to do” (MacAskill et al. 2020, p. 2), while “normative uncertainty also applies to uncertainty about which theory of rational choice is correct and uncertainty about which theory of epistemology is correct” (MacAskill et al. 2020, pp. 2–3). Hence, normative uncertainty concerns conditions of inexactness, unpredictability, and ignorance with respect not only to the state of factual knowledge but also to the normative dimensions of such conditions (Taebi et al. 2020). The concept of normative uncertainty thus brings value-based considerations into those aspects of decision-making related to epistemology, ethics, law, and planning.
Several argumentative strategies have been developed to cope with ethical judgments under severe uncertainty, such as hypothetical retrospection (Hansson 2007). In hypothetical retrospection, the ethical value of a decision is evaluated on the assumption that some possible branch of future development has materialized: the evaluation is made from the imagined future point of retrospection, but on the basis of the values and information available when the original action took place. The decision rule then requires choosing an alternative that emerges as morally acceptable from all hypothetical retrospections in this branching temporal structure. At any rate, moral argumentation under severe uncertainty remains an emerging but extremely interesting field, with room for further development.
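The branching structure of this decision rule lends itself to a compact sketch. In the hypothetical Python fragment below, the branches, options, and stub verdicts are our own placeholders, and the function acceptable merely stands in for the substantive moral evaluation, which is made from an imagined future branch but anchored to decision-time values and information.

```python
# Sketch of the hypothetical-retrospection decision rule (after Hansson 2007).
branches = ["branch A", "branch B", "branch C"]  # possible future developments
options = ["option 1", "option 2"]               # alternatives open to the agent

def acceptable(option: str, branch: str) -> bool:
    """Would choosing `option` remain morally defensible if `branch` were
    the way things turned out? Stub verdicts only; a real application
    supplies the ethical analysis here."""
    return not (option == "option 1" and branch == "branch C")

# Decision rule: keep only alternatives that pass *all* retrospections.
admissible = [o for o in options
              if all(acceptable(o, b) for b in branches)]
print(admissible)  # -> ['option 2']
```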
Severe uncertainty affects science at many levels. Rescher (1999), for instance, emphasized many aspects of the uncertainty of future science. Answers to open questions are unpredictable and often take us by surprise; indeed, the very questions that future scientists will ask are unpredictable in the first place. In such contexts, probabilistic risk assessment methods may lose their applicability. For instance, in the evaluation of trends, the impact of unexpected events is often left out of account (Martelli 2014). The significance of trends for theories of scientific change cannot be ruled out, since the future is shaped by them, even if indirectly. Yet scientific progress is much more closely tied to uncertainty (unknown unknowns) than to probabilistic risk (known unknowns).
In the medical context, severe uncertainty has many clinical aspects. Prognostic judgments, for instance, face severe uncertainty, since probability measures often cannot be meaningfully assigned to specific future events (Chiffi and Zanotti 2017; Chiffi 2021): relevant prognostic information may be unavailable when a prognosis is formulated, and some future scenarios in a patient’s disease course may have been previously inconceivable from a cognitive and methodological perspective. Despite this, prognostics remains classically based on the concept of probabilistic risk, without recognition that it is frequently concerned with events exhibiting severe uncertainty. More generally, clinical uncertainty (also called “clinical equipoise”) has been invoked in research ethics to justify new trials (Fried 1974), and the structure of the future plays a key role in the assessment of a trial’s findings (Djulbegovic 2007).
Since severe uncertainty is difficult to handle with exact methods, it is often simply ignored (Alles 2009), and this is particularly true in technology development. This bias is common in contemporary science and society, where only a few disruptive and hazardous technological situations can be identified and fully evaluated. Nevertheless, ignoring the possible existence of unknown unknowns can be, at best, irresponsible and, at worst, existentially disastrous. Many technological disasters result from scenarios that were extremely unlikely but carried extremely severe consequences, scenarios that had been ruled out in the design of the technology. Designing technologies that are sensitive to severe forms of uncertainty, that is, technologies embodying the values of safety and responsibility linked to such uncertainty, remains particularly challenging.
A well-known method of dealing with technological uncertainty is the precautionary principle; however, it does not seem to be the most suitable methodological tool for many contexts of technological innovation, because it is not sensitive to the different types of uncertainty and hazard associated with such innovations. As Sandin (1999) notes, in every formulation of the precautionary principle the uncertainty in question is scientific uncertainty; in complex contexts, however, uncertainty is often trans-scientific, meaning that science cannot always answer the questions at stake. To manage severe uncertainty, it has been proposed to take into account the asymmetry of uncertainty among the parties involved in the decision-making, as well as spatial and temporal limitations and interference with complex systems in balance (Hansson 1996). Much remains to be done, however, to address severe uncertainty and disruptive innovation in science, medicine, and technology, along with their epistemological, ethical, and socio-political implications. We hope that the present Special Issue provides a small step towards a better interdisciplinary understanding of severe forms of uncertainty.
2. The Content of the Special Issue
The contributions in this Special Issue of Perspectives on Science investigate a range of topics that pertain to the unfolding of severe uncertainty in science, medicine, and technology. We provide an overview of the papers contained in this issue and the contribution each makes to the philosophy of severe uncertainty.
Sven Ove Hansson investigates whether and how uncertainty can be either quantified or formalized. Going beyond some of the traditional definitions of uncertainty, he offers a comprehensive taxonomy that identifies eight major types of uncertainty. Many of these types are amenable to both quantification and non-quantitative formalization; however, there seems to be no one-size-fits-all model that captures all of their properties. There is thus ample room for further research on the formal modeling of uncertainty.
That uncertainty is not a homogeneous concept or problem is also the entry point of Stefano Moroni and Daniele Chiffi’s paper on decision-making under uncertainty. They deal with a specific form of decision-making, namely public decision-making (in contrast with everyday, private decision-making), in the context of urban planning, where special issues arise particularly with regard to the adoption of new technologies. In this context, the authors suggest that uncertainty can be reduced by adopting rules that are simple and stable, because, on their account, rules have not only the social function of regulating public–private relations but also epistemic value.
The uncertainty surrounding the adoption of new technologies in society is also the topic of Philip J. Nickel, Olya Kudina, and Ibo van de Poel’s work. In particular, they focus on a specific form of uncertainty, moral uncertainty, as an analytical tool to explain the impact of new technologies on our moral values. Standard theories of “technomoral change” seem unable to fully grasp the “disruptive” character of technological innovation with respect to individual and social moral norms. The authors argue that this explanatory gap can begin to be bridged by attending to the epistemic and deliberative dimensions of technomoral change, thereby developing a more complete, morally informed account of “disruption.”
The moral dimension of severe uncertainty is also at the heart of Viola Schiaffonati’s paper. She focuses on the topical case of autonomous robotics, in which severe uncertainty significantly hampers the prediction of robots’ behavior in complex environments. Addressing this issue requires both an epistemological and an ethical framework that allow us to run better exploratory experiments. Traditional accounts of experimentation in the natural sciences should be reconsidered in the case of exploratory experiments with new technologies, such as autonomous and intelligent systems.
The idea that uncertainty has an important ethical dimension is also shared by Malvina Ongaro and Mattia Andreoletti. In their paper, they highlight the importance of analyzing non-empirical uncertainty in establishing the boundaries of scientific advising in policy decision-making. Recognizing the normative character of severe uncertainty is useful for questioning the idea that evidence-based advice in policy decision-making can be truly impartial with regard to the policies it recommends.
Finally, in their paper, Giovanni Valente and Massimo Tavoni offer a comprehensive review of the role of uncertainty in climate change modeling. Focusing on “Integrated Assessment Models,” they argue that computational models should consider the severe uncertainty that characterizes technological and human systems.
Author notes
Mattia Andreoletti’s research is supported by the PRIN project “From Models to Decisions” (Bando 2017 Prot. 201743F9YE). Daniele Chiffi’s research is supported by the “Fragilità Territoriali” project of the Department of Architecture and Urban Studies of Politecnico di Milano as part of the “Departments of Excellence 2018–2022” MIUR program. Behnam Taebi’s work for this article is part of the research program Ethics of Socially Disruptive Technologies, which is funded through the Gravitation program of the Dutch Ministry of Education, Culture, and Science and the Netherlands Organization for Scientific Research (NWO grant number 024.004.031).