Abstract

I examine how Heather Douglas’ account of values in science applies to the assessment of actual cases of scientific practice. I focus on the case of applied toxicologists’ acceptance of molecular evidence-gathering methods and evidential sources. I demonstrate that a set of social and institutional processes plays a philosophically significant role in changing toxicologists’ inductive risk judgments about different kinds of evidence. I suggest that Douglas’ inductive risk framework can be integrated with a suitable account of evidence, such as Helen Longino’s contextual empiricism, to address the role of social context in cases like the one examined here. I introduce such an integrated account and show how Longino’s contextual empiricism and the inductive risk framework fruitfully complement each other in analyzing the novel aspects of the toxicology case.

1. Introduction

Contemporary philosophers of science are increasingly interested in evidence-based policy assessment and the role of value considerations in scientific analysis. This has revitalized some of the traditional philosophical debates about the proper role of values in science, the norms of scientific reasoning, and the cogency of the traditional “value-free ideal” (Rudner 1953; Jeffrey 1956; Longino 1990; Douglas 2009; Elliott 2011; Betz 2013; John 2015; de Melo-Martín and Intemann 2016; Hicks 2018). Heather Douglas contributes to this debate by offering a normative account that distinguishes legitimate from illegitimate value-permeations in scientific reasoning (Douglas 2000, 2009, 2015).

Douglas contends that scientific reasoning can legitimately be value-laden while its objectivity stays intact. This is possible, Douglas argues, if the role of non-epistemic values is constrained to the assessment of evidential sufficiency in the face of uncertainty. More specifically, non-epistemic values may legitimately inform the evaluation of not-fully-confirmed hypotheses by providing researchers with information about the pertinent risks of being wrong in one’s inferences when the available evidence is inconclusive (i.e., inductive risks). Douglas argues that allowing non-epistemic values into science in such an “indirect” manner does not violate the objectivity of science, at least in a significant sense of objectivity that appeals to the virtue of assessing evidence in a detached way (2004). Douglas further suggests that this understanding helps us assess when scientific judgments become dogmatic or irrationally politicized, and therefore untrustworthy (2009, pp. 112–14). The most prominent aspiration of Douglas’ undertaking is to replace the value-free ideal, which strictly prohibits value considerations in evidential reasoning during the so-called core (internal) stages of scientific justification.

Douglas’ project has been found promising by philosophers of science who are interested in the role of values in science (see, for instance, Kitcher 2011; Elliott and Richards 2017a for prominent reviews and reflections). Many commentators involved in science-based policy discussions accept that scientific inputs should be sensitive to relevant moral and political considerations without losing their rationality, objectivity, and trustworthiness in some meaningful sense. I, too, am sympathetic to this pragmatic aim. I am therefore interested in demonstrating in detail how Douglas’ account of rational scientific reasoning in the face of ethical and pragmatic considerations and factual uncertainty is, and can be shown to be, consistent with the real-world contexts of evidential decision-making that scientists typically face.

In this article, I examine how Heather Douglas’ inductive risk framework applies to a specific and carefully selected case of scientific practice. I analyze toxicologists’ evidential judgments during a societally important historical episode of toxicological practice, the so-called molecularization of policy-relevant toxicology. Policy-relevant toxicology, often aptly referred to as “regulatory toxicology,” is practiced for the purposes of regulating the industrial use of toxic substances. In the US, the major research institutions in this area are the National Institute of Environmental Health Sciences (NIEHS), which is also home to the National Toxicology Program (NTP), the Environmental Protection Agency (EPA), and the National Center for Toxicological Research (NCTR) based in the Food and Drug Administration (FDA). These institutions are tasked with protecting public and environmental health by supplying scientific risk assessments about the potential adverse effects of the use of various toxic substances.

I will focus on regulatory toxicologists’ judgments about the acceptability of molecular evidential sources and evidence-gathering methods for their purposes. In analyzing this context, I draw on the evidence documented in the relevant historical studies of toxicology (such as Sara Shostak 2005, 2013; Frickel 2004) to offer a descriptive account of how regulatory toxicologists actually make and update methodological decisions about evidence. Based on my analysis, I demonstrate that toxicologists evaluate different kinds of evidence by assessing their epistemic advantages (such as the greater accuracy and precision that different kinds of evidence afford) and by considering relevant non-epistemic consequences of their inductively risky methodological decisions (such as the convenience of different kinds of evidence for regulatory use, and for the broader purpose of protecting public and environmental health). I highlight some philosophically interesting aspects of the case, such as the initial rigidity of toxicologists’ inductive risk judgments and the role of social and institutional processes in the formation of their evidential decisions, which Douglas’ account, and the broader inductive risk framework, is not designed to address. I argue that Douglas’ account of scientific reasoning can be integrated with a suitable conception of evidence, such as Helen Longino’s contextual empiricist one, in order to account for the highlighted aspects of the case. I then introduce such an integrated account and demonstrate how Longino’s contextual empiricism and Douglas’ inductive risk framework fruitfully complement each other in describing the case.

The article proceeds as follows. In section 2, I present Douglas’ theory of scientific reasoning and show how it can be used to represent and assess actual cases of scientific judgment formation in the face of value considerations and uncertainty about facts. In section 3, I examine how the inductive risk framework can be applied to understand toxicologists’ acceptance of molecular methods and evidential sources over conventional ones for the purposes of regulatory toxicology, and I highlight the exceptional and theoretically novel aspects of the toxicology case that the inductive risk framework is not designed to address. In section 4, I show how Douglas’ inductive risk framework works in harmony with Longino’s contextual empiricism to address the highlighted problems, such as the initial rigidity of toxicologists’ inductive risk judgments and the role of social context in mediating scientists’ evidential judgments. In section 5, I conclude with a brief summary of the paper.

2. Heather Douglas’ Account as a Norm of Scientific Judgment in the Face of Inductive Risks

The so-called “value-free ideal” suggests that scientists’ assessment of evidence should be free from non-epistemic values. As a norm of scientific reasoning, this suggestion prohibits appealing to contextual pragmatic or ethical considerations in making evidence-based judgments. The value-free ideal has been widely debated in the modern philosophy of science (Rudner 1953; Jeffrey 1956; Kincaid, Dupré, and Wylie 2009; Douglas 2009; Kitcher 2011; Betz 2013; de Melo-Martín and Intemann 2016; Bright 2018). Heather Douglas has contributed to the debate by re-examining the role of inductive risk in scientific reasoning in her Science, Policy, and the Value-Free Ideal (Douglas 2009). Douglas proposes an alternative norm for scientific reasoning that preserves the objectivity and rationality of science in a meaningful sense. In this section, I present a summary of Douglas’ account and motivate the project and the case I am going to analyze.

2.1. Distinguishing Roles of Values to Conceptualize an Alternative to the Value-Free Ideal

Heather Douglas observes that scientists’ judgments are often used to inform policy decisions, and that scientists may thereby contribute to morally and politically significant outcomes in the world. In such cases, Douglas argues, scientists cannot and should not resist appealing to non-epistemic considerations in their decision-making. She argues that in many methodological decisions scientists inescapably need to choose thresholds of evidential sufficiency, asking “how much evidence is sufficient for me to accept or reject a hypothesis?”, because scientific inference and judgment often involve ineliminable uncertainties about what is actually true or right (Douglas 2000, p. 559). Accordingly, Douglas argues that it is rational and morally desirable for scientists to consider the non-epistemic consequences of their decisions in contexts where they accept a hypothesis, make methodological choices, or provide support for a course of policy action. She concludes that value-free scientific reasoning is therefore “flawed” and “incomplete” as a normative and descriptive account of policy-relevant sciences, because it precludes scientists from weighing the moral and pragmatic consequences of their judgments when deciding questions of evidential sufficiency in the face of factual uncertainty (2009).

Douglas proposes an alternative norm for scientific reasoning, which conceives of non-epistemic considerations as information used for determining the seriousness of making inductive errors in evidence-based reasoning:

The scientist will need to consider both the quantity of evidence or degree of confirmation to estimate the magnitude of inductive risk and the valuation of the consequences that would result from error to estimate the seriousness or desirability of the consequences. The weighing of these consequences, in combination with the perceived magnitude of the inductive risk (i.e., how likely one is to be wrong), determines which choice is more acceptable. Where inductive risks are involved and non-epistemic consequences follow from error, non-epistemic values are essential for deciding which inductive risks we should accept, or which choice we should make. (2000, p. 565)

This suggests that non-epistemic values help researchers determine how much evidence is sufficient to accept or reject a hypothesis or to make a potentially harmful methodological decision. Douglas contrasts this restricted, “indirect” role of values in scientific reasoning with what she calls the “direct” role of values in scientific reasoning. When the role of values is direct, their use goes beyond this supplementary function (that is, helping assess the sufficiency of evidence) and instead trumps evidential considerations or replaces them. Only the indirect role of non-epistemic values is permissible in scientific reasoning, she claims, as this indirect role does not harm scientific objectivity in an important and meaningful sense of the term.

To support this argument, Douglas compares cases of indirect permeation of values with cases of direct permeation of values during the internal stages of the scientific method, which consist of the methodological steps that concern scientific justification, such as the selection and interpretation of data, modeling, and the confirmation of hypotheses. The latter cases involve “wishful thinking,” “cherry-picking evidence from a wide variety of evidential sources,” and “constructing a methodology that will give results serving [one’s] own liking” (2009, pp. 150–52). These kinds of epistemic practices would harm scientific objectivity and invite dogmatism, corruption, or radical politicization of science. In contrast, so Douglas argues, the indirect permeation of values in scientific reasoning is different from these cases and does not lead to an erosion of scientific objectivity. She emphasizes that scientists’ consideration of non-epistemic values in the indirect manner preserves detachment and non-dogmatism, which she takes to be among the essential aspects and core virtues of scientific reasoning (2009, pp. 112–14). Her proposed alternative to the value-free ideal preserves these core virtues (Douglas 2004).

Douglas’ argument from inductive risk is therefore a promising basis for conceptualizing a new normative benchmark for scientific reasoning, although there are important skeptical responses: Betz (2013) questions whether the value-free ideal should be abandoned at all, and de Melo-Martín and Intemann (2016) question whether Douglas’ account really rejects the value-free ideal. Douglas’ account is indeed interpreted in different ways: as part of an alternative to the traditional value-free ideal, as a tool for policing scientists’ reasoning, and as a benchmark for distinguishing permissible value-ladenness in science from corrupt or politicized uses of values in science (see Elliott 2011 for an extensive critical review of these distinct pursuits of Douglas’ account). At the same time, various prominent commentators have discussed the scope of inductive risk arguments such as Douglas’, debated to what extent the inductive risk framework can serve as a full-fledged account of values in science, analyzed how Douglas’ account and the broader inductive risk approach relate to other established descriptive and normative arguments about the role and nature of values in science, and demonstrated how the inductive risk framework applies to different cases of scientific practice (Steel and Whyte 2012; Brown 2013; Hicks 2014; Biddle 2016; Biddle and Kukla 2017; de Melo-Martín and Intemann 2016; Elliott and Richards 2017b; Hicks 2018).

In this vein, a growing body of philosophical literature contributes to our understanding of the proper place of values in science by applying the inductive risk framework to the assessment of actual contexts of scientific decision-making. Here, I aim to contribute to these debates by focusing on an episode of scientific practice that instantiates some aspects of scientific reasoning that are less commonly examined from the perspective of the inductive risk framework. I will analyze the context in which scientists choose between different kinds of evidential sources and evidence-gathering methods and make judgments about the (inductive) risks pertaining to their decisions. I will focus on the case of regulatory toxicologists’ gradual adoption of molecular tools. Specifically, I will examine precisely how the inductive risk framework applies to this case, and how Douglas’ framework can accommodate some interesting aspects of toxicologists’ reasoning, such as the graduality of the changes in their evidential judgments and the role of contextual social and institutional processes in the formation of their methodological decisions. Based on my analysis, I will argue that these aspects of the case can be described by Douglas’ inductive risk framework when it is integrated with suitable neighboring philosophical approaches, specifically Helen Longino’s contextual empiricism. I aim to demonstrate that these two accounts complement each other in fruitful ways. Let us now turn to the case and discuss how Douglas’ inductive risk framework applies to it.

3. Analyzing the Case of Molecularization from the Perspective of Douglas’ Inductive Risk Framework

Applied toxicologists aim to produce scientific assessments concerning the toxicity of chemical substances to inform regulatory decisions taken by public health institutions such as the National Institute of Environmental Health Sciences (NIEHS), the Environmental Protection Agency (EPA), and the Food and Drug Administration (FDA) in the U.S. context. Because applied toxicology is a “regulatory science” (as initially termed by Jasanoff 1990), toxicological practice constitutes an ideal case study for Douglas (2000, 2009) and for any philosophical discussion of the issues arising in the context of policy-relevant sciences and evidence-based policies. In this section, I will investigate the extent to which Douglas’ account successfully evaluates various types of decisions that toxicologists make. Specifically, I will focus on the acceptance of molecular evidential sources for conducting policy-relevant toxicological research—the so-called molecularization of regulatory toxicology.

3.1. Douglas’ Norm applied to Toxicologists’ Judgments

The practice of regulatory toxicologists is a very suitable case for illuminating Heather Douglas’ account of values, as it is a rigorous scientific practice that is not autonomous from society or policy-related processes. In her reflections on regulatory toxicology, Douglas debunks the division of labor that has been widely assumed to hold between “toxicologists as value-free risk assessors” and “regulators as risk managers” (2009, p. 140). Douglas claims that the depiction of toxicologists as value-free risk assessors, who operate in isolation from society and who serve the decision-making processes of value-laden policymakers, is descriptively incomplete. She also argues that trying to achieve such an ideal in practice is normatively undesirable because of the inductive risks involved in toxicological risk assessment. Instead, Douglas proposes that toxicologists should consciously and transparently take responsibility for triggering environmental risk management, adjusting their thresholds of evidential sufficiency in response to the potential societal consequences of their decisions. In line with the account illustrated in Section 2, Douglas considers this (“indirect”) role of values in toxicology-based judgment formation legitimate.

Douglas applies her account to analyze various critical methodological stages of toxicological science (2000):

  • Making a judgment about the severity of the changes observed in exposed tissues (2000, sec. 4);

  • Choosing an adequate data-analysis method among alternative dose-response curves to fit the available data (2000, sec. 5);

  • Interpreting the results in a particular way (e.g., stating whether a study allows scientists to conclude that the substance under consideration is toxic irrespective of its dosage, or that the toxicity of the substance is contingent on the dosage) (2000, sec. 6).

How does Douglas’ framework represent the structure of scientific reasoning in these contexts? Take the context in which toxicologists interpret the results. Imagine, for instance, two representative toxicologists (say “T1” and “T2”) who take “environmental health” into account in their reasoning when they assess the hazardousness of chemical substances. T1 reports that “the substance is highly toxic” even though she does not unambiguously detect any significant malignant changes in the exposed tissues observed. T1 nevertheless confirms the toxicity of the substance because T1 refrains from making judgments that might lead to undesirable environmental health outcomes. T2 also reports that “the substance is highly toxic.” But, unlike T1, T2 detects some malignancies in the exposed tissues she observes. Even though the available evidence does not fully confirm the hypothesis that “the substance is toxic,” T2 lowers the threshold of evidential sufficiency by considering the undesirability of the environmental risks involved and accepts the hypothesis. It is clear that the non-epistemic value (the protection of environmental health) trumps the evidential considerations in T1’s judgment. For T2, however, the concern for environmental health plays only a supplementary role in the reasoning. If we follow Douglas’ account, it is then straightforward to deem T1 biased, dogmatic, and unjustifiably subjective. Similarly, it is straightforward to judge that T2 preserves “objectivity,” since some metaphorical distance from the evidence is kept in the assessment while non-epistemic considerations are allowed to influence the judgment.

The suitability of Douglas’ account for evaluating toxicologists’ judgments in cases like the above bears little controversy. There are material objects, such as slides of exposed animal tissues, that stand in an evidential relationship with toxicologists’ hypotheses and, more generally, their judgments. These pieces of evidence confirm toxicologists’ judgments, such as those concerning the toxicity of a substance, to different degrees. Inductive risks are therefore present, and Douglas’ norms can readily be applied to describe and assess the permissibility of the ways in which non-epistemic values permeate scientists’ judgments in these cases.

The case of molecularization concerns a different methodological context from the one described above. Unlike the sufficiency of evidence for confirmation, the acceptance of molecular data and methods is a methodological issue about the relevance of evidence to the aims of regulatory toxicology. This context (i.e., the choice of evidence-types and how scientists make inductive risk judgments about them) is less commonly examined in the literature on inductive risk (see Biddle 2016; de Melo-Martín and Intemann 2016; Biddle and Kukla 2017; Hicks 2018 for a discussion of the conceptual and theoretical issues regarding how the inductive risk framework applies to this context).

In the following, I will illustrate that Douglas’ inductive risk framework properly applies to toxicologists’ judgments about kinds of evidence, too. My aim is not only to demonstrate how Douglas’ account applies to this context but also to highlight some important aspects of the case that the inductive risk framework is not designed or intended to address.

Before I focus on this context in the next subsection, a disclaimer is in order. In my analysis of the case of molecularization, I will specifically refer to some science studies scholars’ detailed examination of toxicologists’ judgments concerning molecular evidence (such as Frickel 2004 and Shostak 2013) with the aim of providing a comprehensive description of the case. Some readers might anticipate that I will pit the science studies scholars’ arguments about toxicology against Douglas’ arguments. However, that is not my aim. I will not refer to any philosophical claims made by these scholars. Rather, I will rely solely on these authors’ characterization of the historical facts regarding the case. I will thereafter defend the philosophical significance of the historical details of the case for the inductive risk framework.

I will now start with a brief explanation of the so-called “molecularization of regulatory toxicology.” Then, I will offer a description of how regulatory toxicologists choose between molecular evidential sources and conventional pathological ones and examine how the inductive risk framework captures their reasoning. I will then discuss why it is a good idea to integrate the inductive risk framework with a suitable neighboring approach to analyze the specifics of the case.

3.2. Choosing between the Molecular and Conventional Methods in Regulatory Toxicology

Classical policy-relevant toxicology used to be predominantly based on the pathological examination of exposed animals in laboratory conditions. In this method, laboratory observations of exposed tissues, which are then statistically analyzed by toxicologists, are used for making predictions about the hazardousness of toxic substances under changing dosages based on different dose-response models.1 This primarily pathology-based, observational method was referred to as the gold standard for performing policy-relevant toxicological analysis until the 2000s; see, for instance, the National Toxicology Program’s seminal report (2004), which introduces a vision for change.

The received methodology of policy-relevant toxicological science has since changed dramatically toward the adoption of molecular methods, especially in the United States. Contemporary toxicological science in the US, including toxicology practiced for regulatory purposes, has been systematically moving toward becoming a genomic and molecular science (a prominent consensus report published by the National Academy of Sciences [2017] reviews the developments of the last two decades). Though some controversy remains and the widespread application of new methods in the regulatory realm is still a project in progress, the relevance and usefulness of molecular methods for informing environmental health policies are widely accepted by the major institutions of regulatory toxicology in the US. The research outputs of the NTP at NIEHS and the NCTR at FDA, and the activities of the Tox21 collaboration, attest to this change in major institutions’ methodological judgments about molecular methods (see EPA 2019 for a review).2 Regulatory toxicologists nowadays increasingly rely on data gathered through advanced microarray technologies and advanced quantitative tools such as high-throughput screening, which afford comprehensive and ever more precise measurement of the simultaneous effects of multiple toxic chemicals across genetically different populations and individuals.

Nevertheless, the shift towards the molecular methods in applied toxicology, which is labeled “the molecularization of toxicology” by science studies scholars (Shostak 2005, 2013), has been highly controversial among toxicologists and relevant stakeholders such as regulators, industrial organizations, and environmental advocacy groups. As we will see, toxicologists’ reasoning about the adoption of molecular methods was influenced by epistemic considerations such as the high precision of the molecular evidential sources as well as non-epistemic ones such as how regulatory toxicologists should protect public health and reduce ecological and environmental health risks.

Broadly construed, the controversy regarding the choice of the molecular over the pathological method revolves around the following. Many applied toxicologists and relevant stakeholders such as research-based environmentalist advocacy groups initially proposed that molecular evidence-gathering methods and evidential sources were ill-suited for the pragmatic and regulatory roles of toxicology. Among other things, these roles include informing legislators about how the industrial production of chemical substances should be regulated and performing evaluations of ecological and health risks associated with various toxicants so as to conserve environmental health. The traditional pathological evidence-gathering methods and evidential sources were deemed entirely sufficient for serving these purposes, as they were conducive to assessing hazards of one chemical substance in isolation from other causally relevant chemical and genetic factors. Furthermore, to venture into comprehensive analyses of the complex molecular and genetic mechanisms of disease causation was thought to be impractical and inefficient for regulatory toxicology, however interesting it might be for purely scientific purposes. Some toxicologists working at prominent regulatory institutions in the US (e.g., NTP-NIEHS, NCTR-FDA, and EPA) or researchers from environmental advocacy and justice groups (e.g., West Harlem Environmental Action, WEACT) thought that the epistemic benefits of molecular methods, such as greater precision and explanatory power, could only be achieved at the expense of making environmental regulation more complicated and less effective (Shostak 2013, pp. 64–70). Hence, so their view went, these methods would potentially serve the interests of profit-seeking industries, which aim to minimize the financial costs of toxicological regulation. 
Here are some excerpts from the interviews that a prominent science studies scholar, Sara Shostak, conducted with applied toxicologists working at NTP, which exemplify these kinds of worries about the increasing use of molecular methods in regulatory toxicology:

[at NTP] we need some people with practicality. We need some people with skills in toxicology … empirical descriptive toxicology. [If] you find out something causes cancer, then let somebody else mess around with the mechanism…. I don’t want to know how it does it … I want to know, “Is this safe?” (Shostak 2013, p. 64).

My interest is in what can we change to make people healthier? We can change exposures…You can’t change your gene pool (2013, p. 65).

Any new technology, it’s always a good delaying tactic for environmental health risk assessments (2013, p. 66).

While the molecular kinds of evidence are conducive to more accurate and precise toxicological risk assessment, toxicologists based in prominent regulatory institutions initially did not accept the relevance of molecular kinds of evidence for their inquiry. Their reasoning was that molecular evidence-gathering methods complicated toxicological risk assessment because they yield more detailed results, whose significance is more difficult to translate into judgments that can conveniently be used for regulatory purposes. Working with molecular evidence-gathering methods thereby slows down the regulatory process and renders it more ambiguous, harming toxicologists’ mission of protecting environmental health.

It seems that these toxicologists took the relevant inductive risks into account when making their negative judgments about the acceptability of molecular kinds of evidence. Given the risk of making a wrong methodological decision relative to the pragmatic purposes of regulatory toxicology, some toxicologists, such as the ones quoted above, believed that they should keep their standards of evidential sufficiency low (i.e., the level of evidential precision and certainty they deem sufficient for the regulatory purposes of toxicology). This belief was shaped by the environmental health risks associated with the purported impracticality of molecular methods for serving policy-relevant toxicological assessment.

While we can explain toxicologists’ reasoning in the language of Douglas’ inductive risk framework, the content of toxicologists’ inductive risk judgment in this instance is also puzzling from the perspective of Douglas’ account. Note that the toxicologists cited above who were skeptical of molecularization did not argue that the adoption of molecular kinds of evidence would worsen the precision, accuracy, and overall quality of the evidence used for conducting toxicological risk assessment. On the contrary, it seems that the toxicologists acknowledged that the molecular kinds of evidence would enable them to make more precise and accurate assessments than they could with the classical kinds of evidence (Shostak 2013, pp. 48–64). They nevertheless resisted adopting molecular kinds of evidence by considering the magnitude of inductive risks regarding the protection of public health and the ease of regulation. This kind of reasoning is not fully in line with Douglas’ account, because more precise and accurate evidence is supposed to decrease inductive risks, according to her theory. In other words, when the available evidence is more certain, the chance of making wrong decisions becomes lower; hence the magnitude of inductive risks is supposed to be lower, too (2009, p. 96). Yet, puzzlingly, in this case, if the toxicologists’ assessment were right, then additional or better evidence would lead to more uncertainty, not less.

The same issue has recently been highlighted by Hicks through a case study of the practice of policy-relevant molecular toxicology (2018). Hicks also observes that additional evidence, of the kind obtained using molecular models in studies of endocrine disruptors, does not decrease uncertainty. They note that “uncertainty here is not the result of limited evidence but of more accurate or precise evidence” (2018, p. 170). Furthermore, they label this as “an important counterexample or exception” to Douglas’ norm (2018, p. 170). For now, I flag this aspect of the case as an important issue to be addressed. I will respond to this problem later, in Section 4.

An equally interesting—in fact the main—aspect of the case of molecularization is that the prominent communities of toxicologists who were initially skeptical about the molecular approaches to toxicological risk assessment later updated their judgments and gradually accepted the use of molecular evidential sources and evidence-gathering methods (Shostak 2013). Insofar as this change in toxicologists’ judgment is described in the language of the inductive risk framework, it must be the case that the content of toxicologists’ inductive risk judgments changed. The same toxicologists must have concluded that the use of molecular evidential sources did not increase inductive risks, contrary to what they had initially thought.

I will now describe how this change in toxicologists’ judgment happened, drawing on the relevant historical studies. Thereafter, I will discuss why and how the historical details of the ‘molecularization’ can suitably be addressed by an analysis of the case from the perspective of the inductive risk framework.

3.3. The Transition to Molecular Regulatory Toxicology: The Role of Social and Institutional Processes

Let’s review the relevant historical facts about how major communities of toxicologists in the U.S. gradually adopted molecular approaches to toxicological risk assessment, and then ask whether and how we should fit the details of the case into Douglas’ inductive risk framework.

Prominent historical studies of contemporary toxicology (such as Frickel 2004; Shostak 2005, 2013) describe the process of molecularization as one mediated by a set of sociological and institutional processes. According to these studies, a significant macro-sociological process that contributed to the molecularization of regulatory toxicology was the decreasing financial and institutional support for regulatory toxicology. Although forgoing molecular methods would not have practically prevented toxicologists from performing effective policy-relevant analyses, regulatory toxicologists increasingly faced criticism for not using what were considered cutting-edge scientific methods (Shostak 2013, chap. 3). In particular, these criticisms were raised by commercial scientists working in profit-seeking industries (2013, p. 6). In addition, the popularity of genomic studies among the general public had increased, and neighboring disciplines had already adopted molecular methods. These macro-scale developments led regulatory toxicologists to worry about losing the financial and social support they received from governmental and non-governmental organizations. In her 2013 book, Sara Shostak extensively documents how leading communities of applied toxicology in the US interpreted these macro-sociological changes as threats to the scientific authority of toxicology for regulatory purposes. Shostak explains that such financial and institutional considerations accelerated the toxicologists’ gradual adoption of molecular methods.

Shostak also describes how the major communities of regulatory toxicologists initiated a set of strategic micro-institutional processes in response to these macro challenges to their inquiry. Those developments, too, contributed to the gradual adoption of the molecular evidence-gathering methods over pathological ones in regulatory toxicology. Here are some examples of these institutional processes toxicologists deliberately engaged in:

  1. Initiating new institutional ties with research centers formerly thought to be only remotely related to applied-toxicological practice (such as the National Human Genome Research Institute, NHGRI) (Shostak 2013, pp. 141–43).

  2. Forming new research centers that aim to raise funds for promoting policy-relevant molecular research in toxicology, including providing infrastructure for training new toxicologists who are experts in both classical toxicology and molecular methods (pp. 143–50).

  3. Defining a research agenda for exploring applications of gene expression profiling specific to regulatory toxicology, including encouraging the use of adequate tools (such as ToXChips) for translating the new type of data into the old forms of regulatory procedures (p. 147).

  4. Establishing research initiatives such as the Toxicogenomics Research Consortium to develop new methodological procedures and standards relevant to the use of new molecular risk assessment models, and to advance tools that allow the transparent and standardized use of new kinds of evidential sources (pp. 156–57).

  5. Organizing workshops such as consensus-building forums that aim to convince the relevant stakeholders (such as governmental agencies, funding agencies, private sector participants, environmental justice groups, and practicing scientists), explain the need for molecular methods in regulatory toxicological practice (as in the large-scale information forum of the Committee on Emerging Issues and Data on Environmental Contaminants), and shape users’ and producers’ expectations of toxicological practice (pp. 160–66).

Now, these are important descriptive details regarding how major communities of regulatory toxicologists gradually changed their judgments in favor of the molecularization of regulatory toxicology. I want to highlight two aspects of the case based on this description. First, according to the historical details of the case, the updating of toxicologists’ evidential judgments was mediated by a variety of social and institutional processes working in the background, such as disciplinary politics, economic and cultural pressures, and toxicologists’ own attempts to build a new consensus through academic and technical innovations. Second, and by implication of the first, the updating of toxicologists’ evidential judgments was gradual, following the social and institutional processes I have briefly described.

My contention is that these observations about the historical details of the case are philosophically significant, and that an analysis of the case from the perspective of the inductive risk framework would be conceptually stronger if it properly addressed and accounted for them. More generally, I suggest, these aspects of the case are important for our understanding of the nature of toxicologists’ inductive risk judgments. In the following, I will first explain why this is so and then offer a promising way in which Douglas’ inductive risk framework can address these details.

3.4. The Need to Account for the Role of Social and Institutional Processes in the Formation of Toxicologists’ Inductive Risk Judgments

As we have seen, toxicologists consider the pertinent inductive risks in judging the acceptability of molecular evidential sources in regulatory toxicology. They do so by weighing the epistemic quality of molecular evidence (e.g., additional precision and accuracy in toxicological risk assessment) against the perceived non-epistemic consequences of adopting molecular methods (e.g., making toxicology-based regulation more complicated, slowing the pace of risk assessment, and rendering ambiguous the evidence that is supposed to trigger policies protecting environmental health). Moreover, we have seen that the content of toxicologists’ inductive risk judgments pertaining to the adoption of molecular approaches has changed over time. Furthermore, the historical details of the case suggest that social and institutional processes have mediated this change.

The role that sociological and institutional processes may play in the formation of scientists’ decisions is not addressed in standard presentations of the inductive risk framework such as Douglas’. This is natural, because the inductive risk framework is not designed to address the role of social context in the formation of scientists’ judgments. Our question is then the following: How should an analysis of the molecularization of toxicology from the perspective of the inductive risk framework account for the relevant historical facts reviewed above? In other words, what would be a plausible interpretation of the relevant historical details of the molecularization from the perspective of Douglas’ account?

A plausible response would be to deny the philosophical significance of these historical details for an inductive-risk analysis of the case. On this view, the documented historical facts, which describe how some toxicologists came to accept the relevance of molecular methods and evidential sources, are merely incidental details of precisely how the toxicologists updated their inductive risk judgments. The role of sociological mediating factors, so this response goes, is redundant in an analysis of the case from the perspective of inductive risk. I will ultimately reject this response. To do so, I will first present how the standard presentation of the inductive risk framework can describe the change in toxicologists’ judgments while excluding the details about the role of social and institutional context. Then, I will discuss why this standard reconstruction of the case is incomplete.

Recall the logic of the inductive risk judgments that I summarized in Section 2. Douglas suggests that scientists consider the inductive risks of their methodological choices based on an assessment of the magnitude of the non-epistemic consequences of making a risky decision and an assessment of the evidential uncertainty (that is, an estimation of how accurate and confirmatory the available evidence is). According to this standard presentation of scientists’ decision-making, a change in the content of scientists’ inductive risk judgments may result from a change in one or both of these variables entering into the inductive risk calculation. That is, the content of scientists’ inductive risk judgments could follow a change in their assessment of the magnitude of the adverse consequences stemming from error, i.e., a change in their evaluation of the relevant non-epistemic considerations. The change could also follow an update in the confirmatory power of the available evidence (for example, through the production of more precise and accurate evidence, possibly by way of technological advancement).
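One schematic way to render this two-variable structure is the standard expected-loss formalization of decisions under inductive risk. (This is my own illustrative sketch, not a formula Douglas gives; the labels $p$, $C_{FP}$, and $C_{FN}$ are introduced here for exposition.)

```latex
% Let p = \Pr(H \mid E) stand for the confirmatory power of the available
% evidence E for hypothesis H, C_{FP} for the non-epistemic cost of wrongly
% accepting H, and C_{FN} for the non-epistemic cost of wrongly rejecting H.
% Minimizing expected loss yields the acceptance rule:
\text{Accept } H \;\Longleftrightarrow\; C_{FP}\,(1 - p) \;<\; C_{FN}\,p
  \;\Longleftrightarrow\; p \;>\; \frac{C_{FP}}{C_{FP} + C_{FN}}
```

On this rendering, the content of an inductive risk judgment can change along exactly the two routes just described: through a shift in the cost terms $C_{FP}$ and $C_{FN}$ (a re-evaluation of the relevant non-epistemic consequences) or through a shift in $p$ (a change in the confirmatory power of the available evidence).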

Now, in describing how toxicologists first rejected molecularization and later updated this judgment and adopted molecular types of evidence, we cannot reasonably attribute the change to a shift in their evaluation of the importance of public and environmental health. Nor can we cogently claim that the major advisory institutions of regulatory toxicology abandoned their pragmatic mission of producing evidence that can be used for regulatory action. Hence, the acceptance of molecular methods is not reasonably attributable to a shift in toxicologists’ assessment of the relevant non-epistemic consequences. Accordingly, if we want to stick to the standard model of the inductive risk framework, we would expect the shift in toxicologists’ judgment to have resulted from some improvement in the perceived confirmatory power (and epistemic quality) of the molecular tools and evidential sources for the purposes of regulatory toxicology. For instance, toxicologists might have reasoned that accepting molecular kinds of evidence would not only improve the accuracy and precision of toxicological risk assessment but also do so without bearing additional costs for the pragmatic, regulatory function of toxicology. Toxicologists might, therefore, have chosen to raise their standards for evidential quality, as that decision no longer posed serious inductive risks, contrary to what they initially thought.

If this rational reconstruction of the case of molecularization is persuasive, then toxicologists’ acceptance of molecular kinds of evidence can be explained purely in terms of Douglas’ standard presentation of the inductive risk model. The historical details of the case that I highlighted above (that is, how the social and institutional processes mediated the shifts in toxicologists’ judgments) would then be philosophically uninteresting details of precisely how toxicologists made their judgments about the relevance of molecular kinds of evidence for their inquiry. However, I argue that this reconstruction of the case is incomplete in an important respect, and that the role of the social context is better addressed in an analysis of the case in terms of the inductive risk framework. What, then, is missing in the presented reconstruction of the case of molecularization?

Admittedly, the molecular risk assessment models in contemporary regulatory toxicology are sophisticated enough to effectively fulfill the regulatory and societal purposes of contemporary toxicology. Nevertheless, this sophistication did not follow directly from some drastic advance in molecular technologies. On the contrary, the molecular methods used in contemporary toxicological research had already been used for decades by scholars in neighboring disciplines such as biomedicine and biology, and even in less applied parts of toxicology (such as the genetic branch of toxicology, which deals with the mechanisms of toxicity irrespective of policy-relevant questions), before these technologies were adopted in regulatory toxicology. Applied (regulatory) toxicology appears to be the last scientific field among its neighbors to have adopted molecular methods and made use of molecular evidential sources (see Frickel 2004 for an exhaustive historical account). Indeed, this fact is precisely why the molecularization of toxicology, or researchers’ initial resistance to it, is such an interesting case for science studies scholars in the first place (Frickel 2004; Shostak 2013). Hence, one cannot straightforwardly say that the molecular methods in toxicology had advanced so much that adopting them no longer bore the risk of making policy-relevant toxicological analysis less practical.

The reconstruction of toxicologists’ reasoning offered by the standard presentation of the inductive risk framework is thus not compelling, because it does not explain why toxicologists did not adopt the advanced technologies earlier. In my reading, the change in toxicologists’ judgments, and its graduality, can be explained by reference to the institutional and sociological processes that mediated regulatory toxicologists’ acceptance of molecular kinds of evidence. In other words, in the absence of these social and institutional processes, toxicologists’ acknowledgment of the epistemic virtues of molecular methods over the pathological ones would not, on its own, have been enough to change their inductive risk judgments. Only after these contextual changes took place did regulatory toxicologists gradually update the content of their inductive risk judgments and accept the relevance and aptness of molecular approaches for their purposes. The social and institutional processes (including broad sociological processes and the institutional activities of the communities of toxicologists) therefore played a philosophically significant role in mediating the changes in toxicologists’ inductive risk judgments.

This argument supports the need to account for social processes in an analysis of cases like the molecularization through the language of the inductive risk framework. It also motivates my proposal to integrate Douglas’ inductive risk analysis with neighboring philosophical accounts that can accommodate the idea that social contextual factors may sometimes scaffold the formation of scientists’ evidential judgments. Discussing how this can be done will be my next task.

In order to accomplish this task, I propose that we need not radically transform Douglas’ inductive risk framework so that it also accounts for the relevant social dimensions of scientific reasoning. Such a modification would be unnecessary: Douglas’ account and the broader inductive risk framework are not designed to examine the role of social and institutional factors in scientists’ reasoning, and, more importantly, doing so is not their main philosophical function. The main function of the inductive risk framework is to give us a plausible rational account of scientific reasoning in the face of factual uncertainty and the pertinent need to consider the pragmatic and moral consequences of scientists’ decisions. In line with Douglas’ compatibilist reading of the literature on values in science (2015), my strategy will be to show that the inductive risk framework is compatible and works in harmony with philosophical accounts that accommodate the constitutive role social and institutional processes may play in mediating scientists’ research decisions. I will then demonstrate how such an integrated understanding of the inductive risk framework helps us address the novel aspects of the case identified so far.

4. Integrating Douglas’ Account with Longino’s Contextual Empiricism

I have so far argued that it is desirable for Douglas’ account and the inductive risk framework to account for the social and institutional processes that mediate scientists’ evidential judgments when examining cases like the molecularization of regulatory toxicology. I have also proposed that a reasonable way to do so is to show how Douglas’ account can be complemented by a relevant philosophical framework that gives special attention to the contribution of social and institutional surroundings to individual scientists’ research decisions. What immediately comes to mind is Helen Longino’s contextual empiricist conception of scientific evidence (1979, 1990, 2002). Longino’s framework is suitable for integration with Douglas’ account of values in science in the context of our discussion, not only because the former entails a social perspective on scientists’ decisions about evidence (or, more precisely, on the question of why certain objects and states of affairs are considered by scientists as relevant evidence), but also because Douglas (2004, 2009, p. 18) considers Longino’s broader framework compatible with and complementary to her own project. In this section, I will discuss how Longino’s conception of evidence can be integrated with Douglas’ inductive risk framework and how this integrated account works in the analysis of the molecularization case.

4.1. Integrating Longino’s and Douglas’ Accounts

Helen Longino has an explicit conception of evidence and evidential reasoning that describes how objects and states of affairs acquire evidential status (1979). Longino famously argues that there is no intrinsic feature of any state of affairs or object that makes it evidential for a given hypothesis, and that states of affairs or objects acquire evidential status for hypotheses or theories only in light of some contextual background assumptions (1990). These background assumptions may sometimes be products of the broader social and cultural context within which scientists work, and of the micro-sociological institutional context, including disciplinary politics, methodological conventions, financial and technological constraints, institutional interactions, and disciplinary preferences and interests (Longino 1990, 2008).

Against pessimistic philosophical views about the social dimensions of science, Longino famously argued that the social character of science is key to understanding the rationality and trustworthiness of scientific inquiries. Most prominently, for instance, she argues that social institutions of science that facilitate critical interactions among the members of a scientific community, or among members of different communities, can eliminate the potential dominance of biased viewpoints over others (1990). In the same vein, in her more recent work (2002), Longino has argued that the social and institutional background context of science, which historical studies of science examine thoroughly, can in principle contribute to the rational progress of individual scientists’ research decisions rather than degrade or bias them.3

According to Longino’s framework, then, toxicologists’ acceptance of new, molecular kinds of data as relevant evidence for their inquiry can plausibly be mediated by the sociological and institutional processes such as the ones I have reviewed here. In line with Longino’s ideas, these processes can, in principle, play a constitutive role in the sense that they scaffold toxicologists’ inductive risk considerations about the kinds of evidence. I suggest that this theoretical possibility provides us with a basis for integrating Longino’s and Douglas’ accounts.

While Longino’s account supplies a prominent theory of how the social processes in science can plausibly change scientists’ judgments about evidence and methods, Douglas’ account supplies a theory of the principles that make scientists’ inductive reasoning reliable and legitimate in the face of factual uncertainty and the pertinent need to consider moral and pragmatic consequences of scientific judgments.

My suggestion is that these two theories complement each other in describing the case of molecularization I have examined here. Specifically, Longino’s theory is suitable and helpful in describing the case of molecularization because it accounts for the philosophically significant role that social processes played in changing toxicologists’ inductive risk judgments. At the same time, Longino’s framework is complemented by the inductive risk framework, as her own account of values in science does not put forward a theory of individual scientific decision-making under inferential risk.

Now, let’s put this integrative understanding to use and provide an example of how Douglas’ inductive risk framework and Longino’s contextual empiricism complement each other in analyzing the case.

4.2. Back to the Case of Molecularization

Armed with the inductive risk framework integrated with Longino’s contextual empiricism, we can address some of the puzzling aspects of the case again. Specifically, we can account for why some toxicologists only gradually accepted molecular methods about which they were initially skeptical, in a way that seems to be an exception to Douglas’ theory of scientific reasoning. Moreover, we can supply a plausible explanation of the persisting disagreement between different communities of toxicologists about the acceptability of molecular approaches to regulatory toxicology.

Remember that regulatory toxicologists were initially skeptical toward molecular kinds of evidence and thought that molecular methods increased inductive risks even though these methods were conducive to a more precise and accurate measurement of environmental health risks. This is puzzling from the perspective of the inductive risk framework, a fact that Hicks (2018) notes as an important counterexample to Douglas’ norm of reasoning under inductive risk, as I flagged earlier in Section 3.2. While Hicks gives a plausible explanation of this apparent exception by suggesting that molecular models increase evidential uncertainty at another level (namely, in relation to the question of what ranges of potency are relevant for toxicological risk assessment [p. 170]), the integrated account proposed here complements Hicks’ interpretation by explaining why this kind of apparent exception may have arisen and how toxicologists’ reasoning is nevertheless in line with the inductive risk framework. It goes as follows.

Individual toxicologists’ evidential judgments can be intransigent at first but change gradually in response to a set of social and institutional changes, as the case of molecularization shows. While the observed judgment of the toxicologists who were initially skeptical about the adoption of molecular tools seems to be an exception to Douglas’ theory of scientific reasoning, the toxicologists working at the major regulatory institutions did gradually update the content of their inductive risk judgments in line with Douglas’ norms, following a set of social changes (i.e., active institutional attempts to re-evaluate the contribution of molecular sources to regulatory toxicology and to establish the relevance of molecular tools for regulatory toxicologists). In other words, toxicologists came to accept that adopting molecular approaches does not necessarily increase inductive risks, but this change in their judgment was mediated at least partly by a set of social and institutional processes that helped major communities of toxicologists establish new methodological standards. Accordingly, once we acknowledge the mediating role of the social and institutional processes in the formation of toxicologists’ judgments, and accept that the updating of these judgments can be a gradual process, the toxicologists’ observed reasoning does not conflict with Douglas’ inductive risk framework. Douglas’ account of scientists’ reasoning remains descriptively adequate in this context, but the structure of reasoning it describes seems to be instantiated gradually in practice and scaffolded by a set of institutional and social factors, in line with Longino’s constitutive understanding of the social dimensions of scientific judgment-formation.

Consequently, by showing how Douglas’ account is complemented by a suitable philosophical account such as Longino’s contextual empiricism, we can plausibly explain an interesting aspect of the case of the molecularization and do so from the perspective of the inductive risk framework.

The integrated account can also provide us with an explanation of why there are still ongoing disagreements between different communities of toxicologists about the acceptability of molecular approaches to regulatory toxicology. To make this point, I will refer to a concrete example from Hicks’ 2018 study. Hicks (2018) focuses on a recent controversy in regulatory toxicology in which toxicologists assess a set of chemicals as potential endocrine disruptors by using a molecular data-generating model referred to as the ER model. Hicks observes that different toxicologists’ evaluations of the inductive risks attached to the use of certain molecular evidence-gathering methods for toxicological risk assessment are inconsistent. Moreover, they report, this inconsistency is reflected in the conflicting risk estimations calculated by different groups of regulatory toxicologists who made use of the same methods. Specifically, the NRDC (Natural Resources Defense Council), an environmental advocacy organization, has argued against the use of the molecular model in question because, NRDC claims, the method generates more inconclusive data-points than the conventional methods do, however precise and accurate its results might be. Given its purpose of environmental protection, NRDC does not favor the use of the method because, when used for regulatory assessments, it motivates less aggressive regulation of the industrial use of the chemicals in question. Hicks contrasts NRDC’s judgment with that of the EPA-NIEHS collaboration. The researchers and regulators at the EPA-NIEHS collaboration are in favor of adopting the molecular method (the ER model). Unlike NRDC’s, EPA-NIEHS’s use of the same model yields fewer inconclusive results and motivates more aggressive regulatory action when used to justify regulatory decisions. Hicks reports that the difference between the two judgments results from “different ways of handling inconclusive chemicals” that score low-range response rates. While EPA-NIEHS’s methodological approach allows researchers to interpret such data as evidence of toxicity, NRDC’s methodology interprets the same results as “inconclusive” (pp. 168–69).

Hicks hypothesizes that the discrepancy between the views of the two groups of toxicologists might result from a difference in how much they value the protection of the environment: “it is highly plausible that [NRDC’s] calculations are directly motivated by their concern to protect human health and the environment” (p. 170). But, at the same time, they rightly observe that NRDC’s resistance to EPA-NIEHS’s approach is notable because “EPA-NIEHS’s interpretation of inconclusive chemicals would be much more protective of these values” (p. 170).

Since Hicks is interested in another aspect of the case, they stop there and do not question the persistence of this disagreement. But the disagreement between these groups of toxicologists about the content of inductive risks is hardly attributable to differences in their evaluations of the relevant non-epistemic values. Indeed, the EPA and NIEHS are also explicitly motivated by the concern to protect environmental health.

The integrated account I have proposed here supplies another plausible hypothesis explaining the disagreement between these two groups of toxicologists, which again complements Hicks’ analysis. In my reading of the example, it is plausible that the two groups of toxicologists have gone through different institutional processes. The EPA and NIEHS have reached the judgment that the use of molecular methods is effectively conducive to the protection of health, and they have built a new set of shared methodological and technical standards regarding how to use molecular tools effectively for the regulatory purposes of toxicology. Recall the social and institutional processes that mediated the formation of this consensus, such as those described by the historical studies of toxicology or activities such as the formation of the ToX21 collaboration. In the context of Hicks’ case study, EPA-NIEHS’s judgment concerns how to interpret low-range response rates generated through the use of molecular methods. While the EPA-NIEHS consensus advises counting low-range potency rates as conclusive evidence of toxicity in the context of endocrine disruption studies, NRDC does not seem to accept this new methodological standard even though doing so would be more protective of the non-epistemic values that they aim to protect.4

Accordingly, it is a plausible hypothesis that the researchers at NRDC have not participated in those social and institutional processes that helped the researchers at EPA-NIEHS update their inductive risk judgments about the acceptability of the molecular approaches. Recall, for instance, the consensus-building forums that aimed to introduce and promote the use of new methods in regulatory toxicology, which prominent institutions of regulatory toxicology such as NIEHS initiated (Shostak 2013, pp. 160–64; see also p. 176 for an example of such forums with the relevant environmental justice groups). From the perspective of the integrated account of the inductive risk framework presented here, such forums are good examples of institutional processes that could help the researchers from environmental advocacy groups such as NRDC share or update their methodological judgments.5

This example is yet another demonstration of why the mediating role of social and institutional processes is philosophically significant for our understanding of toxicologists’ inductive risk judgments, and why integrating Douglas’ account with suitable philosophical frameworks, such as Longino’s contextual empiricism, is useful to reflect on the cases such as the molecularization of toxicology.

5. Conclusion

We have seen that prominent regulatory toxicologists’ choice of molecular kinds of evidence over traditional ones is informed by an assessment of how suitable the different methods are for their mission of protecting public health and the environment. In particular, that assessment involves accounting for non-epistemic considerations such as the seriousness of potentially harmful consequences that could follow from toxicologists’ choices among different kinds of evidence (e.g., slowing the pace of producing regulation-related analysis or making toxicological assessments less intelligible for regulatory action). The case I have presented here, therefore, instantiates a context in which the inductive risk framework applies to scientists’ choices across different types of evidence.

In this context, I have offered a description of how and why the content of regulatory toxicologists’ inductive risk judgments changed over time, drawing on the relevant historical studies of contemporary regulatory toxicology. In doing so, I have focused on some aspects of toxicologists’ judgments that Douglas’ inductive risk framework does not purport to address, namely the initial rigidity of their evidential judgments, the gradual updating of these judgments, and the role of institutional and social processes in mediating toxicologists’ judgments. I have argued for the philosophical significance of these aspects of the case and suggested that Douglas’ account, and the broader inductive risk framework, should be able to address them suitably.

I have proposed an account that integrates Douglas’ inductive risk account with Longino’s contextual empiricism, a suitable philosophical account of evidence that can accommodate the idea that social contextual factors may sometimes plausibly mediate scientists’ evidential judgments. I have then shown how Longino’s contextual empiricism and the inductive risk framework fruitfully complement each other in analyzing the specific questions the toxicology case raises, which are also of interest to the specialized philosophical literature.

The case of the molecularization of regulatory toxicology motivates the need to consider how our sophisticated philosophical and historical accounts of scientific judgment relate to each other. The integrated account proposed here exemplifies such an exploration: it draws connections between Douglas’ account of values in scientific reasoning and Longino’s contextual empiricism and puts them to use in describing and understanding toxicologists’ acceptance of molecular approaches to regulatory toxicology.

Notes

1. Notice that the examples in the previous section, which Douglas examines in her work, pertain to classical toxicology, which is based primarily on pathological studies.

2. The debate over the acceptability of molecular evidence and tools in regulatory toxicology may still persist among different groups of toxicologists (see, for instance, Hicks’s [2018] analysis of the acceptability of a particular molecular model for endocrine risk assessment). In light of my analysis, I also reflect on how the inductive risk framework may interpret such disagreements among different communities of toxicologists (such as the one described by Hicks) later in Section 4. However, it is beyond dispute that a substantial shift has been taking place toward the adoption of molecular methods, that the major institutions of toxicology no longer consider the traditional methods the gold standard for conducting regulatory risk assessments, and that their strategic aim is to move further toward the adoption of molecular tools for such assessments.

3. Longino (2002) contrasts her account with approaches that consider “the social” harmful to rational decision-making in science. While prominent scholars in the sociology of science interpret the documented influence of the social context on scientists’ decisions as evidence of the irrationality of science, prominent philosophers of science have developed normative accounts that treat social factors as detrimental or irrelevant to rationality in science. Longino (2002) subscribes to neither of these traditions and instead seeks a cogent reconciliation between these two canonical perspectives.

4. In this specific example, it seems rational for NRDC to subscribe to EPA-NIEHS’s acceptance of the molecular method (the ER model) and the pertinent methodological standards. Note, however, that my account does not purport to make a normative assessment of the success of regulatory toxicologists’ changing judgments or the quality of the institutional and social processes they have attended to. Engaging in such an evaluation would require one to qualify the integrated account proposed here in normative terms by specifying the conditions under which the mediating social processes are conducive to reaching the right or desirable content of inductive risk judgments. One could ask, for instance, how far Longino’s normative criteria for social interaction between scientific communities (such as enhanced inclusiveness) are fulfilled in the case of molecularization (Longino 1990), or to what extent financial and industrial interests have permeated social and institutional processes like the ones described here, a question many philosophers of science would rightly press (e.g., Elliott 2014). I thank an anonymous referee of this journal for encouraging me to highlight these important questions, which I do not address in this article.

5. It is also intriguing to analyze how and why the mediating role of institutional and social processes becomes dysfunctional or fails to generate (desirable) forms of agreement between different communities of scientific researchers, in this case or in similar cases. This is another issue I do not attempt to analyze in this article.

References

Betz, Gregor. 2013. “In Defence of the Value Free Ideal.” European Journal for Philosophy of Science 3 (2): 207–220.

Biddle, Justin B. 2016. “Inductive Risk, Epistemic Risk, and Overdiagnosis of Disease.” Perspectives on Science 24 (2): 192–205.

Biddle, Justin B., and Rebecca Kukla. 2017. “The Geography of Epistemic Risk.” Pp. 215–237 in Exploring Inductive Risk. Edited by Kevin C. Elliott and Ted Richards. Oxford: Oxford University Press.

Bright, Liam Kofi. 2018. “Du Bois’ Democratic Defence of the Value Free Ideal.” Synthese 195 (5): 2227–2245.

Brown, Matthew J. 2013. “Values in Science beyond Underdetermination and Inductive Risk.” Philosophy of Science 80 (5): 829–839.

Douglas, Heather. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67 (4): 559–579.

Douglas, Heather. 2004. “The Irreducible Complexity of Objectivity.” Synthese 138 (3): 453–473.

Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.

Douglas, Heather. 2015. “Values in Science.” Pp. 610–630 in Oxford Handbook of Philosophy of Science. Vol. 1. Edited by Paul Humphreys. Oxford: Oxford University Press.

Elliott, Kevin C. 2011. “Direct and Indirect Roles for Values in Science.” Philosophy of Science 78 (2): 303–324.

Elliott, Kevin C. 2014. “Financial Conflicts of Interest and Criteria for Research Credibility.” Erkenntnis 79: 917–937.

Elliott, Kevin C., and Ted Richards. 2017a. Exploring Inductive Risk: Case Studies of Values in Science. Oxford: Oxford University Press.

Elliott, Kevin C., and Ted Richards. 2017b. “Exploring Inductive Risk: Future Questions.” In Exploring Inductive Risk: Case Studies of Values in Science. Oxford: Oxford University Press.

EPA (US Environmental Protection Agency). 2019. United States Federal Government Tox21 Collaboration: Advancing Toxicology to Improve Environmental Health and Pharmaceutical Safety. April 2019. https://tox21.gov/wp-content/uploads/2019/04/Tox21_FactSheet_Apr2019.pdf

Frickel, Scott. 2004. Chemical Consequences: Environmental Mutagens, Scientist Activism, and the Rise of Genetic Toxicology. New Brunswick, NJ: Rutgers University Press.

Hicks, Daniel J. 2014. “A New Direction for Science and Values.” Synthese 191 (14): 3271–3295.

Hicks, Daniel J. 2018. “Inductive Risk and Regulatory Toxicology: A Comment on de Melo-Martín and Intemann.” Philosophy of Science 85: 164–174.

Jasanoff, Sheila. 1990. The Fifth Branch: Science Advisers as Policymakers. Cambridge, MA: Harvard University Press.

Jeffrey, Richard C. 1956. “Valuation and Acceptance of Scientific Hypotheses.” Philosophy of Science 23 (3): 237–246.

John, Stephen. 2015. “Inductive Risk and the Contexts of Communication.” Synthese 192 (1): 79–96.

Kincaid, Harold, John Dupré, and Alison Wylie. 2009. Value-Free Science? Ideals and Illusions. Oxford: Oxford University Press.

Kitcher, Philip. 2011. Science in a Democratic Society. Amherst: Prometheus Books.

Longino, Helen E. 1979. “Evidence and Hypothesis: An Analysis of Evidential Relations.” Philosophy of Science 46 (1): 35–56.

Longino, Helen E. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press.

Longino, Helen E. 2002. The Fate of Knowledge. Princeton: Princeton University Press.

Longino, Helen E. 2008. “Values, Heuristics, and the Politics of Knowledge.” Pp. 68–85 in The Challenge of the Social and the Pressure of the Practice: Science and Values Revisited. Edited by M. Carrier, D. Howard, and Janet A. Kourany. Pittsburgh: University of Pittsburgh Press.

Melo-Martín, Inmaculada de, and Kristen Intemann. 2016. “The Risk of Using Inductive Risk to Challenge the Value-Free Ideal.” Philosophy of Science 83 (4): 500–520.

National Academies of Sciences, Engineering, and Medicine. 2017. Using 21st Century Science to Improve Risk-Related Evaluations. Washington, DC: The National Academies Press.

National Toxicology Program. 2004. Toxicology in the 21st Century: The Role of the National Toxicology Program. February 2004. https://ntp.niehs.nih.gov/ntp/main_pages/ntpvision.pdf

Rudner, Richard. 1953. “The Scientist Qua Scientist Makes Value Judgments.” Philosophy of Science 20 (1): 1–6.

Shostak, Sara. 2005. “The Emergence of Toxicogenomics: A Case Study of Molecularization.” Social Studies of Science 35: 367–403.

Shostak, Sara. 2013. Exposed Science: Genes, the Environment, and the Politics of Population Health. Berkeley: University of California Press.

Steel, Daniel, and Kyle Powys Whyte. 2012. “Environmental Justice, Values and Scientific Expertise.” Kennedy Institute of Ethics Journal 22: 163–182.

Author notes

The author would like to thank two anonymous referees of the journal for their useful and challenging feedback.