The potential for financial conflicts of interest (COIs) to influence scientific research has become a significant concern. Some commentators have suggested that the development of standardized study protocols could help to alleviate these problems. This paper identifies two problems with this solution: (1) scientific research incorporates numerous methodological judgments that cannot be constrained by standardized protocols; and (2) standardization can hide significant value judgments. These problems arise because of four weaknesses of standardized guidelines: incompleteness, limited applicability, selective ignorance, and ossification. Therefore, the standardization of study designs should not serve as an alternative to addressing the interests and power relations that pervade science policy making. Policy makers should take at least two steps to prevent powerful interest groups from co-opting standardized guidelines. First, the development and review of standardized guidelines should be made as transparent as possible and should be subjected to broadly based deliberation. Second, standardized guidelines should be supplemented with efforts to scrutinize the conditions under which financial COIs tend to have the most worrisome effects so that additional steps can be taken to eliminate or mitigate those conditions.

The potential for financial conflicts of interest (COIs) to influence scientific research in problematic ways has recently become a significant topic of discussion across numerous fields. The chemical, petroleum, pharmaceutical, and tobacco industries have all been accused of suppressing evidence that their products are harmful, producing studies with questionable methodologies, generating questionable reinterpretations of studies that challenge their products, and working with public relations (PR) firms and front groups to mislead the public about relevant science (see e.g., McGarity and Wagner 2008; Michaels 2008; Oreskes and Conway 2010; Elliott 2011). In an effort to address these influences, government agencies and universities have instituted various policies, which include requirements to disclose financial COIs in publications, the creation of trial registries so that unfavorable studies cannot be hidden as easily, increased enforcement of COI policies for participation on government advisory panels, and the development of criteria for assessing the credibility of research (Conrad and Becker 2011; Resnik 2006; Resnik and Elliott 2013).

Another approach for addressing financial COIs that has drawn recent attention is the development of standardized study protocols (see e.g., Conrad and Becker 2011; Ramachandran et al. 2011). The goal of this strategy is to standardize the design of policy-relevant research (e.g., safety studies of new chemicals or pharmaceuticals) so that it is more difficult for investigators to design studies in a manner that generates predetermined outcomes. The present paper explores the merits of this approach by drawing on literature from the philosophy of science as well as science and technology studies (STS). This literature is very helpful for thinking through the complexities of standardizing study designs because it highlights the methodological judgments involved in scientific research and the ways in which those judgments can be made appropriately or inappropriately.

The next section of the paper provides background about the influences of financial COIs on research and the major proposals for addressing them. It also introduces the strategy of standardizing study protocols in an effort to constrain study designs. Section 3 then highlights two major difficulties with this solution. First, scientific research incorporates so many methodological value judgments that it is difficult to constrain them through the standardization of protocols. Second, the standardization of protocols can hide significant value judgments that merit further discussion. The paper clarifies and elaborates on these two difficulties by highlighting four potential weaknesses of standardized study designs: they tend to be incomplete, they have limited applicability, they promote selective ignorance, and they can easily ossify. Section 4 argues that some of the potential weaknesses associated with standards can be alleviated by promoting broadly based deliberation about them and about the results of studies. But it also argues that those who wish to address financial COIs in scientific research would do well to adopt other strategies, such as identifying the conditions under which the worst problems occur and attempting to eliminate or mitigate those conditions. Besides helping to address financial COIs, the analysis in this paper should promote greater understanding of the strengths and weaknesses of standardization, which is receiving increasing attention both in the regulatory environment and in the science-studies literature (see e.g., Lampland and Star 2009; Busch 2011).

In recent years, the influences of financial COIs on scientific research have become a topic of increasing social concern. It is now well known that the tobacco industry produced a great deal of questionable research throughout the latter half of the twentieth century in an effort to defend the safety of smoking and exposure to second-hand smoke (Proctor 2012). The industry also famously employed a variety of related strategies, including cultivating well-known academics to support their products, employing powerful PR firms, and funding think tanks and front groups (McGarity and Wagner 2008; Oreskes and Conway 2010; Proctor 2012). It has now become clear that other industry groups used similar strategies to defend products like lead, asbestos, and vinyl chloride (Markowitz and Rosner 2002). In their recent book Merchants of Doubt (2010), Naomi Oreskes and Erik Conway show that many of the same scientists, PR firms, and front groups worked to generate confusion and misleading science about a series of environmental issues, including acid rain, the ozone hole, and climate change (see also McCright and Dunlap 2010). David Michaels (2008) points out that government departments and agencies are prone to engage in some of the same misleading strategies when they face incentives to do so.

Some of the most enlightening empirical information about the effects of financial COIs on research has come from analyses of the pharmaceutical industry. Over the past twenty years, a number of studies have compared the results of research on the effectiveness of particular drugs when the research is funded by an entity with a financial interest in the outcome (such as the manufacturer of one of the drugs) versus when the research is funded by an entity without significant financial interests in the outcome (such as the National Institutes of Health). An influential meta-analysis of eleven of these studies found that, in every single study, research funded by entities with a financial interest in the outcome was more likely to yield results favorable to those entities than comparable research funded by entities without obvious financial interests (Bekelman et al. 2003; see also Sismondo 2008 and Lundh et al. 2012). By combining data from all eleven studies, the meta-analysis concluded that research funded by those with a financial interest in the outcome was about four times as likely to yield results that favored the funders. Moreover, when the primary outcomes of studies are not statistically significant, industry-funded biomedical studies are more likely than non-industry funded studies to report positive findings in subgroups (Sun et al. 2011). Although fewer data on financial COIs have been gathered outside the biomedical arena, some similar results have been identified with chemical safety studies (see e.g., Fagin et al. 1999, p. 51; vom Saal and Hughes 2005). A study of financial influences on nutrition research also found that research funded by entities with financial interests in the outcome was four to eight times as likely to produce results favorable to the funder as compared to similar research funded by a disinterested party (Lesser et al. 2007).
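To put the "four times as likely" figure in context: findings of this kind are typically reported as pooled odds ratios. The sketch below, which uses entirely hypothetical counts rather than the actual data analyzed by Bekelman et al. (2003), illustrates how an odds ratio is computed for a single study and how a fixed-effect meta-analysis combines several studies by inverse-variance weighting on the log scale.

```python
import math

# Hypothetical (favorable, unfavorable) outcome counts for industry-funded
# vs. independently funded trials in three made-up studies -- illustrative
# numbers only, not the data from Bekelman et al. (2003).
studies = [
    ((80, 20), (50, 50)),
    ((45, 15), (30, 40)),
    ((60, 30), (35, 45)),
]

def log_odds_ratio(study):
    """Log odds ratio and its large-sample variance for one 2x2 table."""
    (a, b), (c, d) = study  # a, b: industry-funded; c, d: independent
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

# Fixed-effect pooling: weight each log odds ratio by the inverse of its variance.
weighted_sum = total_weight = 0.0
for study in studies:
    log_or, var = log_odds_ratio(study)
    weighted_sum += log_or / var
    total_weight += 1 / var
    print(f"study OR = {math.exp(log_or):.2f}")

print(f"pooled OR = {math.exp(weighted_sum / total_weight):.2f}")
```

Note that a pooled odds ratio near four does not mean that four times as many industry-funded studies were favorable; it compares the odds of a favorable result across the two funding categories.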

As various figures have noted, one needs to be careful about how to interpret these correlations between funding sources and research outcomes, because they can be generated by a variety of different causal processes. Arthur Schafer discusses at least four potential causes for the correlations that have been identified in pharmaceutical industry publications: (1) industry might be good at recognizing in advance which products are likely to be most effective; (2) trials sponsored by industry might be of poor quality (yielding results favorable to their products); (3) the industry studies might be biased (either consciously or unconsciously); or (4) there might be publication bias (Schafer 2004, p. 18).

Schafer (2004) argues that the final two causes are more plausible than the others and that there is already significant evidence for publication bias. In other words, pharmaceutical companies are most likely to publish studies that are favorable toward their products, and they are less likely to publish findings that are unhelpful for their marketing goals. This is part of a broader “publication planning” effort on the part of these companies. They work with professional publication planners to determine what kinds of papers would be most helpful for promoting their products, which academics would be most influential to have as authors, and which “key opinion leaders” should be cultivated to spread the word about their findings (Elliott 2004; Sismondo 2007). Richard Smith (former editor of the British Medical Journal) argues that, as part of this effort, the pharmaceutical industry has become very skilled at designing studies and interpreting data in ways that do not appear obviously biased to peer reviewers but that are likely to generate preferred results (Smith 2005; see also Michaels 2008).

Unfortunately, developing adequate approaches for alleviating these influences of financial COIs on scientific research is no easy matter. I have previously argued that scientific practice is permeated by so many different subtle methodological judgments that it is very difficult to identify and manage all the problematic ways that financial COIs can affect research (Elliott 2008; see also de Melo-Martin and Intemann 2009). At present, perhaps the most common approach for addressing COIs is to require that investigators disclose any relevant financial ties when they publish their work. But a fascinating body of psychological research indicates that this solution is likely to be fairly unhelpful and perhaps even counterproductive. Psychologists have found that those who receive information about COIs are very poor at determining how much to discount the quality of the information that they receive, and those who disclose COIs may actually provide more biased information than they otherwise would (Cain et al. 2005; Loewenstein et al. 2012; see also de Melo-Martin and Intemann 2009).

Other proposed solutions face similar challenges. For example, another common suggestion for addressing financial COIs is to fund more research on policy-relevant issues via government agencies that do not have a significant stake in the results (e.g., the US National Science Foundation or the National Institutes of Health) (see e.g., Angell 2005; APHA 2003; Shrader-Frechette 2007; Elliott 2011). While this proposal might be promising in principle, it faces major practical difficulties. Namely, industry currently funds about two-thirds of scientific research and development in the US, including the vast majority of safety studies for new products (Conrad and Becker 2011; Elliott 2011, p. 6). Given that most governments are skeptical of increased spending, it would be very difficult to replace much of this industry research with science funded by government agencies. Other commentators have suggested that individual researchers might be able to counteract the effects of financial COIs by developing virtuous character traits that enable them to minimize biasing influences (Curzer and Santillanes 2012). But this proposal faces at least two difficulties. First, people tend to overestimate the extent of their objectivity and their ability to resist the influences of COIs (Cain et al. 2005; Loewenstein et al. 2012). Second, many of the figures paid by interest groups to obtain specific sorts of scientific results do not appear motivated to resist these influences (see e.g., Michaels 2008; Oreskes and Conway 2010).

Given the difficulties associated with these common approaches for responding to financial COIs, some authors have suggested that the standardization of study protocols could be a promising alternative solution. This was a central recommendation in the concluding report of a group at the University of Minnesota that received National Science Foundation (NSF) funding for a four-year project to study potential oversight structures for research on nanotechnology (Ramachandran et al. 2011). Based on an analysis of five previous oversight experiences with emerging technologies in the United States (involving genetically engineered foods, pharmaceuticals, medical devices, workplace chemical exposures, and gene therapy), they concluded that common weaknesses in these systems include “low transparency, little public input, prominent conflicts of interest, and limited financial resources” (Ramachandran et al. 2011, p. 1352).

In order to compensate for the limited financial resources available to regulatory agencies, the Minnesota group suggested that industry should play a major role in generating safety data for new nanotechnologies. They acknowledged that this could exacerbate the problem of conflicts of interest, but they suggested a two-pronged approach for addressing these conflicts. First, they suggested that standardized protocols and procedures should be developed that both manufacturers and regulatory agencies can follow. Second, they suggested that a coordinating agency could provide external review and vetting of the safety data produced. Part of the idea behind standardizing study protocols is that it makes it more difficult for investigators to take steps to generate preferred outcomes. For example, the standards could prevent investigators from running studies for too short a period of time, using too few animals, using an inappropriate feed, or employing various other tactics that can mask toxic effects (Shrader-Frechette 2007).

This emphasis on using standardized study protocols to constrain study designs can also be found in a recent article in Environmental Health Perspectives that attempted to synthesize various criteria for assessing the credibility of scientific studies. The authors, James Conrad and Richard Becker (2011), focused especially on assessing the credibility of chemical safety studies performed for regulatory purposes. They acknowledged that many people have become suspicious of these studies because the overwhelming majority of them are funded by those who manufacture or use the chemicals under investigation. Nevertheless, Conrad and Becker suggest that it makes more sense to evaluate the credibility of these studies based on criteria that do not appeal to the studies’ sources of funding.

Conrad and Becker’s paper (2011) provides ten criteria, such as whether financial COIs have been disclosed, whether principal investigators are legally guaranteed freedom to publish and interpret results, whether data and methods are publicly disclosed, and whether the study has undergone peer review or some other form of external review. I have previously argued that many of these criteria are relatively unhelpful, but one of their most promising criteria is whether the study has a transparent research objective and an appropriate design for achieving that objective (Elliott 2014). For the purposes of the present paper, the crucial point is that Conrad and Becker appeal to standardized study protocols in order to determine whether this criterion is met. They argue that those evaluating studies should consider whether the studies meet Good Laboratory Practice (GLP) or Good Epidemiological Practice (GEP) guidelines and whether they are performed according to agency-approved study protocols that are typically generated by the Organization for Economic Cooperation and Development (OECD). The remainder of this paper considers how well these standards can serve the goal of ensuring study credibility.

The main argument of this section is that when one reflects on the myriad ways in which value judgments pervade scientific practice, it becomes apparent that the standardization of study design faces significant difficulties as a response to financial COIs. Numerous STS scholars, including many philosophers of science, have recently elucidated the value-laden character of scientific reasoning (see e.g., Kitcher 2001; Longino 2002; Douglas 2009; Kourany 2010; Elliott 2011). According to these scholars, scientists have to make methodological decisions that cannot be decided in a simple rule-governed fashion; they have to weigh the importance of various desiderata when deciding what questions to ask about a particular phenomenon, how to design studies to investigate it, what terminology or categories to employ for describing it, what background assumptions to adopt, how to interpret ambiguous results, and what standards of evidence to demand when drawing conclusions.

Because of all these methodological choices, scientific reasoning becomes value-laden in at least two different senses. First, as noted in the previous paragraph, scientists are forced to weigh a range of desiderata (i.e., values) when making methodological choices. Second, when these methodological choices have social consequences, such as determining whether particular products are deemed harmful or whether particular groups of people receive compensation for being harmed, they become value-laden in an additional (ethical and political) sense. As Heather Douglas (2009) has argued, scientists have responsibilities to consider the potentially harmful social consequences of their methodological decisions, and therefore they need to include ethical reasoning about these consequences among the other desiderata that they factor into their decisions. Admittedly, making methodological choices (like deciding what strain of rats to use in a toxicology study) is not as explicitly normative as deciding what level of risk people should be exposed to. Nevertheless, these issues start to blur together once one recognizes that subtle methodological choices can determine whether a chemical is deemed to have a high level of risk or not, and therefore whether it ends up being regulated (Douglas 2009; Elliott 2011).

The prevalence of these methodological value judgments throughout scientific reasoning poses two problems for using standardized study protocols to address financial COIs. First, it is unrealistic to expect standardized study guidelines to prevent the wide array of potential influences of financial COIs on all these judgments. Second, standardized guidelines can hide significant value judgments, insofar as they impose design features that are appropriate in some scientific contexts but not in others. Therefore, they can be exploited (either intentionally or unintentionally) to advance the goals of specific interest groups (see e.g., Ottinger 2010). In order to highlight and illustrate these two problems, it is helpful to reflect on four potential weaknesses of standardized protocols: incompleteness, limited applicability, selective ignorance, and ossification.

First, standardized protocols are often incomplete, in the sense that they are typically insufficient to ensure that studies actually have the qualities that are desired in a particular context. A perfect example of this sort of incompleteness comes from Conrad and Becker’s (2011) suggestion that those evaluating the credibility of studies should consider whether they have been performed according to GLP guidelines. The problem with this dependence on GLP standards is that they were developed by the U.S. Food and Drug Administration (FDA), and later adopted by the Environmental Protection Agency (EPA), as a response to industry abuses in which researchers falsified or fabricated their data (Myers et al. 2009). Therefore, they focus primarily on documenting procedures so as to ensure that all reported results actually reflect what was done in the lab. They do not ensure that these procedures are actually appropriate for answering the questions under investigation. For example, a number of researchers argued that the U.S. FDA and the European Food Safety Authority gave too much weight to industry-funded GLP studies when they evaluated the safety of bisphenol A, even though those GLP studies may not have employed the best new techniques for identifying potential health hazards (Myers et al. 2009).

Conrad and Becker (2011) would presumably respond that when GLP studies are performed for regulatory purposes, the protocols followed by investigators are typically those that have been specified and approved by agencies such as the FDA or the EPA. These protocols are often proposed by international bodies such as the OECD, and they undergo validation to ensure that they are appropriate for answering the regulatory questions under consideration. Therefore, even though GLP guidelines themselves are incomplete, in the sense that they do not ensure that appropriate protocols are followed, they are conjoined with other standards that address this source of incompleteness.

Unfortunately, even these agency-approved protocols are often incomplete, in the sense that they are insufficient to prevent questionable value judgments from influencing studies. For example, these protocols typically give investigators the freedom to choose which of several species or strains of animals to employ (Elliott and Volz 2012). This is understandable, because it could be problematic for the standards to be overly rigid in specifying the details of study designs in all cases. Nevertheless, this allows those with an interest in obtaining particular results (e.g., obtaining evidence that a chemical is not particularly harmful) to choose experimental animals that are more likely to yield those results (e.g., choosing particularly insensitive animals) (see e.g., Vandenberg et al. 2012, p. 393; vom Saal and Hughes 2005). Similarly, the guidelines for regulatory ecotoxicity tests provide significant latitude for investigators to choose desired dose levels and statistical analyses in their experiments, which can have significant effects on the calculation of safe exposure levels for the chemicals under investigation (Chapman et al. 1996; Isnard et al. 2001). Finally, these guidelines typically only specify a minimal amount of information that must be reported to regulators, which allows those performing the studies to make significant value judgments about what additional information to report or not (Elliott and Volz 2012).
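The stakes of this latitude can be illustrated with a small simulation. In many regulatory ecotoxicity tests, the “safe” level is derived from a no-observed-effect concentration (NOEC): roughly, the highest tested dose whose response cannot be statistically distinguished from the control. Because the NOEC must be one of the tested doses, the investigator’s choice of dose levels partially determines the outcome. The sketch below is a minimal illustration of this point, using an entirely hypothetical dose-response curve and noise parameters rather than any actual guideline study, and simplifying NOEC determination to a sequence of pairwise t-tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def true_mean(dose):
    """Hypothetical smooth dose-response for some endpoint (e.g., growth)."""
    return 100 - 20 * dose / (dose + 5)

def noec(doses, n_reps=10, sd=5.0, alpha=0.05):
    """Simplified NOEC: the highest tested dose whose simulated response is
    not significantly different from the control in a two-sample t-test."""
    control = rng.normal(true_mean(0.0), sd, n_reps)
    highest_safe = 0.0
    for dose in sorted(doses):
        treated = rng.normal(true_mean(dose), sd, n_reps)
        _, p = stats.ttest_ind(control, treated)
        if p < alpha:
            break  # stop at the first significant effect (a simplification)
        highest_safe = dose
    return highest_safe

# The same underlying toxicity, tested under two permissible dose designs:
print("NOEC, widely spaced doses:", noec([1.0, 10.0, 100.0]))
print("NOEC, tightly spaced doses:", noec([1.0, 2.0, 4.0, 8.0, 16.0]))
# Because the NOEC is constrained to the tested doses, the derived safe
# exposure level can shift substantially with the chosen study design.
```

Chapman et al. (1996) and Isnard et al. (2001) make this sort of point about actual regulatory tests; the simulation merely dramatizes how design latitude translates into different regulatory numbers.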

A second potential problem with standardized protocols is their limited applicability. In other words, they apply in some circumstances but not in others, and there are often significant value judgments involved in deciding where they do and do not appropriately apply. For example, David Michaels (2008) notes that Good Epidemiological Practice (GEP) guidelines require high standards of evidence in order to infer that a particular substance contributes causally to a hazard. This demand for high standards of evidence makes sense in many scientific contexts, and it serves the interests of those who produce and use the substance in question. But in some cases (e.g., where the use of a potentially hazardous substance could have dire social consequences and where there are reasonable alternatives), it might make sense to infer that a substance is harmful based on much more limited evidence (Elliott and Resnik 2014). To take another example, David Andow and Angela Hilbeck (2004) note that OECD guidelines recommend that chemical pesticides be tested on a set of “universal indicator species” (such as an alga, a water flea, and a fish) in order to predict their likely toxicity to a wide range of other species. Nevertheless, value judgments are needed about how widely to generalize the results of these studies, because these indicator species are not likely to serve as reliable indicators for effects on nontarget species or on “higher-level” entities such as populations and ecosystems (Andow and Hilbeck 2004).

Another excellent example of the limited applicability of standards comes from a recent article by Fern Wickson and Brian Wynne (2012). They discuss recent debates about a safety study of a genetically modified (GM) insect-resistant corn (maize) that expresses a protein from the bacterium Bacillus thuringiensis (Bt) (see Bøhn et al. 2008). The study provided evidence that water fleas fed this GM maize suffered increased mortality, lower sexual maturation, and lower egg production (Bøhn et al. 2008, p. 584). According to critics of the study, one reason for challenging its results is that it ran too long; whereas OECD Test Guideline 211 for reproduction tests with Daphnia magna (the water flea used for the experiment) calls for tests to be 21 days long, the Bøhn et al. (2008) study ran for 42 days. Wickson and Wynne challenge the critics by pointing out that one could plausibly interpret the OECD guideline as calling for tests to be a minimum of 21 days rather than a maximum of 21 days. In addition, they point out that there are significant judgments involved in deciding when a 21-day test is superior to a 42-day test (Wickson and Wynne 2012, p. 327). They note that if one desires to investigate the potential for chronic effects or for effects over the entire lifecycle of the organism, 42-day tests could be superior. Thus, whereas a 21-day test may be preferable in some circumstances, this standard may not be appropriate for answering other important questions about the safety of GM maize.

A third potential problem with standardized protocols is their tendency to promote selective ignorance (Elliott 2013). In other words, they focus investigators on collecting some forms of information about potential hazards, which may decrease attention to other relevant threats or forms of information. This problem is related to the previous one (limited applicability), in the sense that both are caused by the tendency of standardized guidelines to constrain the sorts of information that are collected. Nevertheless, the focus of the two problems is somewhat different. The limited applicability problem emphasizes that the information collected is helpful only for addressing certain sorts of questions in a specific range of contexts. The focus of the selective ignorance problem is that, insofar as standardized protocols yield only specific forms of information, scientists and policy makers can lose sight of other forms of information that would be valuable for them to collect in order to inform their decision making.

Wickson and Wynne’s discussion of GM maize also provides an example of selective ignorance. They note that another criticism of the study by Bøhn et al. (2008) has been that it used GM maize flour as the test material that was fed to the water fleas. Once again, this violated the OECD standards supported by industry and the European Food Safety Authority (EFSA) (see Wickson and Wynne 2012, p. 329). The standard material fed to water fleas in order to test the safety of this GM maize is purified Bt protein taken directly from Bacillus thuringiensis. An advantage of this standardized approach is that it is much easier for investigators to obtain purified Bt protein from bacteria than from the maize that has been genetically engineered (Wickson and Wynne 2012, p. 329). But this approach also promotes a significant form of ignorance. Namely, it uncovers only the toxic effects caused by the Bt protein itself. It does not identify any toxic effects that might be caused by the maize plant as a result of its genetic modification, and it does not uncover any harmful ways in which the Bt protein might have been modified in the maize plant (Wickson and Wynne 2012). Thus, this case shows how standards can incorporate significant judgments and assumptions about what sorts of information are and are not important to collect.1

Some of the features of the selective ignorance illustrated by the Bt maize case are common across many areas of standardized regulatory science. In particular, it is common for standardized approaches to simplify the complexity of real-life situations significantly for the sake of generating more replicable and easily interpretable data. To minimize the potential for confusing results from feeding test organisms the actual flour from GM maize, researchers typically assume that the hazards associated with the maize can be isolated to the Bt toxin and that the toxin is no different in the maize than in Bt bacteria. In a somewhat analogous manner, researchers simplify the vast majority of toxicity tests for regulatory purposes so that they are performed with individual chemicals, even though people are exposed to mixtures of chemicals in real life (Eggen et al. 2004). Similarly, toxicity tests are typically performed on a small number of well-understood model organisms or indicator species, even though these may not provide complete information about the effects of the chemicals on other species or ecosystems (Andow and Hilbeck 2004). Studies are also commonly performed in carefully controlled laboratory conditions, with purified samples and a minimum of potential confounding factors. All these common features of standardized regulatory studies maximize the possibility of obtaining replicable results that are easy to interpret, but they lessen the chance of uncovering new and previously unknown hazards.

A fourth potential problem for standardized study protocols can be called “ossification.” The worry about ossification is that it can be very difficult and time-consuming to develop and change standardized protocols, which can prevent the introduction of new techniques or perspectives. For example, in 1996 Congress tasked the U.S. EPA with developing a screening program to identify endocrine disrupting chemicals, and the Tier 2 assays for the testing process have still not been fully validated and approved.2 Similarly, the ecotoxicity studies currently required for testing pesticides in the United States were based on draft guidelines for sixteen years, until they were finally approved in 2012.3 They have still been criticized for failing to identify subtle, indirect ecological effects caused by some pesticides (Calow and Forbes 2003). With such a slow process for creating and approving standardized study guidelines, one can hardly expect them to be based on the best cutting-edge scientific innovations.

An additional and related problem associated with the ossification of standards is that interest groups can exploit ossified standards to reject cutting-edge scientific research that conflicts with their goals. We have already seen how regulatory decisions regarding bisphenol A were based on studies that employed standardized study protocols, even though a number of scientists argued that those protocols were out-of-date (Myers et al. 2009). Similarly, the Bøhn et al. (2008) study was dismissed for failing to follow standardized guidelines that may not have been essential (and that may inappropriately constrain scientific investigations). Gwen Ottinger (2010) has also discussed how standards can be used as “boundary-policing” devices to block citizen involvement in science. While citizens can sometimes gain legitimacy for their work by following the standardized practices recommended by experts, sometimes these same standardized practices can block innovative new methodologies or forms of data suggested by citizens (Ottinger 2010). Of course, it is crucial to remember that standardized study guidelines can sometimes be valuable for blocking inappropriate study methodologies; the point is that there are significant value judgments involved in deciding whether new methodological approaches are valuable or not in particular contexts.

Having looked at these four potential weaknesses of standardized study guidelines, we can now see how they contribute to the two major problems with using these guidelines to address financial COIs. First, given the incompleteness and limited applicability of standardized study protocols, it is unrealistic to expect standardized guidelines to prevent the wide array of potential influences of financial COIs on methodological value judgments. Standards cannot fully constrain all the decisions associated with experimental design, and the more they did constrain those decisions (thereby eliminating the problem of incompleteness), the narrower their applicability would be (thereby exacerbating the problem of limited applicability) (see also Douglas 2012, p. 146). Second, given the potential for standards to promote selective ignorance and to ossify, they can be strategically employed by interest groups to hide significant value judgments that serve their needs. For example, we have seen how standardized guidelines can prevent the collection of information that could point to new and unknown hazards. We have also seen that those with vested interests in dismissing particular scientific findings can appeal to the fact that they are based on non-standardized study designs (even if those designs represent cutting-edge advancements in scientific methodology or legitimate starting points for further investigations).

We have seen that there are significant potential problems associated with employing standardized study guidelines to address financial COIs in scientific research. Nevertheless, this solution need not be abandoned completely. In fact, it would be unwise to give up on using standards to help address problems with scientific research because they are likely to continue playing a significant role in future regulatory science. Moreover, if used carefully, standards can actually serve as a useful tool for illuminating crucial value judgments rather than hiding them. When scientists and other stakeholders are forced to deliberate about the appropriate protocols for performing key regulatory studies, they can sometimes develop a better understanding of key methodological assumptions that would otherwise have gone unnoticed. In this respect, the strengths and weaknesses of standardizing study protocols are much like the strengths and weaknesses of formal decision-making tools like cost-benefit analysis (CBA). In many cases, CBA can be used to hide crucial value judgments, but under the right circumstances CBA can also be a valuable tool for making judgments transparent and promoting deliberation about them (Shrader-Frechette 1985; Schmidtz 2001).

Thus, scientists and policy makers need to think about ways to minimize the weaknesses of standardized protocols while maximizing their strengths. This section proposes two broad classes of solutions. The first focuses on ways to improve the use of standardized protocols so that they address financial COIs more effectively. The second focuses on ways to supplement standardized protocols with additional strategies for addressing financial COIs.

Turning first to solutions that attempt to improve on the use of standardized study guidelines, it is helpful to return to the article written by the group at the University of Minnesota concerning oversight systems for nanotechnology (Ramachandran et al. 2011). That group calls for a two-pronged approach to financial COIs: the use of standardized study protocols as well as the vetting of the resulting data by a coordinating agency (Ramachandran et al. 2011, p. 1361). Importantly, they call for the coordinating agency to include members from key agencies, various stakeholder groups, and the public. This reflects a general theme in their article, namely, that public engagement and transparency are crucial for successful regulatory regimes. Their emphasis on incorporating public engagement and transparency, as well as promoting a dynamic and flexible regulatory regime, could go a long way toward improving the use of standardized study guidelines for addressing financial COIs.

As intimated by Ramachandran et al. (2011), all four potential weaknesses of standardized study guidelines (incompleteness, limited applicability, selective ignorance, and ossification) could be mitigated by creating mechanisms for generating broadly based deliberation about the guidelines and the studies stemming from them (see also NRC 1996). For example, if an appropriate deliberative body could examine the results of regulatory safety studies, it could alleviate the problems of incompleteness and limited applicability by identifying at least some of the ways in which the studies might have been influenced by financial COIs even if they followed standardized guidelines. This sort of body could also alleviate the problem of ossification by calling for revisions to out-of-date guidelines or identifying cases in which studies that do not follow standardized guidelines should nevertheless be given significant weight in regulatory decision making. Finally, the problem of selective ignorance could be addressed by generating broadly based deliberation about the ways in which particular standards do and do not generate the sorts of information desired by various stakeholder groups (see also Ottinger 2010).

Unfortunately, creating this sort of broadly based deliberation is no easy matter. Consider, for example, that current standardized study guidelines are typically created by the OECD, which operates with limited transparency and openness to the public. The process for developing or altering OECD chemical testing guidelines is controlled primarily by member nations of the OECD (IOCM 2009). Coordinators from member nations have the authority to submit national proposals for new or revised standards (IOCM 2009, p. 19). While concerned scientists, citizens, and non-governmental organizations (NGOs) can propose new or altered standards, they have to work through a national coordinator (IOCM 2009, p. 20). Subsequent review and approval of the proposals is controlled primarily by the national coordinators and the members of relevant committees of the OECD (IOCM 2009, p. 34). Other scientific experts and stakeholder groups can sometimes inform the decision-making process, but their ability to provide input largely depends on whether they are invited by member nations or officials of the OECD (IOCM 2009, p. 29). Once one recognizes how standardized study guidelines can be employed to advance the goals of particular interest groups, it becomes clear that this sort of process is likely to exacerbate concerns over financial COIs rather than alleviate them.

In principle, national regulatory agencies could make up for this lack of transparency by creating deliberative bodies of their own that include broader representation and that are open to public input. But given the financial constraints faced by most regulatory bodies, this solution would be challenging to implement on a large scale. And even if agencies did create deliberative bodies, they would not have the time to examine every study submitted to the regulatory agency to determine how it might have been influenced by financial COIs. Moreover, some of the influences generated by financial COIs are not easily observable merely by looking at reports of study data and conclusions (Resnik and Elliott 2013). Therefore, while it would certainly be valuable to open up the process for generating study guidelines and scrutinizing study results so that these processes are more transparent and open to public involvement, this hardly seems sufficient by itself as a solution to financial COIs in research.

In previous work, I have suggested an alternative set of solutions to financial COIs that do not depend on standardization of study protocols (Elliott 2014). These solutions are grounded in an attempt to identify conditions under which financial COIs are most likely to generate worrisome effects on scientific research. By reflecting on previous cases in which financial COIs appeared to be particularly problematic, I suggested three conditions that appear to play a significant role in these sorts of cases (Elliott 2014):

(1) Scientific findings are ambiguous or require a good deal of interpretation or are difficult to establish in a straightforward manner.

(2) Individuals or institutions have strong incentives to influence those scientific findings in ways that damage the credibility of the research.

(3) Individuals or institutions that have incentives to influence those scientific findings also have adequate opportunities to influence them.

I argued that the elimination or mitigation of any one of these conditions is likely to decrease the likelihood that financial COIs will influence research in problematic ways. Therefore, one can arrive at a promising set of potential solutions by exploring ways to eliminate each of these three conditions.

Consider first the possibility of avoiding scientific findings that are ambiguous or that require a good deal of interpretation. It is obviously not possible to eliminate uncertainty in scientific research, but it is possible to formulate science policy in a manner that depends less on highly uncertain scientific findings. By doing so, one can eliminate much of the pressure for interest groups to manipulate this research in ways that serve their interests. The Massachusetts Toxics Use Reduction Act of 1989 provides an excellent example of this sort of creative policy making. Rather than promoting endless debates over the question of whether specific substances were toxic enough to be banned, this legislation merely identified a number of chemicals that appeared likely to be problematic. Then, rather than banning the substances, the legislation required companies that used large quantities of them to publicly report how much they were using and to document any alternatives to the use of those substances. This policy approach placed relatively little emphasis on obtaining detailed results from scientific studies; instead, it focused on incentivizing the pursuit of alternatives to potentially harmful activities. Despite (or perhaps because of) avoiding extensive debates about scientific results, this policy appeared to have very beneficial results, including large reductions of toxic chemical emissions as well as economic savings for industry (Tickner 1999, p. 178).

Another set of solutions for alleviating financial COIs can be developed by attempting to eliminate the second condition above, namely, that individuals or institutions have strong incentives to influence scientific findings in ways that damage the credibility of the results. In many areas of science, this condition is not a significant problem; instead, those who fund research have an interest in obtaining highly credible results. For example, when the failures of a new technical innovation will be obvious to consumers, there are strong incentives for the manufacturers to produce highly credible research to make sure the innovation works. It is when product failures—or human and environmental health threats caused by the products—are more difficult to identify that manufacturers end up with incentives to produce shoddy research that covers up those problems (Elliott 2014).

One way to create stronger incentives for producing high-quality research (and thereby to mitigate or eliminate the second condition described above) is to adjust tort policies so that it is easier for plaintiffs to obtain damages when they are harmed by faulty or dangerous products (McGarity and Wagner 2008; Cranor 2008). Another approach is to impose more severe penalties for manipulating science for the purposes of introducing harmful products to the market (McGarity and Wagner 2008). Carl Cranor (2011) argues that some incentives for producing questionable science could also be eliminated by creating a pre-market approach to regulatory policy rather than a post-market approach. He suggests that once companies have already introduced a product to the market, they are likely to fight strenuously—including, in some cases, by using questionable research—to keep it on the market.

A final set of solutions for alleviating financial COIs is directed toward eliminating the third condition above, namely, that the individuals or institutions with incentives to influence research also have opportunities to do so. One approach to lessening opportunities for influencing research is to put various oversight mechanisms in place (Resnik and Elliott 2013). For example, in the biomedical context, most journals now require prior registration of clinical trials in a database as a condition for publication, and some journals require an individual independent from industry to take responsibility for data integrity and data analysis (DeAngelis et al. 2005; DeAngelis and Fontanarosa 2008). Another approach is to create institutional barriers between those who have incentives to influence research and those who have opportunities to do so. For example, a governmental or inter-governmental agency could take charge of designing and contracting out regulatory studies of new products (perhaps with direct or indirect financial support from relevant industry groups), thereby largely eliminating opportunities for the makers of the products to influence research on them (see e.g., Krimsky 2003; Schafer 2004; Volz and Elliott 2012).

Another promising approach for lessening opportunities to damage the credibility of research (and thereby to eliminate the third condition mentioned above) is to create collaborative research endeavors. Rather than using institutional barriers to insulate the research process from those with an interest in obtaining particular results, the collaborative approach attempts to bring together an adequate range of interested parties so that they can work together to develop a mutually acceptable research process. An advantage of this approach is that it has the potential to create greater trust among a diverse range of stakeholders, whereas some of these stakeholders might remain suspicious that the creation of institutional barriers is inadequate to keep those with deep pockets from surreptitiously influencing research. Heather Douglas (2005) has drawn attention to a case where a highly contentious risk assessment of the Alaskan oil trade in Prince William Sound received widespread support because a major local citizens’ group (the Regional Citizens’ Advisory Council or RCAC) collaborated in the creation of the risk assessment with the oil industry (see also Busenberg 1999). Similarly, after evidence emerged that pollen from Bt corn plants might be harmful to monarch butterflies, the US Department of Agriculture (USDA) helped organize a collaborative research effort guided by individuals from industry, academia, environmental groups, and government. This collaborative effort helped to generate research that was widely respected even though it addressed a highly contentious topic (Pew Initiative on Food and Biotechnology 2002).

In sum, there are a variety of available strategies for addressing the effects of financial COIs on research. Given that the standardization of study protocols is likely to keep playing a significant role in science policy making, it is very important to pursue efforts of the sort proposed here to make this approach more transparent and responsive to public input. But even if the standardization process is improved, we have seen that it is not likely to be a sufficient solution to financial COIs. Therefore, it is worth considering approaches for eliminating the conditions that make financial COIs most worrisome. Many of these suggestions are likely to prove politically challenging to implement (e.g., altering tort policies or creating institutional barriers that keep those with significant interests in research from influencing it). But while efforts are made to institute these sorts of changes, other strategies (such as the creation of collaborative research efforts) hold promise for addressing some of the challenges posed by financial COIs.

This paper has argued that the standardization of study protocols is limited in its ability to address problematic influences of financial COIs on scientific research. The difficulties with employing standardized study protocols are two-fold. First, there are so many methodological value judgments involved in research that it is not feasible to constrain them all using standardized guidelines. Second, the use of standardized protocols can actually hide important value judgments. In cases where there should be discussion about what methodology best serves the needs of a particular research context, appeals to standardized protocols can shut down this discussion.

These two problems were highlighted and illustrated in Section 3 by examining four potential weaknesses of standardized guidelines. First, they suffer from incompleteness, insofar as they do not constrain all the methodological decisions that must be made in research. Second, they have limited applicability; in other words, the guidelines apply in some cases but can actually be misleading or problematic in other research contexts. Third, they promote selective ignorance, insofar as they direct researchers toward collecting some pieces or kinds of information about a problem rather than others. Fourth, they tend to ossify, meaning that they are difficult to develop and change and can therefore enforce outdated or problematic methodologies. It is significant that there can often be tradeoffs between these weaknesses. For example, efforts to alleviate the incompleteness of standards or the selective ignorance that they generate by making them more elaborate can exacerbate their limited applicability and their ossification.

An overarching lesson of the paper is that the standardization of study designs cannot serve as an alternative to addressing the interests and power relations that pervade science policy making. In this respect, the practice of standardization is much like the practice of cost-benefit analysis or the use of any other formal tool; such practices can be very helpful under some circumstances, but they can also be wielded strategically by powerful interest groups to advance their goals (Shrader-Frechette 1985; Ottinger 2010; Busch 2011). Therefore, standardized study guidelines should continue to be employed, but policy makers should take at least two steps to supplement them and to prevent powerful interest groups from co-opting them. First, the development and review of standardized guidelines should be made as transparent as possible and should be subjected to broadly based deliberation (NRC 1996). Second, science policy makers should scrutinize the conditions under which financial COIs tend to have the most worrisome effects and take additional steps to eliminate or mitigate those conditions.

Notes

1. This case illustrates many features of a more general clash between two frameworks for risk assessments of non-target impacts from GM crops (e.g., unintended effects on the natural predators of the pests that are actually targeted by the GM crops). The “eco-toxicological” approach relies on substantial equivalence testing as evidence that the modified and un-modified plants are similar in all ways except the inserted toxin, thereby justifying their focus on direct effects and use of laboratory studies. The “ecological” approach argues that semi-field and field studies are best suited to test for the potential effects on non-target organisms caused by indirect pathways and unpredicted changes in the modified plant (see Kokotovich 2014).

References

Andow, D. and A. Hilbeck. 2004. “Science-Based Risk Assessment for Nontarget Effects of Transgenic Crops.” BioScience 54: 637–649.
Angell, M. 2005. The Truth about the Drug Companies: How They Deceive Us and What to Do about It. New York: Random House.
APHA (American Public Health Association). 2003. Supporting Legislation for Independent Post-marketing Phase IV Comparative Evaluation of Pharmaceuticals. Washington, DC: APHA.
Bekelman, J., Y. Lee, and C. Gross. 2003. “Scope and Impact of Financial Conflicts of Interest in Biomedical Research.” Journal of the American Medical Association 289: 454–465.
Bøhn, T., R. Primicerio, D. Hessen, and T. Traavik. 2008. “Reduced Fitness of Daphnia magna Fed a Bt-transgenic Maize Variety.” Archives of Environmental Contamination and Toxicology 55: 584–592.
Busch, L. 2011. Standards: Recipes for Reality. Cambridge, MA: MIT Press.
Busenberg, G. 1999. “Collaborative and Adversarial Analysis in Environmental Policy.” Policy Sciences 32: 1–11.
Cain, D., G. Loewenstein, and D. Moore. 2005. “The Shortcomings of Disclosure as a Solution to Conflicts of Interest.” Pp. 104–125 in Conflicts of Interest: Challenges and Solutions in Business, Law, Medicine, and Public Policy. Edited by D. Moore, D. Cain, G. Loewenstein, and M. Bazerman. New York: Cambridge University Press.
Calow, P. and V. Forbes. 2003. “Does Ecotoxicology Inform Ecological Risk Assessment?” Environmental Science and Technology 37: 146A–151A.
Chapman, P., M. Crane, J. Wiles, F. Noppert, and E. McIndoe. 1996. “Improving the Quality of Statistics in Regulatory Ecotoxicity Tests.” Ecotoxicology 5: 169–186.
Conrad, J. and R. Becker. 2011. “Enhancing Credibility of Chemical Safety Studies: Emerging Consensus on Key Assessment Criteria.” Environmental Health Perspectives 119: 757–764.
Cranor, C. 2008. Toxic Torts: Science, Law, and the Possibility of Justice. New York: Cambridge University Press.
Cranor, C. 2011. Legally Poisoned: How the Law Puts Us at Risk from Toxicants. Cambridge, MA: Harvard University Press.
Curzer, H. and G. Santillanes. 2012. “Managing Conflict of Interest in Research: Some Suggestions for Investigators.” Accountability in Research 19: 143–155.
de Melo-Martin, I. and K. Intemann. 2009. “How Do Disclosure Policies Fail? Let Us Count the Ways.” FASEB Journal 23: 1638–1642.
DeAngelis, C., J. Drazen, F. Frizelle, C. Haug, J. Hoey, R. Horton, S. Kotzin, et al. 2005. “Is This Clinical Trial Fully Registered? A Statement from the International Committee of Medical Journal Editors.” New England Journal of Medicine 352: 2436–2438.
DeAngelis, C. and P. Fontanarosa. 2008. “Impugning the Integrity of Medical Science: The Adverse Effects of Industry Influence.” Journal of the American Medical Association 299: 1833–1835.
Douglas, H. 2005. “Inserting the Public into Science.” Pp. 153–169 in Democratization of Expertise? Exploring Novel Forms of Scientific Advice in Political Decision Making. Edited by S. Maasen and P. Weingart. Dordrecht: Springer.
Douglas, H. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Douglas, H. 2012. “Weighing Complex Evidence in a Democratic Society.” Kennedy Institute of Ethics Journal 22: 139–162.
Eggen, R., et al. 2004. “Challenges in Ecotoxicology.” Environmental Science and Technology 38: 59A–64A.
Elliott, C. 2004. “Pharma Goes to the Laundry: Public Relations and the Business of Medical Education.” Hastings Center Report 34: 18–23.
Elliott, K. 2008. “Scientific Judgment and the Limits of Conflict-of-Interest Policies.” Accountability in Research 15: 1–29.
Elliott, K. 2011. Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research. New York: Oxford University Press.
Elliott, K. 2013. “Selective Ignorance and Agricultural Research.” Science, Technology, & Human Values 38: 328–350.
Elliott, K. 2014. “Financial Conflicts of Interest and Criteria for Research Credibility.” Erkenntnis 79: 917–937.
Elliott, K. and D. Resnik. 2014. “Science, Policy, and the Transparency of Values.” Environmental Health Perspectives 122: 647–650.
Elliott, K. and D. Volz. 2012. “Addressing Conflicts of Interest in Nanotechnology Oversight: Lessons Learned from Drug and Pesticide Safety Testing.” Journal of Nanoparticle Research 14: 664–668.
Fagin, D., M. Lavelle, and the Center for Public Integrity. 1999. Toxic Deception, 2nd edn. Monroe, ME: Common Courage.
IOCM (Inter-Organization Programme for the Sound Management of Chemicals). 2009. Guidance Document for the Development of OECD Guidelines for the Testing of Chemicals. Paris: OECD. Available at: http://www.oecd.org/env/ehs/testing/49803789.pdf (accessed July 25, 2013).
Isnard, P., P. Flammarion, G. Roman, M. Babut, P. Bastien, S. Bintein, L. Essermeant, et al. 2001. “Statistical Analysis of Regulatory Ecotoxicity Tests.” Chemosphere 45: 659–669.
Kitcher, P. 2001. Science, Truth, and Democracy. New York: Oxford University Press.
Kokotovich, A. 2014. Contesting Risk: Science, Governance, and the Future of Plant Genetic Engineering. PhD dissertation, University of Minnesota.
Kourany, J. 2010. Philosophy of Science after Feminism. New York: Oxford University Press.
Krimsky, S. 2003. Science in the Private Interest. Lanham, MD: Rowman and Littlefield.
Lampland, M. and S. Star. 2009. Standards and Their Stories: How Quantifying, Classifying, and Formalizing Practices Shape Everyday Life. Ithaca, NY: Cornell University Press.
Lesser, L., C. Ebbeling, M. Goozner, D. Wypij, and D. Ludwig. 2007. “Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles.” PLoS Medicine 4: e5.
Loewenstein, G., S. Sah, and D. Cain. 2012. “The Unintended Consequences of Conflict of Interest Disclosure.” Journal of the American Medical Association 307: 669–670.
Longino, H. 2002. The Fate of Knowledge. Princeton: Princeton University Press.
Lundh, A., S. Sismondo, J. Lexchin, O. A. Busuioc, and L. Bero. 2012. “Industry Sponsorship and Research Outcome.” Cochrane Database of Systematic Reviews 12: MR000033.
Markowitz, G. and D. Rosner. 2002. Deceit and Denial: The Deadly Politics of Environmental Pollution. Berkeley: University of California Press.
McCright, A. and R. Dunlap. 2010. “Anti-Reflexivity: The American Conservative Movement's Success in Undermining Climate Science and Policy.” Theory, Culture & Society 27: 100–133.
McGarity, T. and W. Wagner. 2008. Bending Science: How Special Interests Corrupt Public Health Research. Cambridge, MA: Harvard University Press.
Michaels, D. 2008. Doubt Is Their Product: How Industry’s Assault on Science Threatens Your Health. New York: Oxford University Press.
Myers, J., F. vom Saal, B. Akingbemi, K. Arizono, S. Belcher, T. Colborn, I. Chahoud, et al. 2009. “Why Public Health Agencies Cannot Depend on Good Laboratory Practices as a Criterion for Selecting Data: The Case of Bisphenol A.” Environmental Health Perspectives 117: 309–315.
NRC (National Research Council). 1996. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academies Press.
Oreskes, N. and E. Conway. 2010. Merchants of Doubt. New York: Bloomsbury Press.
Ottinger, G. 2010. “Buckets of Resistance: Standards and the Effectiveness of Citizen Science.” Science, Technology, & Human Values 35: 244–270.
Pew Initiative on Food and Biotechnology. 2002. Three Years Later: Genetically Engineered Corn and the Monarch Butterfly Controversy.
Proctor, R. 2012. Golden Holocaust: Origins of the Cigarette Catastrophe and the Case for Abolition. Berkeley: University of California Press.
Ramachandran, G., S. Wolf, J. Paradise, J. Kuzma, R. Hall, E. Kokkoli, and L. Fatehi. 2011. “Recommendations for Oversight of Nanobiotechnology: Dynamic Oversight for Complex and Convergent Technology.” Journal of Nanoparticle Research 13: 1345–1371.
Resnik, D. 2006. The Price of Truth: How Money Affects the Norms of Science. New York: Oxford University Press.
Resnik, D. and K. Elliott. 2013. “Taking Financial Relationships into Account When Assessing Research.” Accountability in Research 20: 184–205.
Schafer, A. 2004. “Biomedical Conflicts of Interest: A Defense of the Sequestration Thesis—Learning from the Cases of Nancy Olivieri and David Healy.” Journal of Medical Ethics 30: 8–24.
Schmidtz, D. 2001. “A Place for Cost-Benefit Analysis.” Philosophical Issues 11: 148–171.
Shrader-Frechette, K. 1985. Science Policy, Ethics, and Economic Methodology. Dordrecht: Reidel.
Shrader-Frechette, K. 2007. Taking Action, Saving Lives: Our Duties to Protect Environmental and Public Health. New York: Oxford University Press.
Sismondo, S. 2007. “Ghost Management: How Much of the Medical Literature is Shaped Behind the Scenes by the Pharmaceutical Industry?” PLoS Medicine 4: e286.
Sismondo, S. 2008. “Pharmaceutical Company Funding and Its Consequences: A Qualitative Systematic Review.” Contemporary Clinical Trials 29: 109–113.
Smith, R. 2005. “Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies.” PLoS Medicine 2(5): e138. doi:10.1371/journal.pmed.0020138.
Sun, X., M. Briel, J. W. Busse, J. J. You, E. A. Akl, F. Mejza, et al. 2011. “The Influence of Study Characteristics on Reporting of Subgroup Analyses in Randomized Controlled Trials: Systematic Review.” British Medical Journal 342: d1569.
Tickner, J. 1999. “A Map Toward Precautionary Decision Making.” Pp. 162–186 in Protecting Public Health & the Environment: Implementing the Precautionary Principle. Edited by C. Raffensperger and J. Tickner. Washington, DC: Island Press.
Vandenberg, L., et al. 2012. “Hormones and Endocrine-Disrupting Chemicals: Low-Dose Effects and Nonmonotonic Dose Responses.” Endocrine Reviews 33: 378–455.
Volz, D. and K. Elliott. 2012. “Mitigating Conflicts of Interest in Chemical Safety Testing.” Environmental Science and Technology 46: 7937–7938.
vom Saal, F. and C. Hughes. 2005. “An Extensive New Literature Concerning Low-Dose Effects of Bisphenol A Shows the Need for a New Risk Assessment.” Environmental Health Perspectives 113: 926–933.
Wickson, F. and B. Wynne. 2012. “Ethics of Science for Policy in the Environmental Governance of Biotechnology: MON810 Maize in Europe.” Ethics, Policy & Environment 15: 321–340.

Author notes

I am grateful for helpful comments from Adam Kokotovich, Heather Douglas, Fern Wickson, two anonymous referees, and the attendees at the pre-conference workshop on Science, Policy, and Values at the Society for Philosophy of Science in Practice meeting in 2013 at the University of Toronto.