What should citizens understand about science to participate in democratic life? Against the prevailing approach, we argue that “what” a public understanding of science is about strongly depends on the specific epistemological nature of the science related issues considered in different contexts and circumstances. We identify three specific categories of such issues and show how, equally, specific models of public understanding are required to address them. Only by endorsing such an alternative approach will citizens arguably be able to form sound opinions about those very issues, as well as to discuss and deliberate rationally about them.

In 2001, the American Association for the Advancement of Science published the Atlas of Science Literacy, a large volume that, using fifty linked maps, describes how students, through 12th grade, develop the understanding and skills needed to meet specific science-literacy goals. The Atlas was set up in the framework of an education reform initiative, the AAAS’s “Project 2061.” This program aimed to “promote literacy in science, mathematics, and technology to help people live interesting, responsible, and productive lives” (American Association for the Advancement of Science 1993, p. XI).

The expression “science literacy,” first popularized by Paul Hurd (1958), has been widely used over the last few decades (including by AAAS) to support the idea that, given the growing importance of scientific knowledge and expertise in our everyday lives, some elements of contemporary science should be mastered by ordinary citizens (e.g., Feinstein 2011; Slater et al. 2019; Huxster et al. 2018). This need for “mastering” some elements of the sciences can be justified at different levels (Shen 1975): practical science literacy helps people to make individual decisions in their everyday life, for instance about health or work (Shen 1975, p. 46); cultural science literacy makes people appreciate science as a great human achievement (1975, p. 49); and civic science literacy allows people to reach considered decisions about science related debates and issues (1975, p. 48). It is the latter that has recently taken center stage in the literature on scientific literacy, thanks also to a steady increase in social interest in science and technology debates (Miller 2004).

In the frame of civic science literacy one can legitimately ask, following, for instance, Arnon Keren: “what must laypersons understand about science to allow them to make sound decisions on science related issues?” (2018, p. 781). “Making sound decisions” in Keren’s context concerns not so much what laypeople want to do with scientific information as what they come to believe when confronted with it. He explicitly claims that good public understanding of science should focus on the ability of laypeople “to be able to determine which scientific claims to accept” (Keren 2018, p. 788) or “which scientific claims to believe” (p. 799). In other words, the problem underlying civic science literacy as posed by Keren (among others, including Shen 1975) can be reframed in the following terms: what must laypersons understand about science to allow them to formulate sound opinions on science related issues?

Of course, figuring out what laypeople are to understand while addressing science related questions is crucial to finding an answer to the problem as just reframed.1 Indeed, this question has preoccupied many of those engaged with public understanding of science over the last few decades. Phillips et al. (2018), for example, supported by the National Science Foundation and drawing from the Framework for Evaluating Impacts of Informal Science Education Projects (Friedman 2008), remark that understanding “science” can refer to at least one of the following items: science content (with reference to “subject matter, i.e., facts or concepts”), science processes (“the methodologies that scientists use to conduct research,” for example the hypothetico-deductive method), and the nature of science (“the epistemological underpinnings of scientific knowledge and how it is generated”) (Phillips et al. 2018, p. 9).

Keren adds a further item of understanding. Citizens, he claims, are not required to use first-hand scientific evidence to form their beliefs (this is the type of understanding that professional scientists engage with), but to acquire a second order understanding based on “information about patterns of agreement and disagreement among purported experts and authorities” (Keren 2018, p. 785). In this sense Keren refers to a “division of scientific labour” between scientists and citizens. Other authors insist on the need for citizens to understand the “social structure of science,” or “science as a social enterprise” (Slater et al. 2019, p. 257), namely, to grasp in what sense and for what reasons scientific activity brings scientists to be at the same time competitive and cooperative with one another (Slater et al. 2019, p. 256).

Despite the multiplicity of these different approaches, all these normative models of public understanding of science share a central objective: the formulation of a unique, general answer to the civic science literacy problem, independently of the type of science issues that are at stake in the different domains and circumstances. Not much philosophical attention has been given to the epistemic diversity of the “science related issues” addressed in the literature on public understanding of science, or to how this very diversity might significantly affect the types of understanding of science citizens should acquire and pursue.

In the literature engaged with civic science literacy, issues as diverse as climate change, vaccines, evolutionary biology, and GMOs are generally considered equivalent instances of “science related issues.” For example, Miller (2010) puts a rather heterogeneous range of scientific issues on the same level within the “public policy agenda”: global climate change, the use of embryonic stem cells, the future of energy research, nuclear power, viral pandemics, genetically modified food (2010, p. 241). It appears to be the same in the case of Gerken’s study on the role of journalists in communicating scientific results: no distinction is made among debates on creationism vs. evolutionary biology, or on risks associated with GMO crops, vaccine-autism links, gun control (Gerken 2019, p. 5). Duncan, Chinn and Barzilai, in questioning “what students should understand about how experts work with evidence” and “how laypeople can use evidence reports themselves” (Duncan et al. 2018, p. 907), refer to “current social and political controversies about scientific claims regarding climate change, vaccination and evolution” as part of the same ensemble (2018, p. 930).

In this paper we argue that the different approaches to public understanding of science in current literature make sense, and can be best appreciated, if we start by making appropriate distinctions among the specific types of science related issues which they explicitly or tacitly address. In order to test our claim, we first articulate a tentative typology of “science related issues,” and then elucidate for each type which model of understanding appears most suitable. Our aim is to clarify what is at stake in some of the debates about what public understanding of science ought to be about, and possibly to allow such understanding to become a useful practical tool in the hands of citizens.

Our argument is organized as follows. In section 2, taking a cue from the civic science literacy problem as formulated by Keren, we will focus on some of the issues raised by this formulation, which we believe requires both specific attention and some qualification. In section 3 we put forward a three-dimensional typology of science related issues, which includes (i) an epistemological characterization of types of issues illustrated by concrete examples, and (ii) an indication of what model of public understanding seems most suitable to deal with each type. We finally conclude (section 4) by reflecting on the significance of promoting public understanding of science in the light of the diversity of the science related issues which populate public debate.

The civic science literacy problem in Keren’s slightly reformulated form suggested in the previous section (i.e., what must laypersons understand about science to allow them to form sound opinions on science related issues) raises at least three questions in need of clarification: a) what counts as a sound opinion; b) what science related issues are targeted in the formulation above; and c) why citizens ought to acquire understanding of these issues.

An answer to the last question generally takes the form of a democratic ideal: one of the basic requirements of a democratic society is that citizens should “actively participate in the democratic process that weighs in on such [science related] issues” (Slater et al. 2019, p. 252). For the purpose of our discussion, we define “active participation” in the following, minimal, terms: citizens actively participate, or are in a position to participate actively, not only when they form an opinion, but when they are able to defend it in a well-argued discourse that could be shared with, and gain support from, other citizens. A well-argued discourse is what specifically underwrites a sound opinion. In response to question a, an opinion is then sound in the sense that the arguments which are formulated in its support can be shared and discussed on rational bases. This entails that soundness depends on whether opinions are built, and then discussed, refined, or rejected by using a shared basic set of truth-related criteria. We will later show how these criteria are significantly dependent on the types of “science related issues” which are addressed in different discussions. This will constitute the core of our argument.

As to question b, it must first be emphasized that, in the literature on science literacy or public understanding of science, the science related issues referred to generally belong to a well-identified category: they are issues debated in the public sphere because of their social, economic, or policy related importance. In other words, the “science related issues” focused on are those that can be labeled as politically relevant science related issues. Obviously, what counts as “politically relevant” might well change with time: at present, for instance, quantum mechanics does not immediately qualify as one of these issues, but it could become so in the future (think, for instance, of recent developments in quantum information technologies).

In the light of our answers to questions c and b we then suggest reframing the civic science literacy problem in the following way:

what ought laypersons to understand about science to allow them to actively participate in democratic life, that is to form and defend well-argued opinions about politically relevant science related issues?

The problem so reframed brings us to focus attention on the undeniable variety of politically relevant science related issues that can be found in practice, and which are debated in the public sphere. Our first step will then be to propose a basic typology of what we call politically relevant science related issues. Our typology will include three classes of such issues (more complex typologies could include further types, but a basic formulation adequately serves the purpose of our argument without overburdening its structure). Once the typology is in place, we will argue that an appropriate answer to the civic science literacy question depends on the epistemic characteristics of the different types of science related issues we consider.

Before we undertake our analysis, three clarifications are in order. First, the weight of our argument does not rest on the typology itself, but on how it shows that the nature of the issue at stake may influence the kind of understanding citizens need to actively participate in democratic life (in the sense suggested above). The classes of our typology, three among a much wider range of possibilities, were chosen having this objective in mind. Equally, these classes do not constitute, and should not be taken as, an exhaustive description of the epistemic diversity of existing politically relevant science related issues.

Second, our objective is to identify, for each class, a minimal requisite for public understanding of science, namely the kind of understanding which seems to be both necessary and sufficient for citizens to form sound opinions. Indeed, the fact that some kind of understanding is needed for a given class does not mean that it could not also be interesting or relevant (without being necessary) for other classes. Conversely, some kinds of understanding can be interesting or relevant for a given class, without being sufficient.

Third, the typology is organized around specific issues, and not around specific research domains. For instance, when considering the case of climate science, the science related issue we consider as characteristic of our first category is the anthropogenic origin of current global warming, not climate science as a research domain. It is this attribution of specificity that informs our argument.

We build our typology on three main classes of issues. A first class includes issues which are characterized by large consensus within the scientific community, for example, the benefits of vaccines or the reality of human-driven climate change. A second class refers to issues where intra-disciplinary disagreement exists among conflicting results, data, or approaches. Most often these disagreements reflect, and are fueled by, divergences of values. Suitable examples of these issues can be found in the field of chemical risks to the public and their management. A third class refers to issues that entail yet another type of disagreement, namely inter-disciplinary disagreement. In this case disagreements are due to clashes among different scientific disciplines or domains of inquiry dealing with the same issue or core of issues. For instance, the choice to promote or reject GMO crops raises ecological, health, economic and social problems that often conflict with each other. The last two classes both deal with so-called “deep” disagreements, that is with disagreements which are epistemically grounded (in a sense that will be qualified below).

In the three following subsections we will describe each class in more detail and point out which among standard models of public understanding of science best addresses each class. At the same time we will highlight how none of them individually can effectively be used to address the typology in its totality. In fact, as pointed out earlier, none of these standard answers takes specifically into account how the variety of types of science related issues might affect the answers themselves. We must not reason in terms of a “one size fits all” answer, but rather in terms of different answers for different types.

3.1. Type-One Issues: Wide Scientific Consensus

A first set of politically relevant science related issues is characterized by the existence of a large consensus among the scientific or expert community. Take for instance the case of human driven climate change: it is here acknowledged that “there is a broad expert consensus” about the anthropogenic origin of climate change, and “most of the challenges to this claim come from interested parties outside the scientific community” (Oreskes 2018, p. 31). This does not mean that there is no disagreement about several aspects of climate change (e.g., how much and how far human impact accounts for such changes, or how to model/simulate/predict future changes), nor that all relevant aspects of climate, past and present, are thoroughly understood. However, on the basis of what the scientific community does know today—a combination of observational data, theoretical analysis, and computer simulation—we can rather confidently believe in (i.e., there are good scientific reasons to support the belief in) anthropogenic climate change. Not only do some of the major international science institutions concur on this.2 A look at the scientific literature also shows that almost no articles question it. For instance, a survey by Cook et al. of fourteen papers analyzing scientific consensus on human-driven climate change points out that consensus “is robust, with a range of 90%–100% depending on the exact question, timing and sampling methodology” (Cook et al. 2016, p. 11).

A similar story can be told about vaccine safety. On the basis of what is known today, the scientific reasons and evidence that support vaccine effectiveness are widely concurred upon. This does not mean that there are no risks associated with vaccine use, or recorded instances of vaccination mishandling. Yet from such risks and cases no inference can be drawn that, for the overwhelming majority of the scientific community, the science behind vaccines is fatally uncertain (Kampourakis and McCain 2020, pp. 99–106). The degree of confidence in the use of vaccines is emphasized, for instance, by van der Linden (2016). He notes that “surveys of physicians and medical scientists have repeatedly indicated that over 90% of doctors agree that adults and children should receive all recommended vaccines. In other words, there is a strong medical consensus about vaccine safety that many patients may not be aware of” (Sturgis et al. 2021, p. 119).

These two examples (the anthropogenic origin of global warming and vaccine safety) are widely used as case studies within the literature on public understanding and communication of science. For instance, a large number of studies examine whether communicating the mechanisms of anthropogenic global warming changes people’s beliefs and attitudes (Ranney and Clark 2016; Guy et al. 2014; Bedford 2016; Johnson 2017). In the case of vaccines, it is often asked what kind of public understanding of science is needed for people to dismiss false results (notably, Wakefield’s 1998 retracted study about the link between vaccines and autism). The answers to these questions (e.g., Clarke et al. 2015; Gerken 2019; Duncan et al. 2018; Slater et al. 2019) are usually formulated in such a way as to include how to handle the rejection of scientific consensus. As described by Rutjens et al. (2018), a rejection attitude is determined by a mix of ideological, psychological, political, and educational features. It is not always clear how these elements interact with each other. For instance, higher levels of education and science literacy have been reported to increase the polarization of opinions about climate change risks (Kahan et al. 2012). Clearly, understanding the science is only one dimension among many to consider when describing the distribution of opinions regarding issues that are otherwise highly consensual among the scientific community. But if that is the case, what type of public understanding of science, applied to consensus-laden issues, is both necessary and sufficient to respond effectively to the challenge posed by the civic science literacy problem as formulated above?

By looking at the available literature we suggest that Keren’s “scientific division of labour” (SDoL) model might provide the most appropriate answer. Keren contrasts SDoL with what he calls the “science content” model, taken to be the “dominant approach to the public understanding of science.” By science content Keren refers to “scientific concepts, theories, facts, and methods” belonging to science (Keren 2018, p. 782). By acknowledging SDoL instead, the public is not required to gain first-hand scientific evidence, or to master scientific terminology, in order to form their beliefs. It rather needs to decide what, and whom, to trust. Professional experts generate scientific knowledge through internal mechanisms of criticism, debate, and controversy, and scientifically literate people will require information and skills that allow them to assess “patterns of agreement and disagreement among purported experts and authorities” (Keren 2018, p. 785).

The SDoL model seems to be well-suited to solve the civic science literacy problem for issues that benefit from large scientific consensus. In these cases, being able to identify patterns of agreement indeed provides citizens with a rational tool which allows them to participate in democratic debate (that is, to defend one’s position in a well-argued discourse that could gain support from other citizens). As Keren himself acknowledges, “certain patterns of consensus among experts provide strong reasons to trust those experts.” But what happens for issues where such a consensus pattern is more difficult to identify? There are indeed cases where “scholars in a field fail to reach any substantial agreement on almost anything.” For such cases Keren points out that lack of consensus among experts “might be a reason not to trust them on such contested questions” (2018, p. 785).

Regarding such cases, advocates of SDoL do not offer solutions to the civic science literacy problem. Arguably, they could be seen as subscribing to the view that for those kinds of politically relevant controversial issues the rational thing to do would be to (at least temporarily) withhold one’s judgement. However, withdrawing from forming a sound opinion is not an ideal position to be in vis-à-vis active participation in democratic life (in the sense given in section 2). This is made even worse by the fact that in practice the absence of clear scientific consensus is not a rare occurrence, and cases where scientific results are uncertain, and expertise not clearly identified, do matter when discussing the civic science literacy problem. In the next two sections we will focus on two kinds of politically relevant science related issues which are characterized by experts’ disagreement and yet, we will argue, ought not necessarily to end in a position of suspended judgement on the part of laypeople.

3.2. Type-Two Issues: Deep Disagreements within Single Scientific Disciplines

A second ensemble of politically relevant science related issues is characterized by the existence of deep disagreement among scientists belonging to the same discipline. We borrow the concept of deep disagreement from Biddle (2018). He takes it from Lynch (2010), who defines it as a disagreement over “which methods are most reliable in a given domain.” To this characterization Biddle adds that these types of disagreement often concern not just the methods but also “what kinds of evidence are relevant to a given hypothesis” (p. 376). And what counts as evidence only partly emerges from disagreement over methods. It also originates from disagreement over other matters—for example, as we will see below, over values.3

Chemical risk evaluation offers a suitable area of discussion for deep disagreement. Let us consider as a first example the effects of the use of neonicotinoids on bees, as discussed in Douglas (2017). Neonicotinoids are a class of pesticides widely used in agriculture to protect crops from insects (neonicotinoids act on insects’ central nervous system). These molecules are, however, suspected of being a cause of the decline in bee populations. Because of widespread use over the last two decades, the issue of neonicotinoid toxicity has become a highly sensitive scientific issue for national and international institutions (e.g., EFSA 2013 and DEFRA 2013 official reports), and the subject of a lively ongoing debate among environmental scientists specifically interested in negative effects on bees. The main controversy concerns how to interpret the difference in results between laboratory and field experiments. Laboratory studies show that sub-lethal concentrations of neonicotinoids negatively influence bee colony growth, with a significant reduction of bees’ ability to orient themselves when returning to the hive (Henry et al. 2012; Whitehorn et al. 2012; Gill et al. 2012). These results clash with evidence coming from various field studies which, by looking at uncontrolled conditions, do not detect such an effect (Chauzat et al. 2009; Genersch et al. 2010; Sterk et al. 2016). One reason for such a difference in results has been linked to the fact that concentrations of neonicotinoids under field conditions are much lower than those used in laboratory experiments (Blacquiere et al. 2012; Cresswell et al. 2012). However, more recent field studies suggest that neonicotinoids also have a measurable effect on bees’ performance in real-world agricultural landscapes (Woodcock et al. 2017; Budge et al. 2015), though the extent of such an effect seems to vary depending on spatial location and bee species (Wintermantel et al. 2018).
We are here typically in a situation of deep disagreement (in the sense qualified by Biddle): there is a debate on the methods to be used within a given scientific domain (lab vs. field research), and on what should count as reliable evidence.

A second example of chemical risk evaluation has to do with another group of chemicals known as endocrine disruptors (EDs). EDs form a class of chemicals which are present in an increasingly large number of products (industrial chemicals, pesticides, cosmetics, pharmaceuticals). These chemicals are suspected of altering the functions of the hormonal system, with possible effects on human reproduction (Godfray et al. 2019). However, the scientific community is divided regarding both the reality of these effects on health and the best regulatory approach to adopt (McIlroy-Young et al. 2021). More precisely, the core of the debate is whether to rely on classical chemical risk assessment methods, which involve determining the probability of adverse effects for human health emerging from real world exposure (Lofstedt 2011). Some scientists claim that as long as exposure is below a certain threshold, EDs can be used safely (Lamb et al. 2014; Dietrich et al. 2013; Brescia 2020). Against this view, others argue that it is very difficult, and highly controversial, to determine in a reliable way where to set the bar for making an exposure threshold “safe” regarding EDs. In fact, they claim that classical risk assessment increases the chance of allowing the use of chemicals that are potentially harmful (Bergman et al. 2015; Gore et al. 2013), and therefore recommend the adoption of a precautionary principle in order to preserve safety. Again, this scientific debate raises deep disagreement about the reliability of risk assessment methods, as is often the case with the evaluation of chemical risks, of which neonicotinoids and EDs are only two instances. First, scientists must make methodological choices, e.g., regarding the level of statistical significance, the sample size, the time-lapse of the experiments, or the measured outputs.
Second, assessing the toxicity of a chemical is a matter of balancing inductive risks, that is, the risks of making a mistake (Elliott and Richards 2017). False positives (judging that the chemical is dangerous whereas it is not) or false negatives (judging the chemical is safe whereas it is not) may have different kinds of consequences (for people’s health, the environment, life safety, economic growth, etc.).

As has been widely acknowledged, in the case of both methodological choice and inductive risk balance, non-epistemic values influence experts’ judgement and related policy decisions (Longino 1990; Dupré 2007; Douglas 2009; Elliott 2011; Biddle 2016). Scientists need, for example, to decide what uncertainties to emphasize in the collected evidence and what to leave aside; what methods to use in handling what they know and what they do not know; what data to select and how to interpret them; what constitutes sufficient warrant in particular cases; how to evaluate the risk, or the impact, of error outside laboratory research; etc. (see Montuschi 2017a discussing Douglas 2000 on scientists’ “judgement calls”).4 Decisions of this sort prove controversial both for the scientists making them (different scientists might make different “judgement calls”) and for the different choices that can be made at different junctures. Decisions and choices are not just dictated by facts. They often entail a wide range of evaluations—with ethical and social values playing a crucial role. Are the economic consequences on the market more or less important than human health safety in allowing the use of ED chemicals, and how do the two issues compare? What values should we appeal to in assessing the potential consequences of either under-estimating or over-estimating the risks associated with such use? As Biddle puts it, “there is always some possibility of being wrong (i.e., of accepting a false hypothesis or rejecting a true hypothesis) and being wrong brings different consequences (including moral consequences) for different parties” (2018, p. 363).

In the light of what is involved with this second class of science related issues, what kind of public understanding of science is then needed to allow citizens to participate actively (in the sense defined in section 2 above) in democratic life? Heather Douglas argues that in some cases of socially relevant scientific debates (such as the legislation regarding neonicotinoids) it is rational for a lay individual to follow the advice of experts who explicitly share the same values as the lay individuals themselves. This is not to say that science can and should be used to support particular sides, or for that matter any side (2017, p. 94). It is rather because there are reasons to think that the experts with whom we (or other individuals) share values would manage inductive risks (that is, evaluate the consequences of making a mistake) in the same way as we (or identified others) would, or that they address questions we believe ought to be addressed. In the case of the neonicotinoids debate, the scientists who are worried about bee health would be more trustworthy for us if bee health is our primary concern. If we are instead more concerned about farmers minimizing pest damage, then trusting scientists with similar concerns will be the rational thing to do. Let us note here that saying that it is “rational” to follow the advice of experts who share our values should be taken to mean that it is “justifiable publicly, i.e., [with] a reasoned basis that can be stated publicly” (Douglas 2017, p. 94n9). Therefore, following the advice of experts who appear scientifically legitimated to defend a given position amounts to following advice based on an empirically grounded position (i.e., supported by evidence), which can also be legitimately defended as a justified choice of advice.

The kind of understanding which is needed here is then quite demanding: citizens should understand how their values enter the scientific process and, jointly with facts, produce the kind of results they themselves defend as most relevant to guide policy making. This kind of understanding recalls what Phillips et al. (2018) label the “understanding the Nature of Science” model, in the specific sense of understanding the influence of social and cultural values as epistemological tenets of the scientific endeavor (see also Douglas 2017 on “Teaching the Nature of Science,” p. 85; Lederman et al. 2014 on the tenets of science).

On a more practical side, granting her point, Douglas draws the conclusion that it would be better for scientists to be as transparent and as open as possible about their non-epistemic values, especially when these offer clear guidance to how scientists carry out their research. To this claim we add two provisos. First, in order to achieve effective transparency and openness, scientists should undergo a radical transformation in their practice of presenting scientific results and be trained in such a way that they can recognize, and are willing to acknowledge, the values they endorse while preserving the objectivity of research. Second, citizens should be able to accept that scientific expertise often goes hand in hand with balancing inductive risks, and that non-epistemic values influence how this balance is achieved. Besides, they should be able to identify, in specific cases, the values at stake and the role they play (namely, the kind of risk they intend to avoid). In other words, a minimal requirement for public understanding of science, for this second category of politically relevant science related issues, would be an understanding of (i) the different kinds of inductive risks in the specific case at stake, and (ii) the non-epistemic values subscribed to by the different scientists or experts who contribute to a specific debate. To go back to our examples, in the case of neonicotinoids citizens should be able to (i) understand that experts’ disagreement comes from the fact that there is a difference between laboratory experiments and real field results, and (ii) understand that this ambiguity in results will be evaluated differently depending on whether the focus of research is on the harm to insects, or else on the damage to agricultural fields.
Similarly, in the case of EDs, citizens should understand that there exists deep scientific disagreement about the reliability of classical risk assessment methods, so that they can take (and defend) a rational position depending on their own political or moral values.5

3.3. Type-Three Issues: Rational Disagreements across Disciplines and/or Domains

Our third category of politically relevant science related issues comprises cases where disagreement specifically cuts across different disciplinary fields and then feeds into the political debate.6 Let us start from a paradigmatic example: the regulation of the use of GMOs in agriculture. As convincingly shown by Biddle, the debates on GMOs focus on different kinds of risks: "health and safety risks (e.g., allergenicity), risks to the environment (e.g., evolution of herbicide and/or pesticide-tolerant plants or animals), and legal and socio-economic risks (e.g., accidental flow of GM seeds onto organic farms, heavy-handed use of patent infringement lawsuits)" (Biddle 2018, p. 365). In other words, the regulation of the use of GMOs in agriculture is a multidisciplinary science related issue, which draws on scientific results from biomedicine, plant biology and agronomy, ecology, sociology, and economics.7 Issuing political decisions then depends on the way the problem is framed, not by one disciplinary field alone but across several fields, in combination and often in conflict with each other. Following Biddle, framing a problem in a certain way means choosing "which sort of evidence is relevant to that problem" (2018, p. 369). By "sort of evidence," Biddle does not refer to the evidence produced by different methods within a given scientific discipline, but to the kinds of evidence made available by different scientific disciplines and domains of investigation. In other words, the way(s) a problem is framed determines the relative weight a discipline is attributed vis-à-vis others. And this may "impact on the distribution of investigative resources, which in turn will impact on the ways in which those investigations balance the risks of false positives, false negatives, and failure to generate results at all" (Biddle 2018, p. 369).

Biddle illustrates his view about framing evidence by pointing out that "critics of GMOs, for example, tend to have a broad range of concerns, including health and safety risks (e.g., allergenicity), risks to the environment (e.g., the evolution of herbicide and/or pesticide tolerant plants or animals), and legal and socio-economic risks (e.g., the accidental flow of GM seeds onto organic farms, heavy-handed use of patent infringement lawsuits)." By contrast, "GMO proponents […] tend to adopt a much narrower conception of risk. Some argue—or, more commonly, assume—that health and safety is the only relevant criterion for evaluating GM crops" (Biddle 2018, p. 366). In making decisions regarding the regulation of GMOs, the framing of the problem is (or should be) thus a central object of debate. Indeed, framing is ultimately a political choice, since it heavily depends on ideological or moral values. Let us note that, within each of the scientific disciplines which may offer relevant specific expertise (in the GMO case: biology, ecology, economics, sociology, agronomy), there is also room for the sort of rational disagreement we described previously. By comparison with our second class of science related issues, what is specific about this third class is the existence of a diversity of field perspectives, driven by different objectives, theoretical backgrounds, empirical methods, and rules of evidence production.

With this third type of politically relevant science related issues, what kind of understanding is needed? Looking at the example of GMOs, it appears that a minimum requirement for citizens to form a sound opinion is the ability to identify the different disciplines involved in the debate, and the objects and aims of investigation each field selects when entering the debate. Understanding the different disciplines dealing with the issue at stake indeed gives citizens a more structured view of the debate: it allows them to situate the different disciplinary or domain-laden positions by referring to the sort of evidence each position gives priority to. Giving priority to one discipline over others is mostly a matter of values and interests. But this, once more, does not entail that it is irrational to form an opinion (and to make a political choice when needed) on the basis of an argument supplied by one particular scientific domain. What it does mean is that forming a sound opinion (in the sense, given previously, of an opinion that could be discussed in a political arena by using sound arguments) demands an understanding of the disciplinary structure of the debate—namely, identifying the scientific fields from which the debated arguments originate.

This kind of understanding is adequately described by a mixed model, which includes reference to both the "Nature of Science" and the "Science process" models as described in Phillips et al. (2018). As noted in the previous section, understanding the "Nature of Science" entails, among other features, understanding the role of values in assessing scientific hypotheses. In the case of this third class of science related issues, values play a role in choosing the kind(s) of evidence, coming from one specific discipline, which ought to be favored in defending a particular solution to a given politically relevant science related issue. Nonetheless, public understanding in this case also refers to the diversity of methods, belonging to different disciplines, that can be used to address a given issue. In that sense, it also amounts to understanding the "science processes" as described by Phillips et al.

It must finally be noted that scientists and experts, when communicating their results, should clearly state which disciplinary perspective they adopt in a debate, potentially exposing themselves to the type of deep disagreement concerning single disciplines (as discussed in the previous section). This, however, does not blur the distinction between the two types of disagreement. It rather strengthens the need to pay separate attention to the impact and the consequences each type produces on what is to count as sound public understanding.

So, what should citizens understand about science to participate in democratic life? Contrary to the background assumption underlying past and current literature on the public understanding of science, this paper has aimed to show that there cannot be one unique answer to this question. Politically relevant science related issues are epistemologically diverse, and this diversity makes the search for one, "across the board" referent of public understanding of science irrelevant, besides being misguided. To defend this idea, we proposed a basic typology of science related issues and assessed the impact that each type of issue produces on the features of understanding citizens should rely on. Table 1 below summarizes our position.

Table 1. Summary of models of public understanding of science vis-à-vis a typology of politically relevant science related issues.

Type I issues
Focal epistemological feature: wide scientific consensus.
Examples: Is current climate change due to human activities? Are recommended vaccines safe?
Referent of public understanding: understanding of the pattern of (dis)agreement among experts.
Link to existing model(s) of public understanding of science: scientific division of labour model (akin to Keren 2018).

Type II issues
Focal epistemological feature: rational disagreement within one discipline.
Examples: Is chemical A (resp. neonicotinoids/endocrine disruptors) dangerous for species B (resp. bees/human beings)?
Referent of public understanding: understanding of the role of values in balancing inductive risks.
Link to existing model(s) of public understanding of science: Nature of science model (akin to Phillips et al. 2018).

Type III issues
Focal epistemological feature: rational disagreement between different disciplines.
Examples: Should GMO cultivation be limited?
Referent of public understanding: understanding of the cross-disciplinary structure of the debate as informed by specific fields of scientific inquiry.
Link to existing model(s) of public understanding of science: Nature of science model and Science process model (akin to Phillips et al. 2018).

Our typology could no doubt be further modified and/or refined: the aim of creating it—it is worth repeating here—is specifically to show how the nature of the issue at stake influences the kind of understanding that citizens need if they want to participate competently in democratic life.

Arguing in favor of a case-by-case understanding of science-driven political problems is not the same as proposing practical solutions to facilitate the acquisition of the relevant understanding. In this paper we do not deal with the important practical issue of what institutional conditions allow citizens to access and identify sound scientific information. There is a well-established body of literature dealing with this issue—addressing either the role and defining features of experts in the public debate, or the tools citizens can use to choose who can legitimately be considered an expert (see for instance Anderson 2011). A parallel debate concerns the way scientific information is communicated and received by citizens, notably in cases of deep scientific uncertainty (Gustafson and Rice 2019).

Instead, in this paper we raise a philosophical question on “what” public understanding or public knowledge of science is about, in the belief that any discussions concerning the civic science literacy problem can only make sense by elucidating ex ante what laypeople are asked to be “literate about.” Bringing conceptual clarity to what public understanding is about is preliminary to finding practical answers to the civic science literacy problem, and we take our discussion on the nature of the referents of such understanding to be a step in that direction.

Notes

1. This is not to say that figuring out what "understanding" itself amounts to is not equally crucial. We are of course aware of the wide literature on the very concept of understanding in the philosophy of science. However, the focus of this paper is not on the meaning of understanding but rather on its object or target.

2. IPCC (Intergovernmental Panel on Climate Change), NAS (National Academy of Sciences), American Meteorological Society, American Geophysical Union, American Association for the Advancement of Science, ECCP (European Climate Change Programme), etc.

3. Biddle contrasts deep disagreement (which he qualifies as "rational") with another type that he calls dirty (or irrational): the latter is "disagreement that results from ignorance, bias, irrationality, or dishonesty on the part of one or more parties to the disagreement" (p. 376).

4. This is not to say that non-epistemic values are not at play in situations where consensus (and lack of deep uncertainty) is present; see, e.g., Montuschi (2017a), p. 71.

5. Citizens follow a "motivated" reasoning here, but one different from the model often described in the literature (e.g., Kahan et al. 2012, quoted by Douglas 2017). The motivation at stake does not aim at securing personal interests (which, left on their own, open the gate to confirmation bias, ideological conviction, and the like) to the detriment of legitimate and accountable scientific evidence.

6. In the (mainly sociological) literature, problems of this kind are sometimes labelled "complex," "uncertain," or "wicked" (see Spruijt et al. 2014 for a review). We do not use this terminology here, as it is mainly designed to think about the role of experts as policy advisers.

7. Another interesting multidisciplinary and highly divisive case of evidence gathering is the scientific (and policy) debate on the causes of the spread of bovine tuberculosis in the United Kingdom. See Montuschi (2017b).

References

American Association for the Advancement of Science. 1993. Benchmarks for Science Literacy. New York: Oxford University Press.
Anderson, Elisabeth. 2011. "Democracy, Public Policy, and Lay Assessments of Scientific Testimony." Episteme 8 (2): 144–164.
Bedford, Daniel. 2016. "Does Climate Literacy Matter? A Case Study of US Students' Level of Concern about Anthropogenic Global Warming." Journal of Geography 115 (5): 187–197.
Bergman, Ake, Georg Becher, Bruce Blumberg, Poul Bjerregaard, Riana Bornman, Ingvar Brandt, Stephanie C. Casey, Heloise Frouin, Linda C. Giudice, and Jerrold J. Heindel. 2015. "Manufacturing Doubt About Endocrine Disrupter Science—A Rebuttal of Industry-Sponsored Critical Comments on the UNEP/WHO Report 'State of the Science of Endocrine Disrupting Chemicals 2012'." Regulatory Toxicology and Pharmacology 73 (3): 1007–1017.
Biddle, Justin B. 2016. "Inductive Risk, Epistemic Risk, and Overdiagnosis of Disease." Perspectives on Science 24 (2): 192–205.
Biddle, Justin B. 2018. "'Antiscience Zealotry?' Values, Epistemic Risk, and the GMO Debate." Philosophy of Science 85 (3): 360–379.
Blacquiere, Tjeerd, Guy Smagghe, Cornelis A. M. van Gestel, and Veerle Mommaerts. 2012. "Neonicotinoids in Bees: A Review on Concentrations, Side-Effects and Risk Assessment." Ecotoxicology 21 (4): 973–992.
Brescia, Susy. 2020. "Thresholds of Adversity and their Applicability to Endocrine Disrupting Chemicals." Critical Reviews in Toxicology 50 (3): 213–218.
Budge, Gilles E., David Garthwaite, Andrew Crowe, Nigel D. Boatman, Keith S. Delaplane, Mike A. Brown, H. H. Thygesen, and Stéphane Pietravalle. 2015. "Evidence for Pollinator Cost and Farming Benefits of Neonicotinoid Seed Coatings on Oilseed Rape." Scientific Reports 5 (1): 1–12.
Chauzat, Marie-Pierre, Patrice Carpentier, Anne-Claire Martel, Stéphanie Bougeard, Nicolas Cougoule, Philippe Porta, Julie Lachaize, François Madec, Michel Aubert, and Jean-Paul Faucon. 2009. "Influence of Pesticide Residues on Honey Bee (Hymenoptera: Apidae) Colony Health in France." Environmental Entomology 38 (3): 514–523.
Clarke, Christopher E., Graham N. Dixon, Avery Holton, and Brooke Weberling McKeever. 2015. "Including 'Evidentiary Balance' in News Media Coverage of Vaccine Risk." Health Communication 30 (5): 461–472.
Cook, John, et al. 2016. "Consensus on Consensus: A Synthesis of Consensus Estimates on Human-Caused Global Warming." Environmental Research Letters 11: 048002.
Cresswell, James E., Christopher J. Page, Mehmet B. Uygun, Marie Holmbergh, Yueru Li, Jonathan G. Wheeler, Ian Laycock, Christopher J. Pook, Natalie Hempel de Ibarra, and Nick Smirnoff. 2012. "Differential Sensitivity of Honey Bees and Bumble Bees to a Dietary Insecticide (Imidacloprid)." Zoology 115 (6): 365–371.
Department for Environment, Food, and Rural Affairs (DEFRA). 2013. An Assessment of Key Evidence About Neonicotinoids and Bees. https://www.gov.uk/government/publications/an-assessment-of-key-evidence-about-neonicotinoids-and-bees
Dietrich, Daniel R., Sonja von Aulock, Hans Marquardt, Bas J. Blaauboer, Wolfgang Dekant, James Kehrer, Jan Hengstler, Abby Collier, Gio Batta Gori, and Olavi Pelkonen. 2013. "Scientifically Unfounded Precaution Drives European Commission's Recommendations on EDC Regulation, While Defying Common Sense, Well-Established Science and Risk Assessment Principles." Toxicology in Vitro 27 (7): 2110–2114.
Douglas, Heather. 2000. "Inductive Risk and Values in Science." Philosophy of Science 67: 559–579.
Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Douglas, Heather. 2017. "Science, Values, and Citizens." Pp. 83–96 in Eppur si muove: Doing History and Philosophy of Science with Peter Machamer. Edited by Marcus P. Adams, Zvi Biener, Uljana Feest, and Jacqueline A. Sullivan. Berlin: Springer.
Duncan, Ravit Golan, Clark A. Chinn, and Sarit Barzilai. 2018. "Grasp of Evidence: Problematizing and Expanding the Next Generation Science Standards' Conceptualization of Evidence." Journal of Research in Science Teaching 55 (7): 907–937.
Dupré, John. 2007. "Fact and Value." Pp. 27–41 in Value-Free Science? Ideals and Illusions. Edited by H. Kincaid, J. Dupré, and A. Wylie. Oxford: Oxford University Press.
EFSA. 2013. "EFSA Guidance Document on the Risk Assessment of Plant Protection Products on Bees (Apis Mellifera, Bombus Spp. and Solitary Bees)." EFSA Journal 11 (7): 3295.
Elliott, Kevin C. 2011. Is a Little Pollution Good For You? Incorporating Societal Values in Environmental Research. Oxford: Oxford University Press.
Elliott, Kevin C., and Ted Richards. 2017. Exploring Inductive Risk: Case Studies of Values in Science. Oxford: Oxford University Press.
Feinstein, Noah. 2011. "Salvaging Science Literacy." Science Education 95 (1): 168–185.
Friedman, A. (ed.). 2008. Framework for Evaluating Impacts of Informal Science Education Projects [On-line]. http://insci.org/resources/Eval_Framework.pdf
Genersch, Elke, Werner von der Ohe, Hannes Kaatz, Annette Schroeder, Christoph Otten, Ralph Büchler, Stefan Berg, Wolfgang Ritter, Werner Mühlen, and Sebastian Gisder. 2010. "The German Bee Monitoring Project: A Long Term Study to Understand Periodically High Winter Losses of Honey Bee Colonies." Apidologie 41 (3): 332–352.
Gerken, Mikkel. 2019. "How to Balance Balanced Reporting and Reliable Reporting." Philosophical Studies 177: 3117–3142.
Gill, Richard J., Oscar Ramos-Rodriguez, and Nigel E. Raine. 2012. "Combined Pesticide Exposure Severely Affects Individual- and Colony-Level Traits in Bees." Nature 491 (7422): 105–108.
Godfray, H. Charles J., Andrea E. A. Stephens, Paul D. Jepson, Susan Jobling, Andrew C. Johnson, Peter Matthiessen, John P. Sumpter, Charles R. Tyler, and Angela R. McLean. 2019. "A Restatement of the Natural Science Evidence Base on the Effects of Endocrine Disrupting Chemicals on Wildlife." Proceedings of the Royal Society B 286 (1897): 20182416.
Gore, Andrea C., Jacques Balthazart, Daniel Bikle, David O. Carpenter, David Crews, Paul Czernichow, Evanthia Diamanti-Kandarakis, Robert M. Dores, David Grattan, Patrick R. Hof, et al. 2013. "Policy Decisions on Endocrine Disruptors Should Be Based on Science Across Disciplines: A Response to Dietrich et al." Endocrinology 154 (11): 3957–3960.
Gustafson, Abel, and Ronald E. Rice. 2019. "The Effects of Uncertainty Frames in Three Science Communication Topics." Science Communication 41 (6): 679–706.
Guy, Sophie, Yoshihisa Kashima, Iain Walker, and Saffron O'Neill. 2014. "Investigating the Effects of Knowledge and Ideology on Climate Change Beliefs." European Journal of Social Psychology 44 (5): 421–429.
Henry, Mickaël, Maxime Beguin, Fabrice Requier, Orianne Rollin, Jean-François Odoux, Pierrick Aupinel, Jean Aptel, Sylvie Tchamitchian, and Axel Decourtye. 2012. "A Common Pesticide Decreases Foraging Success and Survival in Honey Bees." Science 336 (6079): 348–350.
Hurd, Paul D. 1958. "Science Literacy: Its Meaning for American Schools." Educational Leadership 16 (1): 13–16.
Huxster, Joanna K., Matthew H. Slater, Jason Leddington, Victor LoPiccolo, Jeffrey Bergman, Mack Jones, Caroline McGlynn, Nicolas Diaz, Nathan Aspinall, and Julia Bresticker. 2018. "Understanding 'Understanding' in Public Understanding of Science." Public Understanding of Science 27 (7): 756–771.
Johnson, Dan R. 2017. "Bridging the Political Divide: Highlighting Explanatory Power Mitigates Biased Evaluation of Climate Arguments." Journal of Environmental Psychology 51: 248–255.
Kahan, Dan M., Ellen Peters, Maggie Wittlin, Paul Slovic, Lisa Larrimore Ouellette, Donald Braman, and Gregory Mandel. 2012. "The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks." Nature Climate Change 2 (10): 732–735.
Kampourakis, Kostas, and Kevin McCain. 2020. Uncertainty: How It Makes Science Advance. Oxford: Oxford University Press.
Keren, Arnon. 2018. "The Public Understanding of What? Laypersons' Epistemic Needs, the Division of Cognitive Labor, and the Demarcation of Science." Philosophy of Science 85 (5): 781–792.
Lamb IV, James C., Paolo Boffetta, Warren G. Foster, Julie E. Goodman, Karyn L. Hentz, Lorenz R. Rhomberg, Jane Staveley, Gerard Swaen, Glen Van Der Kraak, and Amy L. Williams. 2014. "Critical Comments on the WHO-UNEP State of the Science of Endocrine Disrupting Chemicals–2012." Regulatory Toxicology and Pharmacology 69 (1): 22–40.
Lederman, Norman G., Allison Antink, and Stephen Bartos. 2014. "Nature of Science, Scientific Inquiry, and Socio-Scientific Issues Arising from Genetics: A Pathway to Developing a Scientifically Literate Citizenry." Science & Education: Contributions from History, Philosophy, and Sociology of Science and Mathematics 23: 285–302.
Lofstedt, Ragnar E. 2011. "Risk versus Hazard–How to Regulate in the 21st Century." European Journal of Risk Regulation 2 (2): 149–168.
Longino, Helen. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press.
Lynch, Michael P. 2010. "Epistemic Circularity and Epistemic Incommensurability." Pp. 262–277 in Social Epistemology. Edited by Adrian Haddock, Alan Millar, and Duncan Pritchard. Oxford: Oxford University Press.
McIlroy-Young, Bronwyn, Gunilla Öberg, and Annegaaike Leopold. 2021. "The Manufacturing of Consensus: A Struggle for Epistemic Authority in Chemical Risk Evaluation." Environmental Science & Policy 122: 25–34.
Miller, Jon D. 2004. "Public Understanding of, and Attitudes Toward, Scientific Research: What We Know and What We Need to Know." Public Understanding of Science 13 (3): 273–294.
Miller, Jon D. 2010. "The Conceptualization and Measurement of Civic Scientific Literacy for the Twenty-First Century." Science and the Educated American: A Core Component of Liberal Education 136: 241–255.
Montuschi, Eleonora. 2017a. "Scientific Evidence vs. Expert Opinion: A False Alternative?" Politeia, special issue 'Science and Democracy,' XXXIII: 69–79.
Montuschi, Eleonora. 2017b. "Using Science, Making Policy: What Should We Worry About?" European Journal for Philosophy of Science 7 (1): 57–78.
Oreskes, Naomi. 2018. "The Scientific Consensus on Climate Change: How Do We Know We're Not Wrong?" Pp. 31–64 in Climate Modelling: Philosophical and Conceptual Issues. Edited by E. A. Lloyd and E. Winsberg. London: Palgrave Macmillan.
Phillips, Tina, Norman Porticella, Mark Constas, and Rick Bonney. 2018. "A Framework for Articulating and Measuring Individual Learning Outcomes from Participation in Citizen Science." Citizen Science: Theory and Practice 3 (2): 1–19.
Ranney, Michael Andrew, and Dav Clark. 2016. "Climate Change Conceptual Change: Scientific Information Can Transform Attitudes." Topics in Cognitive Science 8 (1): 49–75.
Rutjens, Bastiaan T., Steven J. Heine, Robbie M. Sutton, and Frenk van Harreveld. 2018. "Attitudes towards Science." Advances in Experimental Social Psychology 57: 125–165.
Shen, Benjamin S. P. 1975. "Views: Science Literacy: Public Understanding of Science is Becoming Vitally Needed in Developing and Industrialized Countries Alike." American Scientist 63 (3): 265–268.
Slater, Matthew, Joanna Huxster, and Julia Bresticker. 2019. "Understanding and Trusting Science." Journal for General Philosophy of Science 50: 247–261.
Spruijt, Pita, Anne B. Knol, Eleftheria Vasileiadou, Jeroen Devilee, Erik Lebret, and Arthur C. Petersen. 2014. "Roles of Scientists as Policy Advisers on Complex Issues: A Literature Review." Environmental Science & Policy 40: 16–25.
Sterk, Guido, Britta Peters, Zhenglei Gao, and Ulrich Zumkier. 2016. "Large-Scale Monitoring of Effects of Clothianidin-Dressed OSR Seeds on Pollinating Insects in Northern Germany: Effects on Large Earth Bumble Bees (Bombus terrestris)." Ecotoxicology 25 (9): 1666–1678.
Sturgis, Patrick, Ian Brunton-Smith, and Jonathan Jackson. 2021. "Trust in Science, Social Consensus and Vaccine Confidence." Nature Human Behaviour 5: 1528–1534.
van der Linden, Sander. 2016. "Why Doctors Should Convey the Medical Consensus on Vaccine Safety." Evidence Based Medicine 21 (3): 119.
Whitehorn, Penelope R., Stephanie O'Connor, Felix L. Wackers, and Dave Goulson. 2012. "Neonicotinoid Pesticide Reduces Bumble Bee Colony Growth and Queen Production." Science 336 (6079): 351–352.
Wintermantel, Dimitry, Barbara Locke, Georg K. S. Andersson, Emilia Semberg, Eva Forsgren, Julia Osterman, Thorsten Rahbek Pedersen, Riccardo Bommarco, Henrik G. Smith, and Maj Rundlöf. 2018. "Field-Level Clothianidin Exposure Affects Bumblebees but Generally Not Their Pathogens." Nature Communications 9 (1): 1–10.
Woodcock, Ben A., James M. Bullock, Richard F. Shore, Matthew S. Heard, M. Gloria Pereira, John W. Redhead, Lucy Ridding, Hannah J. Dean, Darren Sleep, and Peter A. Henrys. 2017. "Country-Specific Effects of Neonicotinoid Pesticides on Honey Bees and Wild Bees." Science 356 (6345): 1393–1395.

Author notes

This paper has been written thanks to the support of the ISEED research project (Grant Agreement No. 960366, European Union’s Horizon 2020 Research and Innovation Programme). It reflects only the author’s views, and the European Union is not liable for any use that may be made of the information contained therein. For their comments and advice we thank in particular Roberto Gronda, Pierluigi Barrotta, Stephanie Ruphie, and Elena Denia.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.