PEER REVIEW
Current reform movements in science seek to change how researchers do science, the tools and infrastructure they use to do so, and how they assess each other's work in terms of quality and value (Chiarelli, Loffreda, & Johnson, 2021; Munafò, Nosek et al., 2017; Randall & Welser, 2018). Here, we argue that openness and replicability are quickly becoming key indicators for such quality assessments, and that they sometimes operate through citation strategies that actively pursue (some degree of) oblivion for nonreformed science (Dimbath, 2022). We do not oppose a genuine pursuit of transparency and methodological quality, but we are very concerned by how uncritical and oversimplified interpretations of both are skewing the collective memory of the scholarly community.
We offer two cases—one on replication and one on transparency—to highlight how scientific reform is rewriting scientific history through citation strategies. First: transparency. Folk and Dunn (2023) recently published a critical systematic review on happiness research. They scrutinize the evidence supporting happiness-increasing strategies in the literature—but that is not what concerns us most. In their evaluation of the strength of evidence, they write that they demoted “small non-preregistered experiments” to the online supplementary information and do not discuss them in their review, “due to their relatively low evidentiary value.” The studies demoted by these criteria comprise 89% of all eligible studies. Notably, the exclusion criteria did not involve comparing the methods of the published study against its preregistration, but simply whether a sufficiently detailed preregistration existed. Just as the size of a study does not guarantee methodological quality, neither does the mere existence of a preregistration. Nevertheless, both were used as proxies for evidential value, as exemplified by the text of the review as well as its citation record. Here, a heuristic commitment to openness and transparency, escalated into demanding full methodological transparency exclusively in the form of preregistration as a bare minimum, does not help research along but, in the words of Leonelli (2023), “become[s] an obstacle to the [open science] movement's efforts to promote reliable and responsible research.” Particularly considering the role systematic reviews play in shaping future citation patterns, the uncritical use of transparency as a key exclusion criterion is concerning.
Second: replication. Clark and Connor (2023) demonstrate that following a failed replication study, scientists cite the original study less and consider this shift in citations a contribution to the self-correction of science. Here, our aim is not to contest the descriptive value of their data but to highlight how the assumptions on the meaning of a failed replication frame their results as a net benefit to science.
Trivially, failing to replicate a result once does not categorically render it unreplicable (Buzbas, Devezer, & Baumgaertner, 2023), contrary to what Clark and Connor claim. “One study is no study” also applies to replication attempts, after all. More importantly, a failed replication does not equal a disqualification of its central claim (Devezer, Navarro et al., 2021) or warrant a retraction of the original paper. It raises questions, some mundane and boring, but it also opens new and exciting avenues for research (Macleod, 2022). It makes little sense to treat studies and their replication attempts as gladiators fighting to the death, where only one may survive. The metaphor of “signal-to-noise ratio” that Clark and Connor use suggests a similarly dichotomous view, dividing studies into those with and without value, when in fact the accumulation of data over multiple studies incrementally enhances the signal and reduces the noise.
Clark and Connor suggest that the decline in citations “may indeed be contributing to a more self-correcting science.” This framing worries us, for it is scientifically misguided for at least two reasons. First, if authors decide not to cite a study due to a failed replication, this suggests that they heuristically interpret replication attempts as arbiters of truth, instead of valuing each study on its own merits (and citing it accordingly). Linking citation trends with self-correction ignores the epistemic, moral, and political content of what it means to replicate. We do not replicate a study to close a case and have the last word. Replication can be framed as a civilizing process or an act of repair (Penders, 2022; Penders, de Rijcke, & Holbrook, 2020), but that does not grant replications a higher epistemic status. Second, scientists cite papers because of their methods, theoretical or conceptual contributions, results, and more. To refrain from citing a paper because one of its claims did not successfully replicate amounts to disqualifying all the other potential qualities of that study and paper. This does not correct the scientific record but distorts it.
Differences between a study and its replication fuel scientific theorizing, urging scrutiny and citation of both original and replication, not just the latter. They should make us wonder, not rush to judgment. Replication, like any research endeavor, is part of an ongoing inquiry and learning process. Failed replication is a clue, a starting point, a source of questions, and a motivator for scientific progress when considered alongside the original study and every iteration in between. Similarly, nonpreregistered studies are not of lesser quality because they lack a preregistration. Preregistration is a bureaucratic characteristic of a study, not a methodological one, and far from the only available path to transparency (Lee, Criss et al., 2019).
Irreplicability and the absence of a preregistration are not proxies for poor science deserving of fewer citations. García-Castro mockingly wrote on Twitter1: “Scientific evidence was discovered circa 2018, thanks to the invention of pre-registration. Little is known about how humanity thrived during the dark times before this event.” An exaggeration, yet the two cases discussed do foreshadow the conflation of bureaucracy and quality, which is in turn enacted through citation. The mantra “open science is just science done right” (Imming & Tennant, 2018) further moralizes this conflation. Only science and scholarship that complies with the bureaucratic requirements of reformed science deserves citation, deserves to be remembered and to serve as the basis for new scholarship; only those complying with reform bureaucracy deserve to escape scientific oblivion (Dimbath, 2022). Citation has always been political, and scientific reform is not necessarily monolithic (Tunç, Tunç, & Eper, 2023), meaning that the citational politics we describe here need not be exemplary of all reform efforts. However, in this simplified form, we can only echo Leonelli's (2023) concern that this is not helping, but instead harming, the integrity of scholarly practice and the scholarly record.
AUTHOR CONTRIBUTIONS
Berna Devezer: Conceptualization, Writing—review & editing. Bart Penders: Conceptualization, Writing—original draft, Writing—review & editing.
COMPETING INTERESTS
The authors have no competing interests.
FUNDING INFORMATION
The authors received no dedicated funding for this work.
Note
Posted before the platform was rebranded as X. Tweet URL: https://twitter.com/gongcastro/status/1682311236451459072?s=20.
REFERENCES
Author notes
Handling Editor: Vincent Larivière