Jodi Schneider
Journal Articles
Publisher: Journals Gateway
Quantitative Science Studies (2024) 5 (1): 219–245.
Published: 01 March 2024
Abstract
Although systematic reviews are intended to provide trusted scientific knowledge to meet the needs of decision-makers, their reliability can be threatened by bias and irreproducibility. To help decision-makers assess the risks in systematic reviews that they intend to use as the foundation of their action, we designed and tested a new approach to analyzing the evidence selection of a review: its coverage of the primary literature and its comparison to other reviews. Our approach could also help anyone using or producing reviews understand diversity or convergence in evidence selection. The basis of our approach is a new network construct called the inclusion network, which has two types of nodes: primary study reports (PSRs, the evidence) and systematic review reports (SRRs). The approach assesses risks in a given systematic review (the target SRR) by first constructing an inclusion network of the target SRR and other systematic reviews studying similar research questions (the companion SRRs) and then applying a three-step assessment process that utilizes visualizations, quantitative network metrics, and time series analysis. This paper introduces our approach and demonstrates it in two case studies. We identified the following risks: missing potentially relevant evidence, epistemic division in the scientific community, and recent instability in evidence selection standards. We also compare our inclusion network approach to knowledge assessment approaches based on another influential network construct, the claim-specific citation network, discuss current limitations of the inclusion network approach, and present directions for future work.
Includes: Supplementary data
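The inclusion network described above is a bipartite structure: one node type for primary study reports (PSRs) and one for systematic review reports (SRRs), with an edge wherever a review includes a study as evidence. A minimal sketch of this construct, with illustrative names and two toy metrics (evidence overlap between reviews, and evidence a target review may have missed) that are assumptions in the spirit of the paper's assessment steps, not its actual method:

```python
# Bipartite inclusion network as plain adjacency sets: each systematic
# review report (SRR) maps to the set of primary study reports (PSRs)
# it includes as evidence. All node names here are illustrative.
inclusion = {
    "SRR-target":    {"PSR-1", "PSR-2"},
    "SRR-companion": {"PSR-2", "PSR-3"},
}

def evidence_overlap(network, srr_a, srr_b):
    """Jaccard similarity of two reviews' included-evidence sets --
    one simple quantitative measure of convergence in evidence selection."""
    a, b = network[srr_a], network[srr_b]
    return len(a & b) / len(a | b)

def uncovered_evidence(network, target):
    """PSRs included by companion reviews but absent from the target --
    a possible signal of potentially relevant evidence the target missed."""
    others = set().union(*(psrs for srr, psrs in network.items() if srr != target))
    return others - network[target]

print(evidence_overlap(inclusion, "SRR-target", "SRR-companion"))  # 1 shared PSR of 3 total
print(uncovered_evidence(inclusion, "SRR-target"))                 # the PSR only the companion includes
```

Representing the network as sets keeps both metrics one line of set algebra; a graph library's bipartite module would serve equally well at scale.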
Journal Articles
Publisher: Journals Gateway
Quantitative Science Studies (2021) 2 (4): 1144–1169.
Published: 01 December 2021
Abstract
We present the first database-wide study on the citation contexts of retracted papers, which covers 7,813 retracted papers indexed in PubMed, 169,434 citations collected from iCite, and 48,134 citation contexts identified from the XML version of the PubMed Central Open Access Subset. Compared with previous citation studies that focused on comparing citation counts using two time frames (i.e., preretraction and postretraction), our analyses show the longitudinal trends of citations to retracted papers in the past 60 years (1960–2020). Our temporal analyses show that retracted papers continued to be cited, but that old retracted papers stopped being cited as time progressed. Analysis of the text progression of pre- and postretraction citation contexts shows that retraction did not change the way the retracted papers were cited. Furthermore, among the 13,252 postretraction citation contexts, only 722 (5.4%) citation contexts acknowledged the retraction. In these 722 citation contexts, the retracted papers were most commonly cited as related work or as an example of problematic science. Our findings deepen the understanding of why retraction does not stop citation and demonstrate that the vast majority of postretraction citations in biomedicine do not document the retraction.
Includes: Supplementary data
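The core measurement in the abstract above is a split of citation contexts into pre- and postretraction groups, followed by a count of how many postretraction contexts acknowledge the retraction (722 of 13,252, or 5.4%). A toy sketch of that split, with wholly illustrative records rather than the paper's PubMed/iCite data:

```python
from datetime import date

# Illustrative citation-context records: each cites a retracted paper on
# some date and either does or does not mention the retraction.
contexts = [
    {"cited": "paper-A", "date": date(2010, 5, 1), "mentions_retraction": False},
    {"cited": "paper-A", "date": date(2015, 3, 1), "mentions_retraction": False},
    {"cited": "paper-A", "date": date(2018, 9, 1), "mentions_retraction": True},
]
retracted_on = {"paper-A": date(2012, 1, 1)}

# Split on each cited paper's retraction date, then measure awareness.
post = [c for c in contexts if c["date"] > retracted_on[c["cited"]]]
aware = [c for c in post if c["mentions_retraction"]]
print(f"{len(aware)} of {len(post)} postretraction contexts acknowledge the retraction")
# The paper's database-wide figure for biomedicine: 722 of 13,252 (5.4%).
```

Unlike count-based pre/post comparisons, keeping a date per context is what enables the paper's 60-year longitudinal trend analysis.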