The COVID-19 pandemic led to a surge of academic publications in medical journals in early 2020. A concern has been that the methodological quality of this research is poor, owing to the large volume of publications submitted to journals and the rapidity of peer review. The aim of the present study was to examine the COVID-19 papers that appeared in 15 top-ranked generalist public health journals in 2020. The COVID-19-related publications contributing to each journal’s h5-index were identified and the following data were collected: publication type (research report versus nonresearch); number of citations; length of peer review; registration of the study; and type of study design. Of 962 articles that contributed to the journals’ h5-index scores, 109 pertained to COVID-19. Three journals accounted for about 70% of the total COVID-19 articles and of the subgroup of 74 research reports. Two journals accounted for 18 of the 25 research reports with over 200 citations. Nearly two-thirds of the research reports were cross-sectional surveys (mostly using convenience samples), narrative reviews, or analyses of internet data. Median time in peer review was 21.5 days. Only one study was registered. Dissemination of research that has undergone insufficient peer review can lead to misguided public health practice.

The coronavirus disease 2019 (COVID-19) pandemic led to a rapid and dramatic increase in the number of research publications in medical and healthcare journals starting early in 2020 (Raynaud, Zhang et al., 2021; Schwab & Held, 2020). Although some of this increase was researcher-driven, it was also facilitated by academic journals soliciting pandemic-related manuscripts (Brown & Horton, 2020; JMIR Publications, 2020). Studies have shown that the median time from submission to acceptance of COVID-19 manuscripts in medical journals during the early phase of the pandemic was significantly shorter than that of other articles published in the same journals (Horbach, 2020; Palayew, Norgaard et al., 2020). COVID-19 papers have also been found to use weaker study designs (e.g., case studies and reports) and to be of lower methodological quality when judged in terms of adherence to reporting guidelines and risk of bias (Jung, Di Santo et al., 2021; Khatter, Norton et al., 2021; Quinn, Burton et al., 2021; Zdravkovic, Berger-Estilita et al., 2020). This has led to concern that research beset by bias and error is being disseminated by academic journals (Bramstedt, 2020), although the extent to which such studies are read and cited by others is unclear.

In the field of public health, Digitale, Stojanovski et al. (2021) examined observational studies that evaluated nonpharmaceutical national and state policy interventions designed to slow the transmission of COVID-19. They found that the most common study designs used were pretest-posttest, time-series analysis, and difference-in-differences models. They also noted the limitations of such study designs in allowing causal inferences to be drawn and discussed the methodological strengths and weaknesses of the specific studies reviewed (e.g., presence or absence of a control condition in those using time-series analysis).

We aimed to examine the quality of COVID-19 research articles published in top-ranked generalist public health journals according to Google Scholar (Delgado López-Cózar & Cabezas-Clavijo, 2013). This journal ranking system is based on citations and allows assessment of the number of times articles have been cited in other publications, thereby providing an indication of the extent to which they have been disseminated. Given this focus on dissemination, Google Scholar has some advantages over other journal metrics. First, unlike Web of Science and Scopus, it is an open access search engine, making it accessible to a wide range of users beyond those with institutional access through a library. Second, Google Scholar groups journals into subject categories, one of which is Public Health. This feature makes it very user friendly for anyone seeking academic information about a public health problem such as COVID-19. Third, Google Scholar has by far the widest citation coverage of current bibliometric platforms (Martín-Martín, Thelwall et al., 2021; Marsicano & Nichols, 2022), thereby providing the best indication of how widely a paper has been disseminated. Fourth, this broader coverage results largely from the fact that Google Scholar includes citations from a wider array of sources than competitors such as Scopus and Web of Science, although some of these might reasonably be considered nonscholarly sources (e.g., student handbooks and websites) (Kulkarni, Aziz et al., 2009; Martín-Martín et al., 2021). For the purposes of the current study, this is not a problem, as the focus is on dissemination of papers in general, not just within the academic literature.

Data pertaining to journal ranks and citations of publications were downloaded on March 25 and April 19, 2022 from Google Scholar, which categorizes journals within broad disciplines and subdisciplines. Public Health is a subdiscipline within Health and Medical Sciences. The current analysis focused on generalist rather than specialist journals in the Google Scholar top 20 Public Health category. It was reasoned that the former would be more likely to publish papers pertaining to COVID-19, especially in the initial phase of the pandemic. Journals that exclusively publish solicited reviews were also excluded, as these were unlikely to have published such reviews in the very early phase of the pandemic.

Google Scholar ranks journals according to their h5-index, which “is the h-index for articles published in the last 5 complete years. It is the largest number h such that h articles published in 2016–2020 have at least h citations each” (Google Scholar, 2022). Given this calculation, the h5-index also determines the number of articles listed in Google Scholar for each journal (e.g., a score of 100 indicates that 100 articles in Journal X each received at least 100 citations in the past five years).
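To make this calculation concrete, the following short sketch (written in Python purely for illustration; it is not part of the study’s procedures) computes an h5-index from a list of citation counts, following the Google Scholar definition quoted above.

```python
def h5_index(citation_counts):
    """Largest h such that h articles (from the last 5 complete
    years) have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # the top `rank` articles all have >= rank citations
        else:
            break
    return h

# A journal with these citation counts has an h5-index of 4:
# four articles each cited at least four times.
print(h5_index([10, 8, 5, 4, 3, 1]))  # -> 4
```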

The list of articles for each of the included journals was downloaded, along with their year of publication and h5-index score. The titles, abstracts (if the title was ambiguous), and dates were reviewed to determine whether the paper pertained to COVID-19. Although the h5-index has a 5-year window, COVID-19 was not identified until the end of December 2019 (Centers for Disease Control and Prevention, 2022), so publications pertaining to it could only appear in the final year (2020) of this time period.

The full text PDF files of publications pertaining to COVID-19 were then downloaded and reviewed, and the following information recorded:

  • Type of publication: research report versus nonresearch article (e.g., editorial, perspective, commentary, correspondence, protocol). Research reports contained analyses of either original or secondary data and typically contained methods and results sections.

  • The number of citations of the papers according to Google Scholar.

  • The date the journal received a manuscript describing original research for review and the date it accepted it for publication. For those journals that did not provide dates of receipt and acceptance, the corresponding author was emailed (up to three times) and asked to provide this information.

  • Whether the study was prospectively or retrospectively registered in a registry such as ClinicalTrials.gov.

  • Type of study design used in original research reports. Five categories commonly employed to describe epidemiological research and to create hierarchies of study designs were used: cross-sectional survey; ecological; case-control; cohort; and randomized controlled trial (RCT) (Aschengrau & Seage, 2014; Friis & Sellers, 2021).

    ○ Cross-sectional studies. Those that collected individual-level data pertaining to COVID-19 and/or variables related to its etiology, prevalence, prevention, or treatment at one point in time using interviews or questionnaires.

    ○ Ecological studies. Those in which the unit of analysis was a geographic entity and not an individual. These included studies that described risk factors or outcomes within geographic units such as countries or states, changes in these over time, or comparisons across such units of analysis.

    ○ Case-control studies. Those that compared a group of individual subjects with COVID-19 to a matched group without and attempted to identify past exposure.

    ○ Cohort studies. Those that observed a group of individuals at different levels of risk of exposure and assessed future occurrence of one or more outcomes (Prospective Cohort), or that assessed past exposure and occurrence of one or more outcomes (Retrospective Cohort).

    ○ RCTs. Those that randomly allocated individual subjects to an intervention condition and a control condition and followed them up over time.

Finally, in recognition that infectious disease epidemiologists and public health researchers employ study designs other than these commonly used epidemiological methods, such as pretest-posttest, interrupted time-series, mathematical/computational modeling, and natural experiments, an additional category of Other Studies was included in the study design categorization.

The information from the publications was downloaded into Word and Excel documents; the latter were used to organize and summarize the data and create the figure.
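As an illustration of this organizing step, a minimal sketch of how such a spreadsheet could be summarized is given below. The file name and column names ("journal", "type", "citations", "received", "accepted") are hypothetical, not those of the actual documents used in the study.

```python
import pandas as pd

# Hypothetical spreadsheet: one row per COVID-19 paper.
papers = pd.read_excel("covid19_h5_papers.xlsx")

# Restrict to research reports and compute days from receipt to acceptance.
reports = papers[papers["type"] == "research report"].copy()
reports["days_in_review"] = (
    pd.to_datetime(reports["accepted"]) - pd.to_datetime(reports["received"])
).dt.days

# Per-journal summaries of the kind reported in Table 1.
summary = reports.groupby("journal").agg(
    n_reports=("type", "count"),
    median_citations=("citations", "median"),
    median_days_in_review=("days_in_review", "median"),
)
print(summary)
```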

Of the 20 top-ranked public health journals in Google Scholar, 15 were judged to be generalist publications; these, along with their h5-index score and rank, are listed in columns 1 and 2 of Table 1. The four excluded specialist journals were the International Journal of Behavioral Nutrition and Physical Activity (ranked 5th), Tobacco Control (9th), Nicotine & Tobacco Research (11th), and AIDS & Behavior (12th). The Annual Review of Public Health (10th), which only publishes solicited reviews, was also excluded. The International Journal of Environmental Research and Public Health (IJERPH) was the top-ranked journal in the Google Scholar Public Health category, with an h5-index score of 113, meaning it had 113 papers with at least 113 citations between 2016 and 2020.

Table 1.

COVID-19 papers in the top-ranked Google Scholar general public health journals

| Journal¹ | h5-index score (rank) | COVID-19 papers (% of journal’s total h5-index papers) | Median citations of COVID-19 papers (range) | Research report: Yes⁴ | Research report: No⁵ | Research reports: median days in review (range)⁶ |
|---|---|---|---|---|---|---|
| IJERPH | 113 (1) | 26 (23.0) | 216.5 (113–4,553) | 23 | 3 | 18 (7–68) |
| AJPH | 90 (2) | 1 (1.1) | 117 (–) | 0 | 1 | – |
| BMCPH | 89 (3) | 3 (3.4) | 123 (115–159) | 3 | 0 | 99 (84–229) |
| AJPM | 82 (4) | 5 (6.1) | 106 (86–218) | 5 | 0 | 40 (16–57) |
| LPH | 70 (6) | 23 (32.9) | 255 (96–1,430) | 10 | 13 | 37 (10–68)⁷ |
| BWHO | 68 (7) | 4 (5.9) | 176.5 (91–242) | 2 | 2 | 104 (82–126) |
| PM | 68 (8) | 1 (1.5) | 74 (–) | 1 | 0 | 90 (–) |
| JMIRPHS | 51 (13) | 26 (52.0)³ | 100 (53–464) | 20 | 6 | 16 (4–61) |
| IJEH | 51 (14) | 3 (5.9) | 61 (55–152) | 1 | 2 | 49 (–) |
| HPP | 50 (15) | 0 (0) | – | – | – | – |
| EJPH | 49 (16) | 1 (2.0) | 74 (–) | 0 | 1 | – |
| PH | 48 (17) | 8 (16.7) | 125 (54–241) | 5 | 3 | 33 (4–104) |
| PMR | 47 (18) | 0 (0) | – | – | – | – |
| JGH | 44 (19) | 6 (13.6) | 73.5 (55–106) | 3 | 3 | 21.5 (13–30)⁸ |
| GHA | 43 (20) | 1 (2.3) | 50 (–) | 1 | 0 | 21 (–) |
| Total | 962² | 108 (11.2) | – | 74 | 34 | 21.5 (4–229) |
¹ IJERPH (International Journal of Environmental Research & Public Health); AJPH (American Journal of Public Health); BMCPH (BMC Public Health); AJPM (American Journal of Preventive Medicine); LPH (Lancet Public Health); BWHO (Bulletin of the World Health Organization); PM (Preventive Medicine); JMIRPHS (JMIR Public Health & Surveillance); IJEH (International Journal of Equity in Health); HPP (Health Policy & Planning); EJPH (European Journal of Public Health); PH (Public Health); PMR (Preventive Medicine Reports); JGH (Journal of Global Health); GHA (Global Health Action). Google Scholar data pertaining to journal ranks and citations of publications were downloaded on March 25 and April 19, 2022.

² The h5-index score also denotes the number of papers for each journal. The only exception is JMIRPHS, which has an h5-index score of 51 but included one study pertaining to COVID-19 twice (once as a preprint and once as the final published version). The preprint, which had a lower citation count than the published version of the manuscript (145 versus 247), was excluded from the analysis, leaving 50 papers in total and 26 pertaining to COVID-19.

³ Based on 50 total articles and 26 pertaining to COVID-19 due to the duplicate study included in the h5-index.

⁴ Includes one Research Letter (AJPM) and two Short Communications (PH).

⁵ Comprised of: Comment (LPH – 7); Commentary (IJEH – 2); Correspondence/Letter (LPH – 6; PH – 2); Editor’s Choice (AJPH – 1); Editorial (BWHO – 2; EJPH – 1; IJERPH – 2; JMIRPHS – 1; PH – 1); Protocol (JMIRPHS – 2); Viewpoint/Perspective (IJERPH – 1; JGH – 3; JMIRPHS – 3).

⁶ Data for AJPM, JGH, and LPH obtained through email requests to corresponding authors. Data for all other journals from the published papers. Total based on 72/74 papers.

⁷ Based on 9/10 responses from corresponding authors.

⁸ Based on 2/3 responses from corresponding authors.

There was no ambiguity in the titles of the 2020 papers as to whether they pertained to COVID-19, and there were no papers published prior to 2020 with a title indicating they were about the pandemic. Of the 962 articles included in the h5-index across the 15 journals, 109 (11.3%) pertained to COVID-19. However, JMIR Public Health and Surveillance (JMIRPHS) included a duplicate publication, listed both in its preprint format (145 citations; Bhagavathula, Aldhaleei et al., 2020a) and in its final published format (247 citations; Bhagavathula, Aldhaleei et al., 2020b). As the study design and sample were the same in both papers, and review dates were absent from the preprint, only the final published version was included in the analysis, leaving 108 COVID-19 papers.

There was considerable variability across journals in the proportion of COVID-19 papers contributing to their h5-index, ranging from zero to more than 50%. Three journals accounted for close to 70% of the 108 COVID-19 articles: IJERPH (26; 24.1%), JMIRPHS (26; 24.1%), and Lancet Public Health (LPH) (23; 21.3%). All of the 10 most cited articles in JMIRPHS were COVID-19 related, as were 9/10 in LPH and 5/10 in IJERPH. The two most highly cited articles were in two of these journals, IJERPH (4,553 citations) and LPH (1,430 citations), and they had by far the highest median citations (IJERPH 216.5; LPH 255).

Seventy-four of the 108 (68.5%) COVID-19 papers were research reports. The majority of COVID-19 articles published in IJERPH and JMIRPHS were research reports, although the most highly cited article in the latter was a viewpoint. In contrast, nearly 60% of the articles in LPH were correspondence or comments. This journal accounted for 38.2% of the papers that were not research reports. Almost 72% of the COVID-19 research reports were published in IJERPH (23; 31.1%), JMIRPHS (20; 27.0%), and LPH (10; 13.5%).

All but three of the 11 journals that published research reports included manuscript receipt and acceptance dates on their papers, the exceptions being AJPM, JGH, and LPH. Emails to corresponding authors resulted in data for 16 of 18 research reports published in these journals. The median time from submission to acceptance was 21.5 days across the 72 research reports for which data were available, with a range of 4–229. The high value was an outlier, as the next longest review period was 126 days. Eighty-one percent of peer reviews of the research reports were completed within 6 weeks. Only 10% took longer than 10 weeks, while close to one quarter took less than 2 weeks. The two journals that published the most COVID-19 research reports completed their reviews in the shortest median time: IJERPH (18 days) and JMIRPHS (16 days). When the 43 research reports in these two journals were excluded, the median days in review of the remaining 29 published in the other nine journals rose to 40.

Figure 1 shows the number of citations for the 74 research reports. The concentration of highly cited papers in IJERPH and LPH is noticeable, with 18 of the 25 (72%) research reports with over 200 citations appearing in these journals. The most cited report, published in IJERPH, was the top cited paper across all those that comprised the h5-indexes of the 15 journals, irrespective of whether a paper pertained to COVID-19. It had more than three times as many citations as the next most cited COVID-19 report, published in LPH, which in turn had more than one-and-a-half times the citations of the third most cited report. This report, along with five other papers, comprised a group with 600–900 citations; all these reports were published in IJERPH and LPH, as were the four with 300–400 citations. Six of the 13 research reports with 200–300 citations were also in these two journals, with the remaining seven published in JMIRPHS (four), BWHO (two), and AJPM (one). The group of 29 research reports with 100–200 citations was dominated by IJERPH (12 papers) and JMIRPHS (seven papers). The latter journal also accounted for nine of the 20 papers with under 100 citations. The single research reports published in GHA, IJEH, and PM each had fewer than 100 citations, as did the three in JGH. In summary, IJERPH and LPH accounted for all the most highly cited research reports (i.e., those with over 400 citations), the 20 in JMIRPHS were concentrated among the lower cited papers (under 200), and, apart from one AJPM and two BWHO reports, all of those from the other journals were in this lower citation group.

Figure 1.

Number of citations of research reports (n = 74) grouped by journal.

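The kind of citation-band tabulation described above is straightforward to reproduce. The following standalone sketch uses a few made-up rows purely to show the grouping step; it does not contain the study’s actual data.

```python
import pandas as pd

# Made-up illustrative rows; the real analysis covered 74 reports.
reports = pd.DataFrame({
    "journal": ["IJERPH", "IJERPH", "LPH", "JMIRPHS", "PH"],
    "citations": [4553, 150, 1430, 90, 210],
})

# Bin citation counts into the bands used in the text.
bands = pd.cut(
    reports["citations"],
    bins=[0, 100, 200, 300, 400, float("inf")],
    labels=["<100", "100–200", "200–300", "300–400", ">400"],
)
print(pd.crosstab(reports["journal"], bands))
```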

Table 2 shows the categorization of the 74 research reports in terms of study design (Supplementary Table 3 contains a brief description of each study placed in each of the categories). None of the 74 were case-control studies or RCTs, and only three were categorized as cohort studies. These were all prospective cohort studies with follow-up periods of no greater than six weeks. They had 196, 255, and 729 citations, with the last being the fourth most cited research report and the only one of the 74 to be registered. It was submitted for registration to ClinicalTrials.gov one week after the study start date, making it retrospectively registered.

Table 2.

Study design used in the 74 research reports1

| Journal (research reports) | Cross-sectional survey² | Ecological | Cohort | Narrative review | Analysis of internet data | Mathematical model/simulation | Other |
|---|---|---|---|---|---|---|---|
| IJERPH (23) | 14 | | | 4 | | | |
| BMCPH (3) | | | | 1 | | | |
| AJPM (5) | | | | | | | |
| LPH (10) | | | | | | 3 | |
| BWHO (2) | | | | | | | |
| PM (1) | 1 | | | | | | |
| JMIRPHS (20) | | | | | 11 | | |
| IJEH (1) | | | | | | | |
| PH (5) | | | | 1 | | | |
| JGH (3) | | | | 1 | | | |
| GHA (1) | | | | | | | |
| TOTAL (74) | 27 | 6 | 3 | 7 | 14 | 5 | 12 |
¹ See Supplementary Table 3 for details of the research reports included in each category.

² All convenience samples, except those published in AJPM, PM, and PH.

Twenty-seven studies were categorized as cross-sectional surveys. These papers had between 63 and 4,553 citations. All but one were conducted online, and 23 used convenience samples (see Supplementary Table 3). Fourteen of the cross-sectional surveys were published in IJERPH, including the most highly cited research report (4,553 citations) and the fifth (704 citations) and seventh (640 citations) most cited (see Supplementary Table 3 for details of the specific highly cited papers referred to in each study design category). These three highly cited research reports were revised, resubmitted, and accepted for publication within 3 weeks of initial submission. All drew causal inferences about the psychological or mental health impact of the pandemic, even though they were cross-sectional studies.

There were six studies classified as ecological. The geographic unit of analysis in these ranged from map grids to countries. The studies examined differences in COVID-19 prevalence and mortality across units in terms of variables such as social vulnerability, distress and disparities, weather, control measures, and institutional trust. These studies had between 50 and 102 citations.

The remaining 38 studies were initially placed in the “Other” category. A post hoc refinement of this category created three additional categories: Narrative Reviews for studies that presented a narrative summary of existing published research; Infodemiology for studies that extracted and analyzed (in various forms) data from an internet source (also called Infoveillance; Eysenbach, 2009); and Mathematical and Simulation Models for studies that used these models, and not statistical models, to understand the potential effects of interventions. Seven research reports were categorized as Narrative Reviews, 14 as Infodemiology, and five as Mathematical and Simulation Models. The remaining 12 research reports appear in the Other column of Table 2.

There were seven reviews of published literature, all of which were narrative reviews (i.e., they summarized the key findings from the studies reviewed in tables and text but did not synthesize the data from these or conduct statistical analysis of them). These studies had between 73 and 629 citations. Each of the three published in BMCPH, JGH, and PH described the search procedures and inclusion criteria used to identify studies and presented a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-type flow diagram. Three databases were searched in each study, and they reviewed between eight and 16 studies. In contrast, the four reviews in IJERPH contained no methods section, no list of studies reviewed, no PRISMA flow diagram, no description of how studies were identified, and no details of the data extraction and assessment procedures. One of these research reports was the eighth most cited, with 629 citations.

The 14 Infodemiology/Infoveillance research reports typically captured data pertaining to posts on social media, such as Twitter, or examined trends in Google search terms that pertained to the pandemic. There was also one study that described a content analysis of YouTube videos. Eleven of the 14 studies in the Infodemiology/Infoveillance category were published in JMIRPHS. This group of studies had between 56 and 880 citations. The most highly cited of these (880 citations), a study of active Weibo users published in IJERPH, was the third most cited of the 74 research reports. It was missing age data for 78% of its nearly 18,000 participants and made claims about the “psychological consequences” of the pandemic based on minute changes in word-count frequencies that were statistically significant only because the sample was so large.
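The statistical point here, that substantively trivial differences become “significant” in very large samples, is easy to demonstrate. The following sketch uses hypothetical proportions (not figures from the Weibo study) in a standard two-proportion z-test.

```python
from math import sqrt, erf

# Hypothetical illustration: with ~18,000 observations per period,
# a shift in a word-frequency proportion from 10.0% to 11.2% is
# "statistically significant" despite being substantively tiny.
n1 = n2 = 18000
p1, p2 = 0.100, 0.112

p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)           # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error
z = (p2 - p1) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal

print(f"z = {z:.2f}, p = {p_value:.5f}")  # z ≈ 3.70, p < 0.001
```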

Five research reports used mathematical/simulation models, typically based on the classic Susceptible-Infected-Recovered (SIR) compartmental model of infectious disease. Three were published in LPH. These studies had between 106 and 1,430 citations, the latter being the second most highly cited research report.
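For readers unfamiliar with the SIR framework these studies build on, the following is a minimal sketch of the classic compartmental model under illustrative parameter values; it is not a reconstruction of any specific study’s model.

```python
# Minimal SIR sketch: susceptible-infected-recovered dynamics,
# integrated with a simple Euler step. All values are illustrative.
beta, gamma = 0.3, 0.1        # transmission and recovery rates (per day)
S, I, R = 0.999, 0.001, 0.0   # compartments as fractions of the population
dt, days = 0.1, 180

for _ in range(int(days / dt)):
    new_infections = beta * S * I * dt
    new_recoveries = gamma * I * dt
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

print(f"After {days} days: S={S:.3f}, I={I:.3f}, R={R:.3f}")
```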

Finally, there were 12 studies that remained categorized as Other (see Supplementary Table 3). These studies had between 55 and 703 citations. Six used various methods to address clinical research questions: one synthesized data from published reports to estimate the infection fatality rate of COVID-19, one examined the effects of changes in COVID-19 case definitions on the number of cases in China, one compared the effectiveness of two COVID-19 diagnostic procedures, and three presented descriptive analyses of clinical data sets. Three studies used surveillance and survey data to estimate the effects of various national nonpharmaceutical policy interventions. Two of these were described as “modelling studies” in their titles but were included in this category because they used statistical, rather than simulation or mathematical, models. The other, described as an “observational study,” was the sixth most cited of the 74 research reports, with 703 citations. There was one economic event study in which COVID-19 was the independent variable and stock market activity the dependent variable, one mixed methods study that developed an e-package to support the psychological well-being of healthcare workers during and after the COVID-19 pandemic, and one mapping study that assessed the feasibility of social distancing measures.

For the most part, there were few papers and research reports with a focus on COVID-19 among those most highly cited in the 15 top-ranked Google Scholar generalist public health journals examined in this study. This is unsurprising, as the Google Scholar h5-index has a 5-year window and citations of a publication typically take time to accrue. Among the publications pertaining to COVID-19, research reports outnumbered other types of papers, such as editorials and correspondence, by just over two to one. For the majority of journals that published research reports, these underwent a reasonably thorough peer review, taking on average about 7 weeks. This compares favorably, if one considers a longer period in peer review to be some assurance of quality, with the median times to acceptance reported in two reviews of clinical COVID-19 studies, which were 13 and 6 days (Khatter et al., 2021; Quinn et al., 2021). Most research reports used study designs that could be implemented and executed in a short period of time, and they frequently relied upon easily accessible data. The vast majority had received fewer than 200 citations at the time the data were collected.

There were, however, three journals that were exceptions to some aspects of this general pattern. All three had many COVID-19-related publications contributing to their high h5-index. Of the three, LPH published more nonresearch papers, all of them comments and correspondence, than research reports; three of these nonresearch papers had received more than 400 citations. The COVID-19 research reports published in LPH were in peer review for about 5 weeks on average, and they contained detailed descriptions of their methods. Three of the five COVID-19 mathematical/simulation models published by the journals reviewed appeared in LPH, one of which was the second most highly cited of the 74 research reports.

The two other journals that were exceptions to the general pattern, IJERPH and JMIRPHS, had at least twice as many COVID-19 research reports included in their h5-index as any of the other journals in the analysis. They also completed much faster peer reviews than the others, both with medians under 3 weeks. Although the citation counts of the research reports published in JMIRPHS were modest, IJERPH had five of the eight most highly cited, one of which was the most highly cited research report across all 15 journals’ h5-indexes, irrespective of whether a paper pertained to COVID-19.

Most COVID-19 papers published in IJERPH and JMIRPHS used research methods that relied on easily collected, nonrepresentative data, such as cross-sectional surveys of convenience samples and harvesting of internet and social media data. Although low dissemination of such studies is probably of little concern, there are potential problems if the results of methodologically weak studies become widely disseminated. As highlighted above, the COVID-19 narrative reviews published in IJERPH, including one with a high number of citations, contained few details of their methods. Similar problems are apparent in the three very highly cited IJERPH research reports describing surveys based on internet convenience samples, all of which drew causal inferences that their cross-sectional study designs could not support.

These very highly cited methodologically weak studies are concerning, as they could have a corrosive effect on the quality of future COVID-19 public health research. A discipline’s research practices, whether rigorous or otherwise, are driven by its prevailing norms and can quickly deteriorate in response to changes in the academic incentive system, such as journals making some types of studies easier to publish than others (Edwards & Roy, 2017; Smaldino & McElreath, 2016). Accordingly, it is possible that a group of methodologically weak but highly cited studies could foster a culture within public health COVID-19 research defined by an inattention to detail and low editorial standards, and attract academics who are looking for quick and easy publications with little regard for the validity of results reported or the inferences they draw from these. This is especially likely to occur if the highly cited papers appear in journals that are “high impact” according to bibliometric indicators such as the Google Scholar h5-index.

The results reported here, along with those from COVID-19 studies in clinical and medical research, indicate that speeding up the peer review process resulted in a lot of poor-quality research being published. The justification for fast tracking peer review during the early stages of the pandemic was that the severity of the threat presented demanded rapid dissemination of scientific knowledge and that failing to do this would impede the public health response to the pandemic. However, the problem with this approach, especially when the existence of online journals sets almost no limits on what can be published, is that any signal in the published research that might be truly useful in responding to the pandemic will be lost in the overwhelming noise being generated.

In hindsight, it seems academic journal editors should have taken a more balanced approach to relaxing the peer review process in response to the pandemic. In the future, the potential costs and benefits of lowering the rigor of peer review should be weighed for each manuscript submitted for review. Assessing which manuscripts have the potential to make meaningful contributions to the pandemic response, and concentrating scarce review resources on these, is preferable to wasting those resources on manuscripts with no clear signs that they can make a useful contribution. For many of the papers discussed herein, the weakness of the study designs used (narrative reviews with no reported methods, and surveys and analyses of internet data based on rapidly collected convenience samples) indicates that the benefits of publication were low because of potential bias. Such weak studies simply cannot tell us anything useful and probably should not have been published; they certainly did not warrant rapid review. Moreover, spreading misinformation (about, for example, the association between COVID-19 and mental health) by publishing large numbers of clearly substandard studies can undermine confidence in public health research and practice, making the discipline seem inconsequential. As Gai, Aoyama et al. (2021) observe, not only might the relaxation of rigorous editorial standards for scientific research during the pandemic have failed to contribute to the objective of producing a solid evidence base from which the public, clinicians, and policymakers could make informed decisions, it may well have undermined this process.

The current study has several limitations. First, the focus is on a group of generalist public health journals that are considered high impact based on their Google Scholar h5-index scores. Research from other disciplines suggests that use of other bibliometrics, such as the Journal Citation Reports Journal Impact Factor, would identify a different group of public health journals (Diaz, Soares et al., 2021; Gorman & Huber, 2022). For example, although two of the three journals that accounted for the most COVID-19 publications in this study are currently ranked first (LPH) and 10th (JMIRPHS) in the Journal Citation Reports category Public, Environmental and Occupational Health according to their 2020 Science Citation Index Journal Impact Factor, the other (IJERPH) is ranked 74th. Those wishing to identify highly cited COVID-19-related public health studies might do better to consult this source. However, it requires a paid subscription, and therefore it is likely that many individuals without institutional access will continue to consult Google Scholar, which is free, to identify public health scholarship pertaining to the pandemic.

Second, the study is based on the premise that bibliometric indicators such as the Google Scholar h5-index can accurately identify high-impact journals and does not address the extensive literature that has critiqued such an assumption (Brembs, Button, & Munafò, 2013; Tressoldi, Giofre et al., 2013). Google Scholar itself has rightly been criticized for its lack of transparency, its failure to deal with data manipulation, its inability to allow large-scale data extraction, and the uncontrolled citation universe it draws upon (Delgado López-Cózar & Cabezas-Clavijo, 2013). However, as noted in Section 1, the last of these weaknesses makes the metric useful to the present study, as this breadth gives the best indication of the extent to which the studies published in these journals have been disseminated. Indeed, it has been proposed that, despite its shortcomings, among bibliometric sources, “Google Scholar is the best choice in almost all subject areas for those needing the most comprehensive citation counts” (Martín-Martín et al., 2021, p. 901).

Third, citations change over time, and therefore the ranking of publications and journals based on them will also vary over time. The current study presents the situation as it existed in the initial phase of the pandemic, and in this respect is like other studies of COVID-19 publishing (Digitale et al., 2021; Quinn et al., 2021; Zdravkovic et al., 2020). Fourth, we examined only public health journals, and it is possible that the most highly cited COVID-19 research reports in the discipline were published in other types of journals, such as general medical or epidemiology journals. Fifth, the classification of the research reports into study types was based on the judgement of one individual, and other reviewers may have chosen other study design categories and/or classified some of the studies differently. Categories such as “observational” and “descriptive” would have led to different groupings of the research reports. The system used was intended to allow an initial straightforward allocation of studies to commonly used epidemiological study designs with clear characteristics in terms of unit of analysis, number of assessment points, and number of study conditions. Its utility was limited, as relatively few of the studies used these specific research designs, and therefore post hoc subgrouping of those in the Other category became necessary.

In summary, there are many COVID-19 research reports published in high-impact public health journals during the first phase of the pandemic that have received a reasonable number of citations and appear to have undergone thorough peer review. However, there are also a large number of research reports that appear to have undergone minimal peer review and use methods that severely limit the conclusions that can reasonably be drawn from the results they present. Surprisingly, and of concern, a few of these papers have very high citation counts. Future research should examine whether the quality of highly cited public health COVID-19 research reports becomes more consistently high as time progresses.

The author has no competing interests.

No funding was received for writing this research article or for collecting or analyzing the data presented in it.

The data used in the study were Google Scholar h5-index scores for the 15 general public health journals in the top 20 public health journals for 2021 and the papers that contributed to these scores that pertained to COVID-19. Supplementary Table 1 contains the list of the top 20 journals with their h5-index scores. Supplementary Table 2 contains the full references of all the papers that contributed to each of the 15 general public health journals’ h5-indices. Those that were judged to pertain to COVID-19 are denoted by the date of the publication being in red font. For those journals, or publications, that are open access, the links in the table provide access to the PDF of the published paper. Supplementary Table 3 contains the assessment of the study design used in the COVID-19 research report papers.

Aschengrau, A., & Seage III, G. R. (2014). Epidemiology in public health (3rd edn.). Burlington, MA: Jones & Bartlett.

Bhagavathula, A. S., Aldhaleei, W. A., Rahmani, J., Mahabadi, M. A., & Bandari, D. K. (2020a). Novel coronavirus (COVID-19) knowledge and perceptions: A survey of healthcare workers. medRxiv.

Bhagavathula, A. S., Aldhaleei, W. A., Rahmani, J., Mahabadi, M. A., & Bandari, D. K. (2020b). Knowledge and perceptions of COVID-19 among health care workers: Cross-sectional study. JMIR Public Health and Surveillance, 6(2), e19160.

Bramstedt, K. A. (2020). The carnage of substandard research during the COVID-19 pandemic: A call for quality. Journal of Medical Ethics, 46(12), 803–807.

Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7, 291.

Brown, A., & Horton, R. (2020). A planetary health perspective on COVID-19: A call for papers. Lancet, 395(10230), 1099.

Centers for Disease Control and Prevention. (2022). CDC Museum COVID-19 timeline. https://www.cdc.gov/museum/timeline/covid19.html#:~:text=January%2020%2C%202020%20CDC,18%20in%20Washington%20state (accessed September 21, 2022).

Delgado López-Cózar, E., & Cabezas-Clavijo, A. (2013). Ranking journals: Could Google Scholar metrics be an alternative to Journal Citation Reports and Scimago Journal Rank? Learned Publishing, 26(2), 101–114.

Diaz, A. P., Soares, J. C., Brambilla, P., Young, A. H., & Selvaraj, S. (2021). Journal metrics in psychiatry: What do the rankings tell us? Journal of Affective Disorders, 287, 354–358.

Digitale, J. C., Stojanovski, K., McCulloch, C. E., & Handley, M. A. (2021). Study designs to assess real-world interventions to prevent COVID-19. Frontiers in Public Health, 9, 657976.

Edwards, M. A., & Roy, S. (2017). Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34(1), 51–61.

Eysenbach, G. (2009). Infodemiology and infoveillance: Framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the Internet. Journal of Medical Internet Research, 11(1), e11.

Friis, R. H., & Sellers, T. H. (2021). Epidemiology for public health practice (6th edn.). Burlington, MA: Jones & Bartlett.

Gai, N., Aoyama, K., Faraoni, D., Goldenberg, N. M., Levin, D. N., … Steinberg, B. E. (2021). General medical publications during COVID-19 show increased dissemination despite lower validation. PLOS ONE, 16(2), e0246427.

Google Scholar. (2022). Categories. https://scholar.google.com/citations?view_op=top_venues (accessed May 26, 2022).

Gorman, D. M., & Huber, J. C. (2022). Ranking of addiction journals in eight widely used impact metrics. Journal of Behavioral Addictions, 11(2), 348–360.

Horbach, S. P. J. M. (2020). Pandemic publishing: Medical journals strongly speed up their publication process for COVID-19. Quantitative Science Studies, 1(3), 1056–1067.

JMIR Publications. (2020). Call for papers: COVID-19 research rapidly peer-reviewed and published in JMIR journals. https://www.jmir.org/announcements/202 (accessed September 25, 2020).

Jung, R. G., Di Santo, P., Clifford, C., Prosperi-Porta, G., Skanes, S., … Hibbert, B. (2021). Methodological quality of COVID-19 clinical research. Nature Communications, 12(1), 943.

Khatter, A., Norton, M., Dambha-Miller, H., & Redmond, P. (2021). Is rapid scientific publication also high quality? Bibliometric analysis of highly disseminated COVID-19 research papers. Learned Publishing, 34(4), 568–577.

Kulkarni, A., Aziz, B., Shams, I., & Busse, J. W. (2009). Comparisons of citations in Web of Science, Scopus, and Google Scholar for articles published in general medical journals. JAMA, 302(10), 1092–1096.

Marsicano, C. R., & Nichols, A. R. K. (2022). In search of an academic “greatest hits” album: An examination of bibliometrics and bibliometric web platforms. Innovative Higher Education, 47(6), 1007–1023.

Martín-Martín, A., Thelwall, M., Orduna-Malea, E., & Delgado López-Cózar, E. (2021). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics, 126(1), 871–906.

Palayew, A., Norgaard, O., Safreed-Harmon, K., Andersen, T. H., Rasmussen, L. N., & Lazarus, J. V. (2020). Pandemic publishing poses a new COVID-19 challenge. Nature Human Behaviour, 4(7), 666–669.

Quinn, T. J., Burton, J. K., Carter, B., Cooper, N., Dwan, K., … Xin, Y. (2021). Following the science? Comparison of methodological and reporting quality of COVID-19 and other research from the first wave of the pandemic. BMC Medicine, 19(1), 46.

Raynaud, M., Zhang, H., Louis, K., Goutaudier, V., Wang, J., … Loupy, A. (2021). COVID-19-related medical research: A meta-research and critical appraisal. BMC Medical Research Methodology, 21(1), 1.

Schwab, S., & Held, L. (2020). Science after Covid-19: Faster, better, stronger? Significance, 17(4), 8–9.

Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384.

Tressoldi, P. E., Giofre, D., Sella, F., & Cumming, G. (2013). High impact = high statistical standards? Not necessarily so. PLOS ONE, 8(2), e56180.

Zdravkovic, M., Berger-Estilita, J., Zdravkovic, B., & Berger, D. (2020). Scientific quality of COVID-19 and SARS-CoV-2 publications in the highest impact medical journals during the early phase of the pandemic: A case control study. PLOS ONE, 15(11), e0241826.

Author notes

Handling Editor: Ludo Waltman

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
