Although funding is essential for some types of research and beneficial for others, it may constrain academic choice and creativity. Thus, it is important to check whether funding is ever unnecessary. Here we investigate whether funded U.K. research tends to be higher quality in all fields and for all major research funders. Based on peer review quality scores for 113,877 articles from all fields in the U.K.’s Research Excellence Framework (REF) 2021, we estimate that there are substantial disciplinary differences in the proportion of funded journal articles, from Theology and Religious Studies (16%+) to Biological Sciences (91%+). The results suggest that funded research is likely to be of higher quality overall, for all the largest research funders, and for 30 out of 34 REF Units of Assessment (disciplines or sets of disciplines), even after factoring out research team size. There are differences between funders in the average quality of the research supported, however. Funding seems particularly associated with higher research quality in health-related fields. The results do not show cause and effect and do not take into account the amount of funding received, but they are consistent with funding either improving research quality or being won by high-quality researchers or projects.

Writing and managing grants occupies a substantial amount of academic time, but it is not clear whether the benefits outweigh the costs in all fields. In some cases, researchers may be unable to experiment without funding, but scholars not needing new equipment, resources, or time buyout may be able to work equally well without financing. Nevertheless, little is known about the proportion of academic time spent on grant writing and administering, so it is difficult to weigh the benefits of funding against its costs. Although many studies report the proportion of time spent by academics on research, teaching, and administration (Bentley & Kyvik, 2012), they rarely ask about grant writing as a separate activity. One exception estimated that each Australian National Health and Medical Research Council grant proposal took 38 working days (nearly two months), or 28 for a resubmitted proposal. In 79% of cases, this effort was unrewarded (Herbert, Barnett et al., 2013), so Australian researchers collectively spent between half a year and a full year writing grant proposals for each one funded. A survey of 12 European countries found that between 51% and 84% of academics (71% in the United Kingdom) wrote grant proposals or otherwise responded to calls for proposals each year (Drennan, Clarke et al., 2013). This work is highly stressful, affecting work–life balance in almost all applicants, but deemed necessary partly due to institutional pressure and expectations from colleagues (Herbert, Coveney et al., 2014). There have also been claims that the constant pressure to win grants undermines the quality of research, particularly in situations where ongoing employment is funding dependent (Fumasoli, Goastellec, & Kehm, 2015), and perhaps through research time lost to grant writing. Thus, it is important to assess whether the benefits of funding always outweigh the costs. 
This article focuses on the narrower issue of whether funding is always beneficial, at least in the sense of being associated with higher research quality outputs.

Although the time and equipment for early scientific research was self-financed or informally supported by benefactors, the system of competitively awarding national grants for future research emerged from the prize system (for previous achievements) in France before the First World War (Crosland & Galvez, 1989). Over the last half century, university funding in many countries has changed from being awarded unconditionally for the benefit of science, albeit with a focus on government priority areas such as defense, to being mainly accountable and harnessed for societal benefits (Banchoff, 2002; Demeritt, 2000; Lepori, van den Besselaar et al., 2007; Mustar & Larédo, 2002), such as medical priority areas (Xu, Zhang et al., 2014). Resource-seeking behaviors (“academic capitalism”; Slaughter & Leslie, 2001) are long-established norms in several major research countries (Johnson & Hirt, 2011; Metcalfe, 2010). Research funding is now primarily awarded for achievements (i.e., performance-based funding; Hicks, 2012) or future promise, through competitive grants (OECD, 2014). This is supplemented by incentives to seek finances from industry and other nonacademic sources to fund research for nonacademic benefits (Laudel, 2005). External funding is considered valuable despite the huge amount of lost time spent by experts writing grant proposals (Polster, 2007) and the potential to skew science (Thyer, 2011). National grant awards may aim to generally support promising research or researchers, or support research with societal benefits (e.g., Takemura, 2021).

The effect of funding seems likely to depend on the researcher, with field-related variations. For example, in specialties needing funding for any kind of research (e.g., areas of medicine or genetics), such as to employ enough assistants or to access equipment or consumables, a researcher without funding cannot research, so what do they do instead? If their role is not changed to teaching only (Edwards, 2022; Nyamapfene, 2018) or professional (e.g., clinical doctor), they might use any research time allowance to write grant proposals, accept consultancy or advisory roles, read academic articles, develop aspects of their skills, develop their research methods or theory, or devote more time to teaching, administration, or other roles. They may also support others’ funded research projects in a minor role. In some cases, they may also write short papers about aspects of their research process, such as ethics or minor methods details. In contrast, other researchers may easily be able to conduct unfunded research, even though funding might improve their work with better equipment or larger teams. From a U.K. Research Excellence Framework (REF) perspective, funding might improve research productivity by supporting larger teams (allowing divisions of labor) and give the funded researcher a larger pool of publications from which to choose for REF evaluations. Thus, any comparison between funded and unfunded research presupposes that it is possible to do research in a field without funding and necessarily excludes researchers who can only conduct funded research but did not receive funding in the period examined. The situation is complicated in the United Kingdom because university teaching budgets subsidize 13% of research (Olive, 2017) in a way that may not be recorded.
Researchers with the choice may prefer unfunded research because it gives them autonomy from funder goals and requirements (Edwards, 2022), particularly in the era of challenge-led research (Olive, 2017).

Few studies have explicitly examined the relationship between research quality and funding, except with bibliometric proxies for quality, such as citation rates. One exception found industry-funded submissions to a 2006 scoliosis conference not to have statistically significantly different peer review quality scores (Roach, Skaggs et al., 2008). Another showed that urogynecology randomized controlled trials had higher quality methods when they were funded (Kim, Chung et al., 2018). Although, as reviewed below, many studies have shown in different contexts that funded research tends to be more cited than unfunded research, it is not known whether this is true in all fields and if funded research also tends to be higher quality. Thus, the research questions below collectively address issues about the relationship between funding and research quality that have not previously been directly investigated (particularly RQ2). A descriptive first question is included for background to the main questions. The focus is on funders rather than funding because of substantial differences between funders in practice, and differences between funders are investigated in RQ3. Team size is also included (RQ4) because many funders mandate or otherwise encourage larger authorship/grant teams, so it is important to differentiate authorship team size effects (which are associated with more highly cited research) from the funding itself. A question about citation counts is included (RQ5) to assess their value as a tool because of their use in most previous investigations of research funding. The questions of this exploratory study are mostly addressed with a combination of simple bivariate statistical tests or correlations on a data set of U.K. articles with research quality scores and funding information (e.g., average quality of research supported by funders or unfunded) but regression is also used for RQ4.

  • RQ1: How prevalent is research funding for U.K. REF journal articles and are there disciplinary differences in the answer?

  • RQ2: Is funded research (U.K. REF journal articles) higher quality for all major research funders?

  • RQ3: Do research funders support different quality research (U.K. REF journal articles)?

  • RQ4: Is funded research (U.K. REF journal articles) of higher quality irrespective of authorship team size?

  • RQ5: Are average citation counts effective proxies for average quality for externally funded research (U.K. REF journal articles)?

This section mainly reviews research findings. No theory of the disciplinary organization of science has yet shed light on the relationship between funding and research quality. A partial exception is that grant review outcomes might be expected to be more unpredictable in fields with a low level of agreement on the objects and methods of research (task uncertainty: Whitley, 2000), probably including most of the social sciences and humanities.

2.1. Types of Funding

There are many types of grant funding, in addition to recurring block grants. Each has its own unique characteristics, including goal specificity, competitiveness, funding a project or person, applicant restrictions, duration, cost scope, and embedded evaluation criteria. These factors may affect whether a grant is beneficial to a researcher’s output, so it is unsurprising that the performance of funded researchers varies between funding schemes, even for public funders (Wang, Wang et al., 2020). It is impossible to differentiate between funding types in practice for any comprehensive investigation into the impact of all types of funding on research. Nevertheless, relevant evidence has emerged from prior studies for two dimensions, as summarized below.

  • Size: The average size of individual grants has increased over recent decades, for example with long-term funding for large centers of excellence at the expense of sets of individual grants (Bloch & Sørensen, 2015; OECD, 2014). In the United States, block-funded National Science Foundation centers do not seem to improve the journal outputs of members, although they do improve commercial partnerships (Gaughan & Bozeman, 2002). Smaller grants seem to help productivity more than larger grants for research centers (Bloch, Schneider, & Sinkjær, 2016). Smaller research awards lead to more citations overall for biological science research (Gallo, Carpenter et al., 2014). In medicine, a small amount of funding from public and private research contracts and consultancies reduces research impact but a large amount increases it (Muscio, Ramaciotti, & Rizzo, 2017). The latter may reflect the large-scale funding needed for effective medical studies in many cases, with underfunded research also being underpowered.

  • Rationale/source: The source of funding received by a research group influences their research agenda (Currie-Alder, 2015; Hottenrott & Lawson, 2017; Tellmann, 2022). In terms of quality, spinal research harnessed weaker types of evidence (e.g., case series) when it had industry funding but was more likely to report positive outcomes (Amiri, Kanesalingam et al., 2014).

2.2. Effectiveness of Grant Proposal Peer Review and Characteristics of Recipients

For any analysis of the influence of funding on research, it is difficult to distinguish between cause and effect in terms of funders finding the best research/researchers or the funding improving/allowing research/researchers. Although some grant selection processes focus on applicant characteristics, most concentrate on the proposal, checking its rationale, evaluating its validity, and (often) match with funding criteria (Chubin, 1994; Franssen, Scholten et al., 2018; van Arensbergen & van den Besselaar, 2012).

There is limited overall evidence of the effectiveness of peer review for grant proposals (Liaw, Freedman et al., 2017), based on evaluations typically using citation indicators as a proxy for research quality or achievements. In some contexts, higher scores or success in winning awards have been shown to associate with more citations (Bornmann & Daniel, 2006; Gallo et al., 2014) or more outputs (Fang, Bowen, & Casadevall, 2016; Győrffy, Herman, & Szabó, 2020). In contrast, for economic and social sciences research council grants in the Netherlands, while weak researchers tended to be rejected, awardees performed substantially worse in bibliometric terms than rejected researchers with similar scores (van den Besselaar & Leydesdorff, 2009). This suggests that the research council process selected above-average researchers but not the very highest performing (at least bibliometrically), or that the funding was detrimental.

Many studies have found disparities in review outcomes that are suggestive of bias, whether deliberate or accidental, or systemic effects. These biases include gender (Cruz-Castro, Ginther, & Sanz-Menendez, 2022; Tricco, Thomas et al., 2017), age (Levitt & Levitt, 2017), ethnicity (Cruz-Castro et al., 2022; Hayden, 2015), interdisciplinarity (Seeber, Vlegels, & Cattaneo, 2022), and institutional prestige (Ali, Bhattacharyya, & Olejniczak, 2010; Enger & Castellacci, 2016; Horta, Huisman, & Heitor, 2008; Jappe & Heinze, 2023). All biases seem likely to reduce the effectiveness of the grant allocation process and hence, presumably, the overall benefits of funding.

2.3. The Impact of Grants on Research Productivity and Impact

Funding could be expected to increase the productivity or impact of the funded researchers. The benefits of research funding are impossible to fully quantify, and it is difficult to generate meaningful statistics because of the lack of effective control groups in most cases, and particularly the ability of unfunded groups to receive funding from sources other than the one examined (Neufeld, 2016; Schneider & van Leeuwen, 2014). Most previous studies have analyzed individual funding sources and assumed that the papers acknowledging them were primarily caused by the funding, whereas journal articles often draw upon a range of different long-term and short-term funding for equipment and different team members as well as specific project-based grants, at least for biomedical research (Rigby, 2011). Moreover, many studies do not distinguish between selection effects and funding effects (Neufeld, 2016): Are funded researchers more productive because of the money or because better researchers/proposals were selected, or both? Furthermore, all studies so far have had limited scope: There are different types of funding and disciplinary differences in funding uses and procedures, so there is unlikely to be a simple relationship between funding and impacts. For example, larger funded studies may find it easier to get ethical approval to research in clinical settings (Jonker, Cox, & Marshall, 2011), reducing the number of unfunded studies.

Funding usually associates with (i.e., correlates with but does not necessarily cause) increased research productivity, as measured by journal articles, often even after the end of the funding period (Chudnovsky, López et al., 2008; Godin, 2003; Defazio, Lockett, & Wright, 2009; Ebadi & Schiffauerova, 2016; El-Sawi, Sharp, & Gruppen, 2009; Hussinger & Carvalho, 2022; Saygitov, 2018; Shimada, Tsukada, & Suzuki, 2017) but commercial funding can slow academic publishing because of the need to write patents or produce other outcomes (Hottenrott & Thorwarth, 2011). A systematic attempt to track down all funding sources for research from one university suggested that funding increased productivity but not citation impact, although it would be difficult to disentangle disciplinary differences in funding value with this data (Sandström, 2009).

Funding also usually associates with higher citation impact (e.g., Álvarez-Bornstein, Díaz-Faes, & Bordons, 2019; Berman, Borkowski et al., 1995; Gush, Jaffe et al., 2018; Heyard & Hottenrott, 2021; Jowkar, Didegah, & Gazni, 2011; Levitt, 2011; Lewison & Dawson, 1998; Neufeld, 2016; Peritz, 1990; Rigby, 2011; Roshani, Bagherylooieh et al., 2021; Thelwall, Kousha et al., 2016; Yan, Wu, & Song, 2018) but there are exceptions (Alkhawtani, Kwee, & Kwee, 2020; Jowkar et al., 2011; Langfeldt, Bloch, & Sivertsen, 2015; Neufeld, 2016; Sandström, 2009). In support of the latter, 89% of the most cited rhinoplasty articles published by 2015 were unfunded (Sinha, Iqbal et al., 2016) and 30% of key papers for physics, chemistry, and medicine Nobel Prize winners 2000–2008 declared no funding (Tatsioni, Vavva, & Ioannidis, 2010). Unfunded research might sometimes be highly cited because it has more scope to be innovative, at least in fields such as library and information science not needing expensive resources (Zhao, 2010). Grants may constrain academic freedom, which is a particular threat to the role of social science research in challenging authority and in being able to interpret results free from external pressures (Kayrooz, Åkerlind, & Tight, 2007).

2.4. Levels and Types of Unfunded Research

Most research in the previous century was unfunded, at least as reported in journals. An early study of 900 journal articles in three medical journals from 1987, 1989, and 1991 found high levels of unfunded research (at least without declared funding sources): internal medicine (60%), pathology (62%), and surgery (74%) (Berman et al., 1995). Similarly, in 1987, 1989, and 1991, 84% of journal articles by pathologists were unfunded (Borkowski, Berman, & Moore, 1992) and 63% of emergency medicine articles were unfunded in 1994 (Ernst, Houry, & Weiss, 1997). In 1992, however, only 23% of internal medicine and neurology journal articles were unfunded (Stein, Rubenstein, & Wachtel, 1993). Partly funded research is also common in medicine (Mai, Agan et al., 2013).

Early unfunded research was often different from funded research (Bodensteiner, 1995; Silberman & Snyderman, 1997; Stein et al., 1993) and a few studies have compared funded with unfunded research types this century. For Spanish virology, cardiology, and cardiovascular scholars, unfunded research was hospital based and clinical, suggesting that it had been internally supported by hospital resources (Álvarez-Bornstein et al., 2019). Unfunded investigations may tend to be desk research or other cheaper types, including secondary data analysis (Vaduganathan, Nagarur et al., 2018), guidelines (Goddard, James et al., 2011), review articles (e.g., Imran, Aamer et al., 2020), retrospective records-based analyses (e.g., Brookes, Farr et al., 2021; Sedney, Daffner et al., 2016), small case studies (e.g., Qi & Wei, 2021), or analytical/theoretical/opinion papers without primary data (Underhill, Dols et al., 2020). In nursing, evidence-based practice research may often be unfunded because the data analyzed may come mainly from investigators’ daily work roles (Higgins, Downes et al., 2019). Researching may be a compulsory part of some higher-level courses, such as for radiology, and this may result in many small-scale unfunded studies by educators and learners (Johnson, Mathews, & Artemakis, 2002). In medicine, unfunded research may be disproportionately from general practitioners compared to hospital doctors because they lack the infrastructure to obtain and maintain large grants (van Driel, Deckx et al., 2017).

3.1. Data

For this study, the U.K. Research and Innovation (UKRI) national science and research funding agency gave us the preliminary scores from March 2022 of all 148,977 journal articles submitted to the U.K. REF 2021, excluding those from the University of Wolverhampton (the project host institution, for confidentiality reasons). The REF (REF, 2022) is a periodic (up to seven-year gaps) exercise to evaluate U.K. academic research and, among other things, allocate the U.K. block research funding grants known as “Mainstream QR,” worth over £2 billion per year for up to seven years. The REF includes postpublication expert review of selected outputs (1–5 per researcher), from which we were given the journal articles. Academics submit only their best outputs over the period and teaching staff do not need to submit anything, so the articles analyzed are likely to represent predominantly the best research produced by U.K. researchers 2014–20. Each article had been given a “quality” score by at least two out of over 1,000 expert assessors (usually full professors), with the grades being 1* (nationally recognized), 2* (internationally recognized), 3* (internationally excellent), and 4* (world leading). The grades reflect originality, significance, and rigor, with different and detailed guidelines for these from each of four overseeing Main Panels (REF, 2020, pp. 34–51). There was careful norm referencing between assessors within each of the 34 Units of Assessment (UoAs) to which they had been assigned to ensure that the scores by different pairs of assessors were comparable. There was also overall norm referencing for the entire REF. The peer review process is carefully managed because of its multibillion-pound financial value (about £50,000 per individual score, on average), although the reviewers are not experts in all areas that they need to assess.
Each UoA covers what might be called a broad field (very broad in some cases, like UoA 34: Communication, Cultural and Media Studies, Library and Information Management) and is either a recognizable discipline (e.g., UoA 18 Law) or a set of related disciplines (e.g., UoA 8 Agriculture, Food and Veterinary Sciences). The four Main Panels group together cognitively related UoAs for administrative and norm referencing purposes.

The REF articles were matched against Scopus records by DOI comparisons (n = 133,218). REF articles without a DOI in Scopus were matched instead by title, with a manual check to accept or reject all potential matches (n = 997). The Scopus record was used for funding and citation information. Scopus cross-references information in articles with funding information gained from other sources populating its funding database (McCullough, 2021). Scopus reports a single funder for each paper, at least through its Application Programming Interface (API), as used to gather the data, although some studies have multiple funders. Thus, the funder-level results reported here are based on incomplete data.
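The two-stage matching procedure can be sketched as follows. This is an illustrative Python fragment with hypothetical record structures (the `id`, `doi`, and `title` keys are assumptions), not the code used in the study:

```python
# Minimal sketch of DOI-first, title-fallback record matching
# (hypothetical field names; real REF and Scopus records differ).

def normalize_title(title):
    """Lowercase and strip non-alphanumeric characters for robust comparison."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def match_records(ref_articles, scopus_records):
    """Match REF articles to Scopus records by DOI, falling back to title."""
    by_doi = {r["doi"]: r for r in scopus_records if r.get("doi")}
    by_title = {normalize_title(r["title"]): r for r in scopus_records}
    matches = {}
    for art in ref_articles:
        rec = by_doi.get(art.get("doi"))
        if rec is None:
            # Title matches were manually checked in the study; here they
            # are simply recorded as candidate matches.
            rec = by_title.get(normalize_title(art["title"]))
        if rec is not None:
            matches[art["id"]] = rec
    return matches
```

In the study, all title-based candidate matches were accepted or rejected by hand; a sketch like this would only produce the candidates for that manual check.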

Some of the articles were given multiple grades from the same or different UoAs. This is possible because each author is entitled to submit between one and five outputs for which they are a coauthor, and coauthors from different institutions may choose the same article(s). For analysis, duplicate articles were removed within the grouping analyzed (UoA, Main Panel, or all). When an article had received different scores, it was given the median score, choosing at random between the two central scores when there was an even number of them.
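The score consolidation rule can be sketched as follows (illustrative Python, not the study's code):

```python
import random

def consensus_score(scores):
    """Return the median quality score for a duplicated article; when the
    article has an even number of scores, choose randomly between the two
    central values (a 'random median'), as in the study."""
    ordered = sorted(scores)
    n = len(ordered)
    if n % 2 == 1:
        return ordered[n // 2]
    return random.choice([ordered[n // 2 - 1], ordered[n // 2]])
```

This keeps every consolidated score on the original 1*–4* scale, unlike an arithmetic-mean median, which could produce half-grades.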

Scopus was used for funding information because of its wider coverage than the Web of Science (Martín-Martín, Thelwall et al., 2021) and because Google Scholar does not extract relevant information. Scopus started systematically indexing funding in 2011 (Rigby, 2011) so its data should be mature for the REF period 2014–20. Funding data in academic articles might be in a separate “Funding sources” section, in the acknowledgments, or as a footnote alongside author information. The acknowledgment section was a common place for funding information (Paul-Hus, Díaz-Faes et al., 2017) before the rise of the dedicated funding section.

Some article funders were universities, suggesting that the authors had been allocated internal university money for their research or that it was unfunded but recorded as university-funded to reflect employers allowing research time for the scholars involved, or for university policy reasons. As it was not possible to distinguish between the two, for the regression analysis, research was classed as unfunded if the funder was a university, irrespective of country. For this, we checked the 4,042 funders and classified 1,317 of them as internal university or research institute funding (e.g., Weizmann Institute of Science) and 2,725 as external funding (e.g., American Mathematical Society). After this stage, research was classified as externally funded if it declared a funder in Scopus and the funder name was not one of the 1,317 universities found. When funding information was present (e.g., a grant number) but no name for the funder was given, it was assumed to be externally funded.
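The classification rule can be sketched as below. The funder names in the internal list are examples only; the study used a manually checked list of 1,317 internal university/institute funders:

```python
# Illustrative internal-funder list (examples only; the study's list
# contained 1,317 manually classified entries).
INTERNAL_FUNDERS = {"weizmann institute of science", "university of oxford"}

def funding_class(funder_name, has_funding_info):
    """Classify an article as externally funded, internally funded, or unfunded."""
    if funder_name:
        if funder_name.strip().lower() in INTERNAL_FUNDERS:
            return "internal/unfunded"  # treated as unfunded in the analyses
        return "external"
    # Funding details (e.g., a grant number) without a funder name were
    # assumed to indicate external funding.
    return "external" if has_funding_info else "unfunded"
```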

3.2. Data Quality Checks

To check whether the Scopus API funding information was accurate, for six UoAs chosen to represent different field types, we read samples of articles for details of research support. For each UoA, 100 unfunded articles, 100 university-funded articles, and 100 nonuniversity-funded articles were selected with a random number generator for checking (or all of them, when fewer than 100 were available). The checks were performed by two people: the first author checked all articles, and one of ES, MM, or MA also checked each. Publisher versions of articles were checked for funding information except when the preprint was online with funding information. In one case (Theology, unfunded) we were unable to obtain the article through any method (including interlibrary loans) and it was substituted with the next article selected by the random number generator. A study was counted as university funded if the only funding source mentioned was university based. It was recorded as externally funded if any nonuniversity funding source was mentioned.

Funding could be mentioned in multiple places, although a “Funding” section or an “Acknowledgments” section at the end of the article were common, and a “Disclosure of Interests” end section sometimes also included funding information. Other places included first page footnotes, last page footnotes (rare), a notes section at the end of the article, the first paragraph of the article (rare), and the last paragraph of the conclusions (common in Physics, one example in Theology). Articles sometimes declared that the research was unfunded, usually within a funding section, and sometimes in a disclosure of interests section. In one case, a funding section declared that the research was unfunded but the acknowledgments section thanked a funder, so this was coded as funded. Some articles included author biographies that might have mentioned funding sources but never did.

Funding statements varied in length from short declarations of the funder name and grant number to several paragraphs of thanks. In some fields it was common to thank departments hosting a visit or seminar and current and former employers. An article was classed as funded if this was stated directly (e.g., “funded by”) or if it was suggested by the context, such as by naming a research funding organization or thanking one for an unspecified type of “support.” Acknowledgments of support from universities were not counted as funding if these seemed to be minor and routine, such as hosting a visit or supporting a seminar. University support was counted as funding for the purposes of these checks if the term “funding” or “grant” was mentioned, or if it was obvious from the context that a financial transaction had occurred, as in the case of a PhD studentship. In a few cases, support in kind was provided, such as through access to equipment, but this was not counted as funding. Research described as part-funded was recorded as funded.

Although in some cases the article appeared to be the primary outcome of a grant, in most cases the relationship between the funding and the output was less clear. For example, the article could be one of the outputs of a PhD studentship or Leverhulme Trust fellowship. Many articles had authors with differing funding sources, suggesting that the study itself had not been funded but had been made possible by funding given to the participants. Such studies were counted as funded. In Medicine and Physics, for example, long paragraphs often recorded the financial support given to all participants as well as the equipment used and the study itself.

The manually collected information is also imperfect. A funded article may have no declaration within the text if the author forgot or the journal style or field norms discouraged it. Checks were made of cases where Scopus recorded a funder but the article did not mention one. These checks found examples where Scopus was correct because the article was listed on a funding website as an output of the grant. Although Scopus has reported that its funding information is imported from the acknowledgment sections of articles (Beatty, 2017), it seems likely that it now automatically links articles to funding records from elsewhere and might also perform wider searches of article text. In other cases the funding was plausible because the scholar thanked the same funder on a different output at a similar time or listed the funder on their online CV. Scopus also seemed to have listed incorrect funders in at least two cases: the wrong funder altogether in one case, and a university in another case where the author had included an acknowledgment that an earlier version had been presented at a seminar at that university. These were not altered in our data because the checks were for quality assessment rather than correction.

Comparing the Scopus API information with the manual checks, the Scopus API results were always imperfect and substantially misleading in some cases (Figure 1). Almost all Clinical Medicine and Physics articles were externally funded (i.e., at least one nonuniversity funder) even if the Scopus API listed none. In these cases, Scopus had presumably not found where the funding was listed in the article. Physics article funding statements were often in the last paragraph of the conclusions, where they may have been missed. For all six fields, most articles classed as university funded (i.e., the Scopus API funder was apparently a university) were externally funded. This typically occurred because the Scopus API reports only one funder, and the manual checks classed an article as externally funded if any of the funders were not universities. For four of the six UoAs, however, most Scopus API results were correct for unfunded and externally funded articles. This information should be taken into consideration when interpreting the results.

Figure 1.

The results of manual checks of random samples of REF2021 articles recorded by Scopus as funded (listing a university or funder) or unfunded for six UoAs.


3.3. Analyses

For RQ1, the proportion of articles declaring research funding was calculated for each UoA and Main Panel.

For RQ2 and RQ3, the average quality of the articles from each funder was calculated and compared to the average quality of unfunded research. The grade point average (GPA) was used for this, which is the arithmetic mean of the quality scores. Although widely used in the United Kingdom in rankings of institutions, the GPA is a convenience and not theoretically informed because there is no reason to believe that a 4* article is four times as good as a 1* article. Nevertheless, it at least gives a straightforward and easily understandable indicator of average quality scores for funded journal articles. The 30 largest funders (including unfunded and unknown funder) were reported. The choice of 30 is relatively arbitrary. The GPA for small funders with a few articles would be an imprecise estimate of the funder’s average research quality, and 30 is a common statistical choice for the minimum size to identify a pattern. This calculation ignores funders not reported by the Scopus API, which particularly affects articles with multiple funders. The RQ2 test involves making multiple comparisons using confidence intervals, and this increases the chance of getting at least one statistically positive result by accident, the problem of familywise error rates. We report unadjusted confidence intervals because the individual funders are of interest, but use Bonferroni corrections (see Perneger, 1998) when discussing the results as a group. These lower the significance threshold for each comparison so that the probability of making at least one false positive (i.e., a Type I error) across all comparisons is kept at the 0.05 level.
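The GPA and Bonferroni calculations can be sketched as follows. This is a minimal stdlib-only Python sketch assuming a normal-approximation confidence interval, which may differ from the exact interval method used in the study:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def gpa_with_ci(scores, alpha=0.05):
    """Grade point average of 1*-4* quality scores with a (assumed)
    normal-approximation confidence interval."""
    gpa = mean(scores)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half_width = z * stdev(scores) / sqrt(len(scores))
    return gpa, (gpa - half_width, gpa + half_width)

def bonferroni_alpha(alpha, m):
    """Bonferroni correction: with m funder comparisons, test each at
    alpha/m to keep the familywise Type I error rate at alpha."""
    return alpha / m
```

For example, with 30 funder comparisons, each individual comparison would be tested at 0.05/30 ≈ 0.0017 rather than 0.05.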

We used ordinal regression (the polr function in the R MASS package with the Hess = TRUE option) to answer RQ4, with research quality as the dependent variable and research funding (binary) and the log of the number of authors as independent variables. A similar approach has been used with citations as a proxy for research quality as the dependent variable (Ebadi & Schiffauerova, 2016). We excluded 23 articles with no authors listed from the regressions. We ran a separate regression for each UoA and Main Panel (combining similar UoAs) and for all the data. Ordinal regression only assumes that the four quality scores are in ascending order but does not assume that they are equidistant, so it is better than types of regression requiring a scalar output (Gutiérrez, Perez-Ortiz et al., 2015). By including both authors and funding as independent variables, the regression output can show whether one of the two is redundant in any area. The log of the number of authors was used instead of the number of authors because the relationship between author numbers and log-transformed citation counts is approximately logarithmic (Thelwall & Maflahi, 2020), and the shape is similar for the relationship between REF scores and author numbers (Thelwall, Kousha et al., 2022a).

For RQ5, we calculated a field-normalized citation score for every REF2021 article to allow fair comparisons between articles from different fields. For this, we first log normalized each citation count with ln(1 + x) to reduce skewing in the data set caused by very highly cited articles (Thelwall & Fairclough, 2017). Then, we calculated the average of the logged citations for each of the 330 Scopus narrow fields and each year 2014–18 (i.e., 5 × 330 averages) and divided each article’s logged citation count by the average for its narrow field and year. Articles in multiple fields were instead divided by the average of the relevant field averages. This gives a Normalized Log-transformed Citation Score (NLCS) (Thelwall, 2017) for each journal article. These can be compared between fields and years because, by design, a score of 1 always reflects an average number of citations for the field and year of an article. Averaging the NLCS of all articles associated with a funder gives the funder’s Mean NLCS (MNLCS), which is a measure of the normalized average citation rate for the journal articles it funded. Again, an MNLCS above 1 always reflects a funder that tends to fund articles that are more cited than average for their fields and years. The most recent 2 years were excluded from the calculation to give a citation window of at least 2 years, reducing the influence of short citation windows. Although a 3-year citation window is better (Wang, 2013), it would reduce the amount of data and the log transformation in the NLCS formula reduces the statistical variability caused by short time windows.
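The NLCS/MNLCS calculation can be sketched as follows. The citation counts are invented, and for simplicity all articles are assumed to belong to a single Scopus narrow field and year (the real calculation uses 5 × 330 field-year averages and averages over fields for multi-field articles).

```python
import math
from collections import defaultdict
from statistics import mean

def nlcs(citations, field_year_mean_log):
    """Normalized Log-transformed Citation Score: ln(1 + c) / field-year mean of ln(1 + c)."""
    return math.log(1 + citations) / field_year_mean_log

# Hypothetical articles: (citations, Scopus narrow field, publication year).
articles = [(10, "F1", 2015), (0, "F1", 2015), (50, "F1", 2015)]

# Mean of logged citation counts per (field, year) pair.
logged = defaultdict(list)
for c, field, year in articles:
    logged[(field, year)].append(math.log(1 + c))
field_year_avg = {k: mean(v) for k, v in logged.items()}

# NLCS per article; averaging over a funder's articles gives its MNLCS.
scores = [nlcs(c, field_year_avg[(f, y)]) for c, f, y in articles]
mnlcs = mean(scores)  # 1.0 means world-average citation impact for the field and year
print(round(mnlcs, 2))
```

By construction, the MNLCS of the whole field-year set is exactly 1, which is what makes funder-level MNLCS values comparable across fields and years.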

In all analyses, we did not take into account any dependencies in the data caused by up to five outputs being submitted by a single researcher, and this is a limitation. On average, each full-time equivalent (FTE) researcher submitted 2.5 outputs. A minimum of one and a maximum of five outputs could be submitted by a single academic (whether full time or part time). Accounting for some nonarticle outputs and an unknown number of part timers, researchers probably submitted about two articles each, on average. If every researcher produced uniform quality solo work, then this would reduce the effective sample sizes in all the analyses by 50%. Nevertheless, uniform quality work for all researchers is unrealistic and most work was coauthored, so the effective sample size reduction due to dependency (i.e., two articles are more likely to have the same REF score if they have at least one author in common) is unknown. Because of this, the widths of the confidence intervals in all the graphs should be treated with caution.

4.1. RQ1: Prevalence of Research Funding

Just under two-thirds (63%) of journal articles submitted to REF2021 had funding information recorded by the Scopus API, with substantial disciplinary differences (Table 1, Figure 2). This figure excludes funded journal articles where the funder was not recorded by the author, the journal did not allow a funding declaration, or a technical issue prevented Scopus from finding the declaration (see Figure 1). The funded set also includes research that was internally funded, whether nominally (part of the scholar’s job to research) or more substantially, such as with money for equipment or research assistants. Some universities (e.g., University of Wolverhampton, not in the data set) now require scholars to record their employer as the funder within the internal research information management system for articles not externally funded, and this may encourage them to report the same within their articles.

Table 1.

Number of articles, unfunded articles, university-funded articles, and funders per UoA, Main Panel, or overall

Set | UoA or Main Panel | Articles | Unfunded articles | University-funded articles | Funders | Funders with 5+ articles
1: Clinical Medicine 9,916 1,173 262 844 134 
2: Public Health, Health Services & Primary Care 3,890 745 130 401 75 
3: Allied Health Prof., Dentistry, Nursing & Pharm 9,675 2,885 701 1,045 166 
4: Psychology, Psychiatry & Neuroscience 8,173 2,271 390 672 111 
5: Biological Sciences 6,376 576 244 592 86 
6: Agriculture, Food & Veterinary Sciences 3,147 653 178 446 52 
7: Earth Systems & Environmental Sciences 3,724 541 198 456 51 
8: Chemistry 3,274 426 172 321 43 
9: Physics 4,499 396 98 272 43 
10: Mathematical Sciences 5,111 1,402 245 424 50 
11: Computer Science & Informatics 4,646 1,565 250 438 57 
12: Engineering 16,335 4,395 1,000 1,095 195 
13: Architecture, Built Environment & Planning 2,582 1,225 207 334 37 
14: Geography & Environmental Studies 3,439 947 266 467 52 
15: Archaeology 545 156 50 123 14 
16: Economics & Econometrics 1,762 856 132 216 28 
17: Business & Management Studies 11,853 8,210 759 810 117 
18: Law 1,864 1,442 80 153 13 
19: Politics & International Studies 2,502 1,610 150 245 21 
20: Social Work & Social Policy 3,295 1,779 209 334 34 
21: Sociology 1,498 727 85 175 21 
22: Anthropology & Development Studies 977 443 84 160 18 
23: Education 3,337 2,028 186 308 37 
24: Sport & Exercise Sciences, Leisure & Tourism 2,812 1,753 205 327 37 
25: Area Studies 524 329 31 86 
26: Modern Languages & Linguistics 962 583 49 111 10 
27: English Language and Literature 768 592 30 69 
28: History 1,082 769 47 91 
29: Classics 111 82 17 
30: Philosophy 806 559 49 75 
31: Theology & Religious Studies 185 155 15 
32: Art and Design: History, Practice & Theory 1,117 693 65 145 12 
33: Music, Drama, Dance, Perform. Arts, Film 544 380 19 52 
34: Comm. Cultural & Media Stud. Lib & Info Man 1,020 724 49 90 11 
Main Panel A (UoAs 1–6) 39,248 7,925 1,804 4,107 438 
Main Panel B (UoAs 7–12) 36,614 8,610 1,925 1,825 335 
Main Panel C (UoAs 13–24) 35,634 20,819 2,361 1,858 328 
Main Panel D (UoAs 25–34) 7,071 4,842 345 456 52 
All All (UoAs 1–34) 113,877 41,649 6,297 4,107 882 
Figure 2.

The percentage of U.K. REF2021 journal articles with a declared source of funding in Scopus.

Funding is the norm for Main Panels A (80%) and B (76%), but half as prevalent in Main Panels C (40%) and D (32%). The difference is presumably due to the need for equipment and large teams in the health, life, and physical sciences (except for purely theoretical contributions), whereas expensive or perishable equipment is probably rarer in the social sciences, arts, and humanities, except for long-term purchases (e.g., musical instruments). Moreover, there may be more social sciences, arts, and humanities topics that can be researched in small teams or alone. The four UoAs with the highest proportions of funded papers are Biological Sciences (91%), Physics (91%), Clinical Medicine (88%), and Chemistry (87%). All these subjects have subfields that do not need expensive equipment: theoretical physics, theoretical chemistry, biostatistics (related to medicine), and systems biology. Thus, the result may reflect “cheaper” specialties being rare in the United Kingdom or globally.

4.2. RQ2: Is Funded Research Higher Quality for All Major Research Funders?

The GPA of the REF2021 scores of funded journal articles tends to be higher than the unfunded article GPA for most large research funders in Main Panels A–D (Figures 3–6). In the few cases where the funded GPA is lower than the unfunded GPA, the confidence interval for the former almost always includes the latter. The sole minor exception is European Commission funding in Main Panel C (Figure 5). Nevertheless, this exception could be a side effect of the large number of tests (29 × 4), and with a Bonferroni correction, the difference between European Commission-funded research and unfunded research in Main Panel C is not statistically significant. Thus, at the Main Panel level, the results are broadly consistent with research funding being an advantage for all major funders, albeit marginal in some cases.

Figure 3.

The average quality score of REF2021 journal articles by research funder for Main Panel A (mainly health and life sciences) for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.

Figure 4.

The average quality score of REF2021 journal articles by research funder for Main Panel B (mainly engineering, physical sciences and mathematics) for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.

Figure 5.

The average quality score of REF2021 journal articles by research funder for Main Panel C (mainly social sciences) for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.

Figure 6.

The average quality score of REF2021 journal articles by research funder for Main Panel D (mainly arts and humanities) for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.

For Main Panel A (Figure 3), all research funder GPAs are above the unfunded GPA and none of the research funder confidence intervals contain the unfunded GPA. Thus, funding from a major funder is an advantage in Main Panel A. The same is broadly true for Main Panel B (Figure 4) except that four of the funder GPA confidence intervals contain the unfunded score.

The pattern is mixed for Main Panel C (Figure 5), perhaps because of smaller sample sizes giving less accurate mean estimates and wider confidence intervals. Although there are three funders with GPAs below the unfunded GPA, there are many funders with GPAs substantially above it and with narrow confidence intervals. Thus, there is still a general trend for major funder money to be advantageous in Main Panel C. For Main Panel D, most funders have a GPA above the unfunded GPA, and a few have substantially higher GPAs with narrow confidence intervals, suggesting that major funder money is also an advantage here.

Major funders also tend to support higher quality research when the data are aggregated to the level of individual UoAs, although there are some exceptions. Some illustrative examples are discussed here, focusing on larger UoAs for which the patterns are clearest. For Clinical Medicine (UoA 1, Figure 7), Engineering and Physical Sciences Research Council (EPSRC) funded research surprisingly generated lower quality scores than unfunded research. The reason for this may be that UoA 1 assessors did not value research with substantial inputs from nonmedical fields in the context of their UoA (e.g., because of more rigid quality criteria: Whitley, 2000). There is no similar problem for UoAs 2 (Figure 8) and 3 (Figure 9).

Figure 7.

The average quality score of REF2021 journal articles by research funder for UoA 1 Clinical Medicine for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.

Figure 8.

The average quality score of REF2021 journal articles by research funder for UoA 2 Public Health, Health Services and Primary Care for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.

Figure 9.

The average quality score of REF2021 journal articles by research funder for UoA 3 Allied Health Prof., Dentistry, Nursing & Pharmacy for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.

The presence of pharmaceutical companies as funders for health and medical research is clear in UoAs 1–3 (Figures 7–9). The research that they fund tends to have a substantially higher GPA than unfunded research, suggesting that the commercial income enhances rather than compromises academic quality, or that a commercial funder boosts the significance component of quality for REF assessors.

The second largest UoA, Engineering (Figure 10), illustrates the general advantage of major research funders for quality in this field. Although most of the funders are governmental research funding bodies, military funding is also clearly associated with above-average quality research.

Figure 10.

The average quality score of REF2021 journal articles by research funder for UoA 12 Engineering for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.

Funding seems to be only a marginal advantage for the largest UoA, Business and Management Studies, as eight of the top 28 funders have a below average GPA (Figure 11). Moreover, articles supported by the core funder, the Economic and Social Research Council (ESRC), average only a 0.1 higher GPA than unfunded articles. The European Research Council was (pre-Brexit) particularly effective at funding high-quality research, but this is a logical side effect of its strategy of selecting “top researchers” through very competitive grants.

Figure 11.

The average quality score of REF2021 journal articles by research funder for UoA 17 Business and Management Studies for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.

4.3. RQ3: Do Research Funders Support Different Quality Research?

As the graphs above illustrate, there are statistically significant differences in the average quality of research supported by different funders. For example, in UoA 1 Clinical Medicine (Figure 7), the average GPAs of the three main research funders differ, with their confidence intervals not overlapping. In particular, the National Institutes of Health (NIH) funded particularly high-quality research, followed by the Wellcome Trust (U.K. charity) and the Medical Research Council (MRC), all of which have large budgets and general funding remits. The NIH advantage may be that, because it is based in the United States, the research it funds with U.K. partners is usually international, and its grants are backed by the greater financial resources of the United States.

4.4. RQ4: Does Authorship Team Size Moderate the Effect of Funding on Research Quality?

Research has shown that articles with more authors tend to be more cited, and funded research tends to involve larger teams, so it is possible that the advantage of funding sometimes lies primarily in bringing together many authors. In our data, for all UoAs and Main Panels, funder GPA correlates positively with the average (geometric mean) number of authors on papers associated with the funder (Figure 12, GPA vs. authors). In other words, the larger the average authorship team size supported by a funder, the higher the average quality of the research it funds. The correlations tend to be strong in Main Panels A and B.
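A minimal sketch of this correlation step, using invented per-funder data in place of the REF figures: each funder contributes the geometric mean of its papers' author counts and its GPA, and the Pearson correlation is computed across funders.

```python
import math

def geometric_mean(xs):
    """Geometric mean via logs, used for skewed author counts."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-funder data: author counts of each funder's papers, and funder GPA.
mean_authors = [geometric_mean(a) for a in ([2, 3, 4], [5, 6, 8], [1, 2, 2])]
funder_gpa = [2.8, 3.4, 2.5]
print(round(pearson(mean_authors, funder_gpa), 2))
```

The geometric mean is preferred over the arithmetic mean here because author counts are highly skewed, so a single hyperauthored paper would otherwise dominate a funder's average.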

Figure 12.

Pearson correlations between funder MNLCS, GPA, and geometric mean authors by UoA or Main Panel. MNLCS correlations only cover research published 2014–18. UoAs are included only when they have at least 30 funders associated with at least five papers each.

For UoAs with at least 30 funders associated with at least five papers each, the weakest correlation between GPA and authors is for Business and Management Studies (0.06). Thus, for Business and Management Studies research funders, there is almost no relationship between average funded authorship team size and average research quality. This may be due to relatively little variation in GPA between funders and typically small research teams (average three authors per paper for all major funders, varying between 1.7 and 4.3). In contrast, the highest correlation is for Agriculture, Food and Veterinary Sciences (0.84), partly due to medical funders (MRC, NIH, Wellcome) and the Gordon and Betty Moore Foundation supporting large team research with high GPAs.

Ordinal regressions for each UoA, Main Panel, and overall (39 regressions) allow the effects of funding and author numbers to be analyzed separately. As a conservative step (see Figure 1), university-funded research was classed as unfunded, so the focus is on external funding for research. Because of the incompleteness of the funding data, the results will tend to underestimate any differences that exist. In the regressions, an exponentiated coefficient of 1 indicates that the independent variable (logged number of authors or external funding) has no effect on the odds ratios for quality scores (1, 2, 3, or 4). Values greater than 1 indicate that the variable increases the odds of a higher quality score and values less than 1 the opposite. An exponentiated funding coefficient of c means that funded articles have c times the odds of achieving a higher quality score compared to unfunded articles. Similarly, an exponentiated coefficient of c for logged authors means that multiplying the number of authors by e ≈ 2.718 multiplies the odds of a higher quality score by c.
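As a small illustration of how the reported coefficients are read (the coefficient values below are invented, not taken from the study's regressions), exponentiating a raw ordinal regression coefficient yields the odds ratio for a higher quality score:

```python
import math

# Hypothetical raw ordinal regression coefficients for the two predictors.
coef_funding = 0.45      # binary: externally funded vs. not
coef_log_authors = 0.30  # ln(number of authors)

# Exponentiating converts each coefficient to an odds ratio.
or_funding = math.exp(coef_funding)
or_authors = math.exp(coef_log_authors)

# Funded articles would have or_funding times the odds of a higher score band.
print(round(or_funding, 2))  # 1.57
# Multiplying the author count by e (~2.718) multiplies the odds by or_authors.
print(round(or_authors, 2))  # 1.35
```

A coefficient whose 95% confidence interval excludes 1 after exponentiation is what the results below treat as evidence of a funding (or team size) effect.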

The results show that, when considered independently from the number of authors, funding associates with improved odds of higher quality research in 33 out of 34 UoAs and all four Main Panels (Figure 13). The confidence intervals exclude the null value 1 for 30 out of 34 UoA regressions (and 30 out of all 39 regressions). These calculations do not include familywise error rate corrections for 39 separate tests, so may include some false positives; hence the true number of UoAs where funding is important may be less than 30. Conversely, because values greater than 1 fall comfortably within all 39 of the 95% confidence intervals, the few exceptions could be due to normal levels of chance, so it is also plausible that, after factoring out the number of authors, funding always associates with an improved chance of higher quality journal articles. Nevertheless, while funding has the most substantial association with quality in Main Panels A and B, its association is marginal in some UoAs from Main Panels C and D. Thus, overall, there is evidence that funding has a weak or moderate association with higher quality research in the social sciences, arts, and humanities, even after factoring out authorship team size, but a strong association in medicine, the life and physical sciences, and engineering.

Figure 13.

Exponentiated ordinal regression coefficients for quality score against external funding (binary) and the logged number of authors for REF2021 articles 2014–20. Error bars show 95% confidence intervals. University-funded research and research without declared funding is classified as unfunded. Each pair of coefficients shown above is from a separate model.

Although less important here, increased author numbers usually, but not always, associate with increased odds of higher quality journal articles, even after factoring out research funding. The exceptions are mainly in the arts and humanities.

4.5. RQ5: Are Average Citation Counts Effective Proxies for Average Quality for Externally Funded Research?

Research funders often have their own evaluation teams to assess the effectiveness of their grants. For this, the main quantitative evidence is likely to be citation data, perhaps with project grades from end-of-grant reviewers in some cases. If they make like-for-like comparisons against similar funders, then the only quantitative data that they would have for both would be citation counts. Thus, it is useful to check whether the average citation impact of funders is an effective proxy for the average quality of the research that they fund.

Correlations between funder citation rates (MNLCS) and average quality (GPA) are strong (> 0.5) in all Main Panels (Figure 12), suggesting that citation impact is a reasonable proxy for research quality overall. The correlations also tend to be moderate or strong in the UoAs of Main Panels A and B (Figure 12), but are variable in the UoAs of Main Panel C. In particular, the correlations are close to 0 (positive or negative) in UoAs 17 (Business and Management Studies), 20 (Social Work and Social Policy), and 23 (Education) and weak (0.2) in UoA 13 (Architecture, Built Environment and Planning). Thus, citation rates are inappropriate proxies for funder quality in these areas. By extension, and due to a lack of evidence, it seems that citation rates should not be used as proxies for funder research quality throughout the social sciences, arts, and humanities, except for Geography and Environmental Studies.

The results are limited to journal articles from the United Kingdom, and to the best 1–5 journal articles written by U.K. academics 2014–20, so are not representative of typical U.K. research (especially books). Moreover, while the scores given to the articles by the REF assessors are relatively carefully allocated, usually by two senior field experts following written guidelines (REF, 2022) and norm referenced within each UoA and broadly between UoAs, they are imperfect. In particular, an unknown but nontrivial number of articles will have been assessed by people without the knowledge to understand them, so guesswork was needed for these. Moreover, research quality is subjective and other assessors may well have given different scores to the same outputs; the assessors may also have taken into account funding when allocating scores (especially nonacademic funding as an indicator of significance). Nevertheless, the scores seem likely to be broadly reasonable, with unreasonable scores or errors being noise in the data. This hypothesis is sufficient for the above results to make sense, although noise in the data would tend to reduce the magnitude of any differences found. As a caveat, however, there are different ways of conceiving research quality and although the REF definition is relatively universal (combining originality, significance, and rigor: REF, 2022), there are others (Langfeldt, Nedeva et al., 2020). Related to this, researchers may not submit their most creative unfunded articles to the REF because of the significance and rigor criteria, and this may influence the results.

Another limitation is that the results only consider the funder reported by the Scopus API, ignoring any that Scopus could not find and all funders except one in the case of multiple-funded articles. This is a substantial limitation, as discussed in the evaluation at the end of Section 3. In particular, the extent of funding is underestimated in the data here. This does not invalidate the findings because funded research is still more likely to be recorded as such in the API (Figure 1), so the funded and unfunded groups are statistically distinct. This limitation nevertheless suggests that the true differences between funded and unfunded research are larger than those shown in the data (because the unfunded subsets are “polluted” with funded articles). The errors in the Scopus API data would also tend to reduce the difference between funded and unfunded research for the same reason. This reduction is likely to be largest where the Scopus API has the most missing information (probably the lower-numbered UoAs).

The findings ignore the value of each grant, whether the funding was partial, what the money was spent on, how many publications were produced from it, and whether journal articles were the primary outcome of the project or a side effect. They also ignore disciplinary differences in the need to record funding sources, with biomedical fields apparently most affected due to a need to register any potential conflicts of interest. They also ignore the purpose of the funding, which may not be to conduct high-quality research but to develop a technology for industry, to train a PhD student, to develop a junior postdoc, to build research networks, or to support researcher mobility. The results do not differentiate between projects awarded explicit funding by a university and projects without explicit funding but presumably consuming university resources and time: Both are classed as unfunded for the regression and are otherwise recorded as university funded only if this is stated in the funding information. More generally, the results do not take into account the time taken to write funding proposals for either successful or unsuccessful bids. Finally, funding here is tied to publications, although a team may be partly funded and draw on different sources (Aagaard, Mongeon et al., 2021).

5.1. Comparison with Prior Work

Most of the findings have little directly comparable prior work. For RQ1, the prevalence of research funding for an entire country is apparently reported for the first time, albeit with partial data. The existence of disciplinary differences in funding rates is unsurprising but does not seem to have been previously investigated for all academic fields. The prevalence of funding is much higher than previously reported (Berman et al., 1995; Borkowski et al., 1992; Ernst et al., 1997; Jowkar et al., 2011; Lim, Yoon et al., 2012; Shandhi, Goldsack et al., 2021; Stein et al., 1993), with a few exceptions (Godin, 2003), probably at least partly due to more systematic funding reporting now and to the nature of the U.K. sample (e.g., excluding publishing practitioners/professionals).

The higher quality rates for major funders (RQ2) are a new finding but echo many previous studies of individual funders that have shown funded articles or researchers to be more cited than a comparable group (unfunded articles, unsuccessful applicants, or researchers before the funding) (Álvarez-Bornstein et al., 2019; Berman et al., 1995; Gush et al., 2018; Heyard & Hottenrott, 2021; Levitt, 2011; Lewison & Dawson, 1998; Peritz, 1990; Rigby, 2011; Roshani et al., 2021; Yan et al., 2018), and conflicts with the few studies not showing this or showing the reverse in specific fields or contexts (Jowkar et al., 2011; Muscio et al., 2017; Neufeld, 2016). The discrepancies include two fields where citations are reasonably reliable indicators of quality—Biology/Biochemistry and Environment/Ecology in Iran (Jowkar et al., 2011)—so it is possible that there are international differences in the value of research funding.

The unsurprising finding that funders can support different quality research (RQ3) aligns with prior findings that research funders can support research with different average citation impacts (Thelwall et al., 2016), and that the amount of research funding influences the citation impact of the research (Muscio et al., 2017).

The finding that funded research is higher quality than unfunded research even after factoring out team size (RQ4) is not directly comparable to prior studies. It contradicts claims that the current managerial approach to research in higher education reduces the quality of research in the social sciences by restricting the autonomy of researchers (Horta & Santos, 2020), although it is not clear whether academics with more autonomy but the same amount of funding would produce better work. The evidence about the fields in which average citation counts are effective proxies for average quality (as conceived in the REF) for externally funded research (RQ5) also has no directly comparable prior studies.
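To make "factoring out team size" concrete, a crude stand-in for the study's regression control is to compare funded and unfunded articles within team-size strata. The sketch below uses invented data and invented team-size bands; it illustrates the idea only, not the authors' actual ordinal regression.

```python
# Minimal sketch (synthetic data, hypothetical bands): does a funded-article
# quality advantage persist within each team-size stratum?
from collections import defaultdict

def quality_gap_by_team_size(articles):
    """Return {team_size_band: (mean funded quality, mean unfunded quality)}.

    Each article is a dict with 'team_size', 'funded' (bool), and
    'quality' (a REF-style score, 1-4)."""
    strata = defaultdict(lambda: {True: [], False: []})
    for a in articles:
        band = "1" if a["team_size"] == 1 else ("2-4" if a["team_size"] <= 4 else "5+")
        strata[band][a["funded"]].append(a["quality"])
    gaps = {}
    for band, groups in strata.items():
        if groups[True] and groups[False]:  # need both groups to compare
            gaps[band] = (sum(groups[True]) / len(groups[True]),
                          sum(groups[False]) / len(groups[False]))
    return gaps

# Invented example: funded articles score higher within every band,
# so the advantage is not just an artifact of larger funded teams.
demo = (
    [{"team_size": 1, "funded": True, "quality": 3} for _ in range(5)]
    + [{"team_size": 1, "funded": False, "quality": 2} for _ in range(5)]
    + [{"team_size": 3, "funded": True, "quality": 4} for _ in range(5)]
    + [{"team_size": 3, "funded": False, "quality": 3} for _ in range(5)]
)
print(quality_gap_by_team_size(demo))
```

A regression additionally adjusts all strata simultaneously and handles the ordinal quality scale, but the stratified comparison conveys the same intuition.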

5.2. Alternative Causes of Funded Research Being Higher Quality

The higher quality of funded research has multiple possible causes, all of which may be true to some extent. Although it seems self-evident that funding improves research, it is not always true (Jowkar et al., 2011; Muscio et al., 2017; Neufeld, 2016). There are many pathways that could explain the usually positive relationship.

5.2.1. Funders select more successful researchers to fund

Research, albeit with limited scope, suggests that funding councils may be good at excluding weak researchers but not good at identifying the very best, at least if citations are accepted as a proxy for research quality (van den Besselaar & Leydesdorff, 2009). Assuming that the excluded weak researchers, together with researchers who were unable to submit funding bids, formed a majority or were substantially weaker than the other two groups, this would likely translate into a statistical association between funding and researcher quality. There may also be a REF selection effect that would strengthen the results, with stronger researchers differentially submitting their funded research and weaker researchers often not having funded research to submit.

5.2.2. Funding improves existing research

At the simplest level, funding may allow some researchers to conduct better versions of the research that they had already intended to pursue. For example, the funding might support a larger scale survey, newer equipment, expert collaborators, or additional supporting analyses. It seems unlikely that a project given extra funding would often become worse, for example because new equipment was bought but did not work well, or an expanded survey incorporated lower quality data collection methods in the additional areas.

5.2.3. Funding changes the research carried out, replacing weaker (or no) with stronger work

Funding might allow a study that would be impossible for the applicant(s) without external funding (Bloch, Graversen, & Pedersen, 2014). If the funding was for expensive equipment or other processes (e.g., large-scale in-person interviews) then the work seems likely to be more original than average, assuming that few researchers in a field would have access to funding for investigations with a similar purpose. For example, perhaps an Alzheimer’s researcher gets funding to run a large-scale genetic screening test and produces one of the few studies on this topic. Originality is one of the three components of research quality (Langfeldt et al., 2020), so increasing this would be enough to improve the overall quality grade for an article. Of course, funded types of research could also sometimes tend to be weaker than unfunded research in some fields or contexts. For example, funding commonly supports PhD projects (Ates & Brechelmacher, 2013), and PhD research could be better or worse than average, depending on the field.

5.2.4. Funding-led research goals are more valued

Research projects that align with funders’ strategic priorities may be highly valued if assessors accept these priorities. Although there are open call grants, some researchers may pursue unfunded research because of the freedom to choose their own priorities (Behrens & Gray, 2001; Cheek, 2008), so strategic goals seem likely to be more common in funded research. Funding also generates an implicit hierarchy of research value, in which even unfunded goals that align with societal needs may be undervalued (Frickel, Gibbon et al., 2010).

5.2.5. Funding is regarded as a good in itself

Given high levels of competition for research funding, a funding declaration may be seen as an important achievement, especially as the evaluators are mainly from a U.K. higher education environment in which funding is encouraged and rewarded. Conversely, in funding-rich areas, articles lacking funding may be treated with extra suspicion.

5.2.6. Funding entails impact requirements

Although industry funding typically has commercial value as a goal, research council grants have societal impact requirements and give resources to achieve these through dissemination activities. Thus, funded research may be more impactful through multiple pathways related to the funding sources.

6. Conclusions

In the United Kingdom, there are substantial disciplinary differences in the proportions of funded research and the extent to which funded research tends to be of higher quality than unfunded research. Although this was only evaluated in a limited U.K. REF context, the results suggest, but do not prove, that there are few (and perhaps no) broad fields of research in which funding does not help academics to produce higher quality research. The main exceptions are a few individual funders in some contexts, and the evidence is weak for the arts and humanities and some social sciences. Moreover, as the results could be equally explained by better researchers being more successful at attracting funding or funding improving the researchers’ outputs, no cause-and-effect relationship can be claimed. The results are not due to funded research tending to involve larger teams because the regressions showed a residual funding advantage after taking into account team size. Overall, however, because the results are at least consistent with research funding adding value nearly universally across disciplines, avoiding grants seems like a risk for all researchers, unless they have good reasons to believe that their research is an exception.

This study does not take into account productivity or the time spent writing successful and unsuccessful bids, so the results cannot be used for a cost–benefit analysis of funding. A reasonable cost–benefit analysis, giving useful information about disciplinary differences in the effectiveness of funding, would need more detailed research that considers the amount of funding available for each study and the role of the funding (e.g., improving existing research, allowing expensive studies), but this seems unlikely to be possible with current public data.

A secondary finding is that citations are not always effective proxies for average funder quality, especially in the social sciences, arts, and humanities. Funders and studies that use citations as proxies for quality to assess the impact of funding should only do so for the fields identified above where appropriately field-normalized citation counts correlate at least moderately with quality.
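The check implied here can be sketched as follows: compute a field-normalized citation score for each article (here, simply citations divided by the mean for the article's field and year, in the spirit of field-normalized indicators such as those in Thelwall, 2017, though not necessarily the exact formula used) and then rank-correlate those scores with peer-review quality scores. The data and field labels below are invented.

```python
# Hedged illustration: field/year-normalized citation scores and their
# Spearman rank correlation with quality scores (synthetic data).
from collections import defaultdict

def normalized_citations(articles):
    """Divide each article's citation count by its (field, year) mean."""
    totals = defaultdict(list)
    for a in articles:
        totals[(a["field"], a["year"])].append(a["citations"])
    means = {k: sum(v) / len(v) for k, v in totals.items()}
    return [a["citations"] / means[(a["field"], a["year"])]
            if means[(a["field"], a["year"])] else 0.0
            for a in articles]

def spearman(xs, ys):
    """Spearman rank correlation, using mean ranks for ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            mean_rank = (i + j) / 2 + 1  # average rank for a tie group
            for k in range(i, j + 1):
                r[order[k]] = mean_rank
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A correlation of at least moderate size between the normalized scores and the quality scores in a field would support using citations as a quality proxy there; a weak correlation, as reported for the social sciences, arts, and humanities, would not.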

Author Contributions

Mike Thelwall: Formal analysis, Methodology, Writing—original draft. Kayvan Kousha: Writing—review & editing. Mahshid Abdoli: Formal analysis, Writing—review & editing. Emma Stuart: Formal analysis, Writing—review & editing. Meiko Makita: Formal analysis, Writing—review & editing. Cristina I. Font-Julián: Writing—review & editing. Paul Wilson: Writing—review & editing. Jonathan Levitt: Methodology, Writing—review & editing.

Competing Interests

The authors have no competing interests.

Funding Information

This study was funded by Research England, Scottish Funding Council, Higher Education Funding Council for Wales, and Department for the Economy, Northern Ireland as part of the Future Research Assessment Programme (https://www.jisc.ac.uk/future-research-assessment-programme). The content is solely the responsibility of the authors and does not necessarily represent the official views of the funders.

Data Availability

The raw data were deleted before submission to follow UKRI policy for REF2021. More information about the data is available in an associated report (Thelwall, Kousha et al., 2022b; with extra information here: https://cybermetrics.wlv.ac.uk/ai/).

References

Aagaard, K., Mongeon, P., Ramos-Vielba, I., & Thomas, D. A. (2021). Getting to the bottom of research funding: Acknowledging the complexity of funding dynamics. PLOS ONE, 16(5), e0251488.

Ali, M. M., Bhattacharyya, P., & Olejniczak, A. J. (2010). The effects of scholarly productivity and institutional characteristics on the distribution of federal research grants. Journal of Higher Education, 81(2), 164–178.

Alkhawtani, R. H. M., Kwee, T. C., & Kwee, R. M. (2020). Funding of radiology research: Frequency and association with citation rate. American Journal of Roentgenology, 215(5), 1286–1289.

Álvarez-Bornstein, B., Díaz-Faes, A. A., & Bordons, M. (2019). What characterises funded biomedical research? Evidence from a basic and a clinical domain. Scientometrics, 119(2), 805–825.

Amiri, A. R., Kanesalingam, K., Cro, S., & Casey, A. T. H. (2014). Does source of funding and conflict of interest influence the outcome and quality of spinal research? Spine Journal, 14(2), 308–314.

Ates, G., & Brechelmacher, A. (2013). Academic career paths. In U. Teichler & E. Höhle (Eds.), The work situation of the academic profession in Europe: Findings of a survey in twelve countries (pp. 13–35). Springer.

Banchoff, T. (2002). Institutions, inertia and European Union research policy. Journal of Common Market Studies, 40(1), 1–21.

Beatty, S. (2017). New on Scopus: Link to datasets, search funding acknowledgements and find more CiteScore transparency. Scopus Blog. https://web.archive.org/web/20170920090909/https://blog.scopus.com/posts/new-on-scopus-link-to-datasets-search-funding-acknowledgements-and-find-more-citescore

Behrens, T. R., & Gray, D. O. (2001). Unintended consequences of cooperative research: Impact of industry sponsorship on climate for academic freedom and other graduate student outcome. Research Policy, 30(2), 179–199.

Bentley, P. J., & Kyvik, S. (2012). Academic work from a comparative perspective: A survey of faculty working time across 13 countries. Higher Education, 63(4), 529–547.

Berman, J. J., Borkowski, A., Rachocka, H., & Moore, G. W. (1995). Impact of unfunded research in medicine, pathology, and surgery. Southern Medical Journal, 88(3), 295–299.

Bloch, C., Graversen, E. K., & Pedersen, H. S. (2014). Competitive research grants and their impact on career performance. Minerva, 52(1), 77–96.

Bloch, C., Schneider, J. W., & Sinkjær, T. (2016). Size, accumulation and performance for research grants: Examining the role of size for centres of excellence. PLOS ONE, 11(2), e0147726.

Bloch, C., & Sørensen, M. P. (2015). The size of research funding: Trends and implications. Science and Public Policy, 42(1), 30–43.

Bodensteiner, J. B. (1995). The saga of the septum pellucidum: A tale of unfunded clinical investigations. Journal of Child Neurology, 10(3), 227–231.

Borkowski, A., Berman, J. J., & Moore, G. W. (1992). Research by pathologists not funded by external grant agencies: A success story. Modern Pathology, 5(5), 577–579.

Bornmann, L., & Daniel, H. D. (2006). Selecting scientific excellence through committee peer review—A citation analysis of publications previously published to approval or rejection of post-doctoral research fellowship applicants. Scientometrics, 68(3), 427–440.

Brookes, M. J., Farr, A., Phillips, C. J., & Trudgill, N. J. (2021). Management of iron deficiency anaemia in secondary care across England between 2012 and 2018: A real-world analysis of hospital episode statistics. Frontline Gastroenterology, 12(5), 363–369.

Cheek, J. (2008). The practice and politics of funded qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Strategies of qualitative inquiry (pp. 45–74). Sage.

Chubin, D. E. (1994). Grants peer review in theory and practice. Evaluation Review, 18(1), 20–30.

Chudnovsky, D., López, A., Rossi, M. A., & Ubfal, D. (2008). Money for science? The impact of research grants on academic output. Fiscal Studies, 29(1), 75–87.

Crosland, M., & Galvez, A. (1989). The emergence of research grants within the prize system of the French Academy of Sciences, 1795–1914. Social Studies of Science, 19(1), 71–100.

Cruz-Castro, L., Ginther, D. K., & Sanz-Menendez, L. (2022). Gender and underrepresented minority differences in research funding. https://www.nber.org/system/files/working_papers/w30107/w30107.pdf

Currie-Alder, B. (2015). Research for the developing world: Public funding from Australia, Canada, and the UK. Oxford: Oxford University Press.

Defazio, D., Lockett, A., & Wright, M. (2009). Funding incentives, collaborative dynamics and scientific productivity: Evidence from the EU framework program. Research Policy, 38(2), 293–305.

Demeritt, D. (2000). The new social contract for science: Accountability, relevance, and value in US and UK science and research policy. Antipode, 32(3), 308–329.

Drennan, J., Clarke, M., Hyde, A., & Politis, Y. (2013). The research function of the academic profession in Europe. In U. Teichler & E. Höhle (Eds.), The work situation of the academic profession in Europe: Findings of a survey in twelve countries (pp. 109–136). Springer.

Ebadi, A., & Schiffauerova, A. (2016). How to boost scientific production? A statistical analysis of research funding and other influencing factors. Scientometrics, 106(3), 1093–1116.

Edwards, R. (2022). Why do academics do unfunded research? Resistance, compliance and identity in the UK neo-liberal university. Studies in Higher Education, 47(4), 904–914.

El-Sawi, N. I., Sharp, G. F., & Gruppen, L. D. (2009). A small grants program improves medical education research productivity. Academic Medicine, 84(Suppl. 10), S105–S108.

Enger, S. G., & Castellacci, F. (2016). Who gets Horizon 2020 research grants? Propensity to apply and probability to succeed in a two-step analysis. Scientometrics, 109(3), 1611–1638.

Ernst, A. A., Houry, D., & Weiss, S. J. (1997). Research funding in the four major emergency medicine journals. American Journal of Emergency Medicine, 15(3), 268–270.

Fang, F. C., Bowen, A., & Casadevall, A. (2016). NIH peer review percentile scores are poorly predictive of grant productivity. eLife, 5, e13323.

Franssen, T., Scholten, W., Hessels, L. K., & de Rijcke, S. (2018). The drawbacks of project funding for epistemic innovation: Comparing institutional affordances and constraints of different types of research funding. Minerva, 56(1), 11–33.

Frickel, S., Gibbon, S., Howard, J., Kempner, J., Ottinger, G., & Hess, D. J. (2010). Undone science: Charting social movement and civil society challenges to research agenda setting. Science, Technology, & Human Values, 35(4), 444–473.

Fumasoli, T., Goastellec, G., & Kehm, B. M. (2015). Academic careers and work in Europe: Trends, challenges, perspectives. In T. Fumasoli (Ed.), Academic work and careers in Europe: Trends, challenges, perspectives (pp. 201–214). Springer.

Gallo, S. A., Carpenter, A. S., Irwin, D., McPartland, C. D., Travis, J., … Glisson, S. R. (2014). The validation of peer review through research impact measures and the implications for funding strategies. PLOS ONE, 9(9), e106474.

Gaughan, M., & Bozeman, B. (2002). Using curriculum vitae to compare some impacts of NSF research grants with research center funding. Research Evaluation, 11(1), 17–26.

Goddard, A. F., James, M. W., McIntyre, A. S., & Scott, B. B. (2011). Guidelines for the management of iron deficiency anaemia. Gut, 60(10), 1309–1316.

Godin, B. (2003). The impact of research grants on the productivity and quality of scientific research. Ottawa: INRS Working Paper.

Gush, J., Jaffe, A., Larsen, V., & Laws, A. (2018). The effect of public funding on research output: The New Zealand Marsden Fund. New Zealand Economic Papers, 52(2), 227–248.

Gutiérrez, P. A., Perez-Ortiz, M., Sanchez-Monedero, J., Fernandez-Navarro, F., & Hervas-Martinez, C. (2015). Ordinal regression methods: Survey and experimental study. IEEE Transactions on Knowledge and Data Engineering, 28(1), 127–146.

Győrffy, B., Herman, P., & Szabó, I. (2020). Research funding: Past performance is a stronger predictor of future scientific output than reviewer scores. Journal of Informetrics, 14(3), 101050.

Hayden, E. C. (2015). Racial bias continues to haunt NIH grants. Nature, 527(7578), 286–287.

Herbert, D. L., Barnett, A. G., Clarke, P., & Graves, N. (2013). On the time spent preparing grant proposals: An observational study of Australian researchers. BMJ Open, 3(5), e002800.

Herbert, D. L., Coveney, J., Clarke, P., Graves, N., & Barnett, A. G. (2014). The impact of funding deadlines on personal workloads, stress and family relationships: A qualitative study of Australian researchers. BMJ Open, 4(3), e004462.

Heyard, R., & Hottenrott, H. (2021). The value of research funding for knowledge creation and dissemination: A study of SNSF Research Grants. Humanities and Social Sciences Communications, 8(1), 1–16.

Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41(2), 251–261.

Higgins, A., Downes, C., Varley, J., Doherty, C. P., Begley, C., & Elliott, N. (2019). Evidence-based practice among epilepsy specialist nurses in the Republic of Ireland: Findings from the SENsE study. Journal of Nursing Management, 27(4), 840–847.

Horta, H., Huisman, J., & Heitor, M. (2008). Does competitive research funding encourage diversity in higher education? Science and Public Policy, 35(3), 146–158.

Horta, H., & Santos, J. M. (2020). Organisational factors and academic research agendas: An analysis of academics in the social sciences. Studies in Higher Education, 45(12), 2382–2397.

Hottenrott, H., & Lawson, C. (2017). Fishing for complementarities: Research grants and research productivity. International Journal of Industrial Organization, 51(1), 1–38.

Hottenrott, H., & Thorwarth, S. (2011). Industry funding of university research and scientific productivity. Kyklos, 64(4), 534–555.

Hussinger, K., & Carvalho, J. N. (2022). The long-term effect of research grants on the scientific output of university professors. Industry and Innovation, 29(4), 463–487.

Imran, N., Aamer, I., Sharif, M. I., Bodla, Z. H., & Naveed, S. (2020). Psychological burden of quarantine in children and adolescents: A rapid systematic review and proposed solutions. Pakistan Journal of Medical Sciences, 36(5), 1106–1116.

Jappe, A., & Heinze, T. (2023). Research funding in the context of high institutional stratification: Policy scenarios for Europe based on insights from the United States. In B. Lepori, B. Jongbloed, & D. Hicks (Eds.), Handbook of public research funding (pp. 203–220). Edward Elgar.

Johnson, A. J., Mathews, V. P., & Artemakis, A. (2002). American Society of Neuroradiology research survey 2001. Academic Radiology, 9(7), 810–814.

Johnson, A. T., & Hirt, J. B. (2011). Reshaping academic capitalism to meet development priorities: The case of public universities in Kenya. Higher Education, 61(4), 483–499.

Jonker, L., Cox, D., & Marshall, G. (2011). Considerations, clues and challenges: Gaining ethical and trust research approval when using the NHS as a research setting. Radiography, 17(3), 260–264.

Jowkar, A., Didegah, F., & Gazni, A. (2011). The effect of funding on academic research impact: A case study of Iranian publications. Aslib Proceedings, 63(6), 593–602.

Kayrooz, C., Åkerlind, G. S., & Tight, M. (Eds.) (2007). Autonomy in social science research, volume 4: The view from United Kingdom and Australian universities. Bradford, UK: Emerald Group Publishing Limited.

Kim, K. S., Chung, J. H., Jo, J. K., Kim, J. H., Kim, S., … Lee, S. W. (2018). Quality of randomized controlled trials published in the International Urogynecology Journal 2007–2016. International Urogynecology Journal, 29(7), 1011–1017.

Langfeldt, L., Bloch, C. W., & Sivertsen, G. (2015). Options and limitations in measuring the impact of research grants—Evidence from Denmark and Norway. Research Evaluation, 24(3), 256–270.

Langfeldt, L., Nedeva, M., Sörlin, S., & Thomas, D. A. (2020). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58(1), 115–137.

Laudel, G. (2005). Is external research funding a valid indicator for research performance? Research Evaluation, 14(1), 27–34.

Lepori, B., van den Besselaar, P., Dinges, M., Potì, B., Reale, E., … van der Meulen, B. (2007). Comparing the evolution of national research policies: What patterns of change? Science and Public Policy, 34(6), 372–388.

Levitt, J. M. (2011). Are funded articles more highly cited than unfunded articles? A preliminary investigation. In E. Noyons, P. Ngulube, & J. Leta (Eds.), Proceedings of ISSI 2011 (pp. 1013–1015). ISSI Press.

Levitt, M., & Levitt, J. M. (2017). Future of fundamental discovery in US biomedical research. Proceedings of the National Academy of Sciences, 114(25), 6498–6503.

Lewison, G., & Dawson, G. (1998). The effect of funding on the outputs of biomedical research. Scientometrics, 41(1–2), 17–27.

Liaw, L., Freedman, J. E., Becker, L. B., Mehta, N. N., & Liscum, L. (2017). Peer review practices for evaluating biomedical research grants: A scientific statement from the American Heart Association. Circulation Research, 121(4), e9–e19.

Lim, K. J., Yoon, D. Y., Yun, E. J., Seo, Y. L., Baek, S., … Kim, S. S. (2012). Characteristics and trends of radiology research: A survey of original articles published in AJR and Radiology between 2001 and 2010. Radiology, 264(3), 796–802.

Mai, T. V., Agan, D. L., Clopton, P., Collins, G., & DeMaria, A. N. (2013). The magnitude and nature of unfunded published cardiovascular research. Journal of the American College of Cardiology, 61(3), 275–281.

Martín-Martín, A., Thelwall, M., Orduna-Malea, E., & Delgado López-Cózar, E. (2021). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics, 126(1), 871–906.

McCullough, R. (2021). Improvements to funding data in Scopus: Now 16.5M articles with funding information and easier to identify funded research. Scopus Blog. https://web.archive.org/web/20220630145119/https://blog.scopus.com/posts/improvements-to-funding-data-in-scopus-now-165m-articles-with-funding-information-and-easier

Metcalfe, A. S. (2010). Revisiting academic capitalism in Canada: No longer the exception. Journal of Higher Education, 81(4), 489–514.

Muscio, A., Ramaciotti, L., & Rizzo, U. (2017). The complex relationship between academic engagement and research output: Evidence from Italy. Science and Public Policy, 44(2), 235–245.

Mustar, P., & Larédo, P. (2002). Innovation and research policy in France (1980–2000) or the disappearance of the Colbertist state. Research Policy, 31(1), 55–72.

Neufeld, J. (2016). Determining effects of individual research grants on publication output and impact: The case of the Emmy Noether Programme (German Research Foundation). Research Evaluation, 25(1), 50–61.

Nyamapfene, A. Z. (2018). Teaching-only academics in a research intensive university: From an undesirable to a desirable academic identity. Exeter, UK: University of Exeter.

Olive, V. (2017). How much is too much? Cross-subsidies from teaching to research in British universities. Oxford: Higher Education Policy Institute.

Paul-Hus, A., Díaz-Faes, A. A., Sainte-Marie, M., Desrochers, N., Costas, R., & Larivière, V. (2017). Beyond funding: Acknowledgement patterns in biomedical, natural and social sciences. PLOS ONE, 12(10), e0185578.

Peritz, B. (1990). The citation impact of funded and unfunded research in economics. Scientometrics, 19(3–4), 199–206.

Perneger, T. V. (1998). What’s wrong with Bonferroni adjustments. British Medical Journal, 316(7139), 1236–1238.

Polster, C. (2007). The nature and implications of the growing importance of research grants to Canadian universities and academics. Higher Education, 53(5), 599–622.

Qi, J., & Wei, C. (2021). Performance evaluation of climate-adaptive natural ventilation design: A case study of semi-open public cultural building. Indoor and Built Environment, 30(10), 1714–1724.

REF. (2022). Guidance on the REF 2021 results. https://ref.ac.uk/guidance-on-results/guidance-on-ref-2021-results/

Rigby, J. (2011). Systematic grant and funding body acknowledgement data for publications: New dimensions and new controversies for research policy and evaluation. Research Evaluation, 20(5), 365–375.

Roach, J. W., Skaggs, D. L., Sponseller, P. D., & MacLeod, L. M. (2008). Is research presented at the scoliosis research society annual meeting influenced by industry funding? Spine, 33(20), 2208–2212.

Roshani, S., Bagherylooieh, M. R., Mosleh, M., & Coccia, M. (2021). What is the relationship between research funding and citation-based performance? A comparative analysis between critical disciplines. Scientometrics, 126(9), 7859–7874.

Sandström, U. (2009). Research quality and diversity of funding: A model for relating research money to output of research. Scientometrics, 79(2), 341–349.

Saygitov, R. T. (2018). The impact of grant funding on the publication activity of awarded applicants: A systematic review of comparative studies and meta-analytical estimates. bioRxiv.

Schneider, J. W., & van Leeuwen, T. N. (2014). Analysing robustness and uncertainty levels of bibliometric performance statistics supporting science policy. A case study evaluating Danish postdoctoral funding. Research Evaluation, 23(4), 285–297.

Sedney, C. L., Daffner, S. D., Stefanko, J. J., Abdelfattah, H., Emery, S. E., & France, J. C. (2016). Fracture of fusion mass after hardware removal in patients with high sagittal imbalance. Journal of Neurosurgery: Spine, 24(4), 639–643.

Seeber, M., Vlegels, J., & Cattaneo, M. (2022). Conditions that do or do not disadvantage interdisciplinary research proposals in project evaluation. Journal of the Association for Information Science and Technology, 73(8), 1106–1126.

Shandhi, M. M. H., Goldsack, J. C., Ryan, K., Bennion, A., Kotla, A. V., … Dunn, J. (2021). Recent academic research on clinically relevant digital measures: Systematic review. Journal of Medical Internet Research, 23(9), e29875.

Shimada, Y. A., Tsukada, N., & Suzuki, J. (2017). Promoting diversity in science in Japan through mission-oriented research grants. Scientometrics, 110(3), 1415–1435.

Silberman, E. K., & Snyderman, D. A. (1997). Research without external funding in North American psychiatry. American Journal of Psychiatry, 154(8), 1159–1160.

Sinha, Y., Iqbal, F. M., Spence, J. N., & Richard, B. (2016). A bibliometric analysis of the 100 most-cited articles in rhinoplasty. Plastic and Reconstructive Surgery Global Open, 4(7), e820.

Slaughter, S., & Leslie, L. L. (2001). Expanding and elaborating the concept of academic capitalism. Organization, 8(2), 154–161.

Stein, M. D., Rubenstein, L., & Wachtel, T. J. (1993). Who pays for published research? JAMA, 269(6), 781–782.

Takemura, S. (2021). Health research policy and systems in Japan: A review focused on the Health, Labour and Welfare Sciences Research Grants. Journal of the National Institute of Public Health, 70(1), 2–12.

Tatsioni, A., Vavva, E., & Ioannidis, J. P. A. (2010). Sources of funding for Nobel Prize-winning work: Public or private? FASEB Journal, 24(5), 1335–1339.

Tellmann, S. M. (2022). The societal territory of academic disciplines: How disciplines matter to society. Minerva, 60(2), 159–179.

Thelwall, M. (2017). Three practical field normalised alternative indicator formulae for research evaluation. Journal of Informetrics, 11(1), 128–151.

Thelwall, M., & Fairclough, R. (2017). The accuracy of confidence intervals for field normalised indicators. Journal of Informetrics, 11(2), 530–540.

Thelwall, M., Kousha, K., Abdoli, M., Stuart, E., Makita, M., … Levitt, J. (2022a). Why are co-authored academic articles more cited: Higher quality or larger audience? arXiv, arXiv:2212.06571.

Thelwall, M., Kousha, K., Abdoli, M., Stuart, E., Makita, M., … Levitt, J. (2022b). Can REF output quality scores be assigned by AI? Experimental evidence. arXiv, arXiv:2212.08041.

Thelwall, M., Kousha, K., Dinsmore, A., & Dolby, K. (2016). Alternative metric indicators for funding scheme evaluations. Aslib Journal of Information Management, 68(1), 2–18.

Thelwall, M., & Maflahi, N. (2020). Academic collaboration rates and citation associations vary substantially between countries and fields. Journal of the Association for Information Science and Technology, 71(8), 968–978.

Thyer, B. A. (2011). Harmful effects of federal research grants. Social Work Research, 35(1), 3–7.

Tricco, A. C., Thomas, S. M., Antony, J., Rios, P., Robson, R., … Straus, S. E. (2017). Strategies to prevent or reduce gender bias in peer review of research grants: A rapid scoping review. PLOS ONE, 12(1), e0169718.

Underhill, L. J., Dols, W. S., Lee, S. K., Fabian, M. P., & Levy, J. I. (2020). Quantifying the impact of housing interventions on indoor air quality and energy consumption using coupled simulation models. Journal of Exposure Science & Environmental Epidemiology, 30(3), 436–447.

Vaduganathan, M., Nagarur, A., Qamar, A., Patel, R. B., Navar, A. M., … Butler, J. (2018). Availability and use of shared data from cardiometabolic clinical trials. Circulation, 137(9), 938–947.

van Arensbergen, P., & van den Besselaar, P. (2012). The selection of scientific talent in the allocation of research grants. Higher Education Policy, 25(3), 381–405.

van den Besselaar, P., & Leydesdorff, L. (2009). Past performance, peer review and project selection: A case study in the social and behavioral sciences. Research Evaluation, 18(4), 273–288.

van Driel, M., Deckx, L., Cooke, G., Pirotta, M., Gill, G. F., & Winzenberg, T. (2017). Growing and retaining general practice research leaders in Australia: How can we do better? Australian Family Physician, 46(10), 757–762.

Wang, J. (2013). Citation time window choice for research impact evaluation. Scientometrics, 94(3), 851–872.

Wang, L., Wang, X., Piro, F. N., & Philipsen, N. J. (2020). The effect of competitive public funding on scientific output: A comparison between China and the EU. Research Evaluation, 29(4), 418–429.

Whitley, R. (2000). The intellectual and social organization of the sciences. Oxford University Press.

Xu, G., Zhang, Z., Lv, Q., Li, Y., Ye, R., … Liu, X. (2014). NSFC health research funding and burden of disease in China. PLOS ONE, 9(11), e111458.

Yan, E., Wu, C., & Song, M. (2018). The funding factor: A cross-disciplinary examination of the association between research funding and citation impact. Scientometrics, 115(1), 369–384.

Zhao, D. (2010). Characteristics and impact of grant-funded research: A case study of the library and information science field. Scientometrics, 84(2), 293–306.

Author notes

Handling Editor: Ludo Waltman

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.