1-11 of 11
Lutz Bornmann
Quantitative Science Studies (2023) 4 (4): 800–819.
Published: 01 November 2023
Abstract
This paper compares two measures of the organizational size of higher education institutions (HEIs) widely used in the literature: the number of academic personnel (AP), measured according to definitions from international education statistics, and the scientific talent pool (STP), i.e., the number of unique authors affiliated with the HEI as derived from the Scopus database. Based on their definitions and operationalizations, we derive expectations about the factors generating differences between these two measures, related to the HEI’s research orientation and subject mix, as well as to the presence of a university hospital. We test these expectations on a sample of more than 1,500 HEIs in Europe by combining data from the European Tertiary Education Register and from the SCImago Institutions Ranking. Our results provide support for the expected relationships and also highlight cases where the institutional perimeter of HEIs is systematically different between the two sources. We conclude that these two indicators provide complementary measures of institutional size, one more focused on the organizational perimeter as defined by employment relationships, the other on the persons who contribute to the HEI’s scientific visibility. Comparing the two indicators is therefore likely to provide a more in-depth understanding of the resources available to HEIs.
Quantitative Science Studies (2021) 2 (4): 1246–1270.
Published: 01 December 2021
Abstract
Controlling for confounding factors is one of the central aspects of quantitative research. Although methods such as linear regression models are common, their results can be misleading under certain conditions. We demonstrate how statistical matching can be utilized as an alternative that enables the inspection of post-matching covariate balance. This contribution serves as an empirical demonstration of matching in bibliometrics and discusses its advantages and potential pitfalls. We propose matching as an easy-to-use approach in bibliometrics to estimate effects and remove bias. To exemplify matching, we use data about papers published in Physical Review E and a selection classified as milestone papers. We analyze whether milestone papers score higher than nonmilestone papers on a proposed class of indicators for measuring disruptiveness. We consider the disruption indicators DI1, DI5, DI1n, DI5n, and DEP and test which of them performs best, based on the assumption that milestone papers should have higher disruption indicator values than nonmilestone papers. Four matching algorithms (propensity score matching (PSM), coarsened exact matching (CEM), entropy balancing (EB), and inverse probability of treatment weighting (IPTW)) are compared. We find that CEM and EB perform best with regard to covariate balancing, and that DI5 and DEP perform well for evaluating the disruptiveness of published papers.
Includes: Supplementary data
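
To make the matching idea above concrete, the following sketch shows a coarsened exact matching (CEM) comparison in Python/pandas. The data frame, the milestone flag, the covariate names, and the bin count are hypothetical placeholders; this illustrates the general CEM logic only, not the authors' analysis code.

# A minimal sketch of coarsened exact matching (CEM), under assumed column names.
import pandas as pd

def cem_effect(df, treat_col, outcome_col, covariates, bins=5):
    """Coarsen covariates into bins, keep only strata containing both treated
    (milestone) and control (nonmilestone) papers, and return the treated-weighted
    mean difference in the outcome (an ATT-style estimate)."""
    coarsened = df.copy()
    for cov in covariates:
        coarsened[cov] = pd.cut(coarsened[cov], bins=bins, labels=False)

    diffs, weights = [], []
    for _, stratum in coarsened.groupby(covariates):
        treated = stratum.loc[stratum[treat_col] == 1, outcome_col]
        control = stratum.loc[stratum[treat_col] == 0, outcome_col]
        if len(treated) and len(control):  # keep strata with common support only
            diffs.append(treated.mean() - control.mean())
            weights.append(len(treated))

    return sum(w * d for w, d in zip(weights, diffs)) / sum(weights)

# Hypothetical usage:
# papers = pd.read_csv("physreve_papers.csv")
# print(cem_effect(papers, "milestone", "DI5", ["pub_year", "n_references", "n_authors"]))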
Quantitative Science Studies (2021) 2 (4): 1486–1510.
Published: 01 December 2021
Abstract
While previous research has mostly focused on the “number of mentions” of scientific research on social media, the current study applies “topic networks” to measure public attention to scientific research on Twitter. Topic networks are the networks of co-occurring author keywords in scholarly publications and networks of co-occurring hashtags in the tweets mentioning those publications. We investigate which topics in opioid scholarly publications have received public attention on Twitter. Additionally, we investigate whether the topic networks generated from the publications tweeted by all accounts (bot and nonbot accounts) differ from those generated by nonbot accounts. Our analysis is based on a set of opioid publications from 2011 to 2019 and the tweets associated with them. Results indicated that Twitter users have mostly used generic terms to discuss opioid publications, such as “Pain,” “Addiction,” “Analgesics,” “Abuse,” “Overdose,” and “Disorders.” A considerable number of tweets were produced by accounts identified as automated social media accounts, known as bots. There was a substantial overlap between the topic networks based on tweets by all accounts (bot and nonbot accounts) and those based on tweets by nonbot accounts only. This result indicates that it might not be necessary to exclude bot accounts when generating topic networks, as they have a negligible impact on the results. This study provides some preliminary evidence that scholarly publications have a network agenda-setting effect on Twitter.
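
As an illustration of the topic-network construction described above, the sketch below builds an undirected co-occurrence network of hashtags in Python; edge weights count how often two terms appear together in the same tweet (or publication). The example terms and tweets are invented for demonstration and are not data from the study.

# A minimal sketch of a hashtag co-occurrence ("topic") network.
from itertools import combinations
from collections import Counter

def cooccurrence_network(term_lists):
    edges = Counter()
    for terms in term_lists:
        # deduplicate and normalize terms within one tweet/publication
        for a, b in combinations(sorted(set(t.lower() for t in terms)), 2):
            edges[(a, b)] += 1
    return edges

tweets = [
    ["Pain", "Addiction", "Opioids"],
    ["Overdose", "Opioids", "Abuse"],
    ["Pain", "Analgesics", "Opioids"],
]
for (a, b), weight in cooccurrence_network(tweets).most_common(5):
    print(f"{a} -- {b}: {weight}")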
Quantitative Science Studies (2021) 2 (2): 438–453.
Published: 15 July 2021
Abstract
Open Science is an umbrella term that encompasses many recommendations for possible changes in research practices, management, and publishing with the objective of increasing transparency and accessibility. It has become an important science policy issue that all disciplines should consider. Many Open Science recommendations may be valuable for the further development of research and publishing, but not all are relevant to all fields. This opinion paper considers the aspects of Open Science that are most relevant for scientometricians and discusses how they can be usefully applied.
Quantitative Science Studies (2020) 1 (4): 1510–1528.
Published: 01 December 2020
Abstract
Adequately disambiguating author names in bibliometric databases is a precondition for conducting reliable analyses at the author level. In bibliometric studies that include many researchers, it is not possible to disambiguate every single researcher manually. Several approaches have been proposed for author name disambiguation, but they have not yet been compared under controlled conditions. In this study, we compare a set of unsupervised disambiguation approaches. Unsupervised approaches specify a model to assess the similarity of author mentions a priori instead of training a model with labeled data. To evaluate the approaches, we applied them to a set of author mentions annotated with a ResearcherID, an author identifier maintained by the researchers themselves. Apart from comparing overall performance, we take a more detailed look at the role of the parametrization of the approaches and analyze how the results depend on the complexity of the disambiguation task. Furthermore, we examine what effect the differences in the set of metadata considered by the approaches have on the disambiguation results. In the context of this study, the approach proposed by Caron and van Eck (2014) produced the best results.
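
The scoring-and-clustering idea behind such unsupervised approaches (e.g., Caron and van Eck, 2014) can be sketched as follows. The metadata fields, point weights, and threshold are illustrative assumptions, not the published rule set: pairs of author mentions collect similarity points for shared metadata, and pairs above a threshold are merged into one author cluster.

# A minimal sketch of rule-based, unsupervised author name disambiguation.
def score(m1, m2):
    s = 2 * len(m1["coauthors"] & m2["coauthors"])      # shared coauthors
    if m1["email"] and m1["email"] == m2["email"]:       # identical email address
        s += 10
    if m1["affiliation"] and m1["affiliation"] == m2["affiliation"]:
        s += 3
    return s

def disambiguate(mentions, threshold=5):
    parent = list(range(len(mentions)))                  # union-find structure
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(mentions)):
        for j in range(i + 1, len(mentions)):
            if score(mentions[i], mentions[j]) >= threshold:
                parent[find(i)] = find(j)                # merge the two clusters
    clusters = {}
    for i in range(len(mentions)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

mentions = [
    {"coauthors": {"A. Smith", "B. Lee"}, "affiliation": "MPG", "email": "x@mpg.de"},
    {"coauthors": {"B. Lee"}, "affiliation": "MPG", "email": "x@mpg.de"},
    {"coauthors": {"C. Wu"}, "affiliation": "ETH", "email": ""},
]
print(disambiguate(mentions))  # [[0, 1], [2]]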
Quantitative Science Studies (2020) 1 (4): 1553–1569.
Published: 01 December 2020
Abstract
Since the 1980s, many different methods have been proposed to field-normalize citations. In this study, an approach is introduced that combines two previously introduced methods: citing-side normalization and citation percentiles. Combining the two methods allows their respective advantages to be integrated in one solution. Based on citing-side normalization, each citation is field weighted and, therefore, contextualized in its field. The most important advantage of citing-side normalization is that it is not necessary to work with a specific field categorization scheme for the normalization procedure. The disadvantages of citing-side normalization—the calculation is complex and the numbers are elusive—can be compensated for by calculating percentiles based on the weighted citations that result from citing-side normalization. On the one hand, percentiles are easy to understand: they are the percentage of papers published in the same year with a lower citation impact. On the other hand, weighted citation distributions are skewed distributions with outliers, and percentiles are well suited to locating a focal paper in such distributions of comparable papers. The new approach of calculating percentiles based on weighted citations is demonstrated in this study on the basis of a citation impact comparison between several countries.
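
The percentile definition used here (the percentage of papers published in the same year with a lower citation impact) translates directly into code. In the sketch below the citing-side-weighted citation scores are assumed to be given; the weighting step itself is not reproduced.

# A minimal sketch of the percentile of a focal paper among same-year papers.
def citation_percentile(focal_weighted_citations, same_year_weighted_citations):
    lower = sum(1 for c in same_year_weighted_citations if c < focal_weighted_citations)
    return 100.0 * lower / len(same_year_weighted_citations)

# Example: a paper with weighted impact 4.2 among five papers from its publication year
print(citation_percentile(4.2, [0.5, 1.3, 4.2, 4.9, 9.1]))  # -> 40.0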
Quantitative Science Studies (2020) 1 (3): 1242–1259.
Published: 01 August 2020
Abstract
Recently, Wu, Wang, and Evans (2019) proposed a new family of indicators that measure whether a scientific publication is disruptive to a field or tradition of research. Such disruptive influence is characterized by citing papers that cite the focal paper but not its cited references. In this study, we are interested in the question of convergent validity, and we use external criteria of newness to examine it: in the postpublication peer review system of F1000Prime, experts assess whether the research reported in a paper fulfills these criteria (e.g., reports new findings). This study is based on 120,179 papers from F1000Prime published between 2000 and 2016. In the first part of the study we discuss the indicators. Based on the insights from this discussion, we propose alternative variants of disruption indicators. In the second part, we investigate the convergent validity of the indicators and the (possibly) improved variants. Although the results of a factor analysis show that the different variants measure similar dimensions, the results of regression analyses reveal that one variant (DI5) performs slightly better than the others.
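
For orientation, the following sketch computes the basic disruption index of Wu, Wang, and Evans (2019) from set-valued citation data. The parameter x hints at threshold variants such as DI5; the exact definitions of the variants discussed in the paper differ in detail, so this is an illustration of the general principle rather than a reimplementation.

# A minimal sketch of the disruption index DI = (n_f - n_b) / (n_f + n_b + n_r), where
#   n_f: papers citing the focal paper but none of its cited references
#   n_b: papers citing the focal paper and at least x of its cited references
#   n_r: papers citing at least one cited reference but not the focal paper
def disruption_index(citers_of_focal, citers_of_references, refs_cited_by, x=1):
    """citers_of_focal: set of IDs citing the focal paper.
    citers_of_references: set of IDs citing >= 1 cited reference of the focal paper.
    refs_cited_by: dict mapping a citing-paper ID to the number of the focal
    paper's cited references it also cites."""
    n_f = sum(1 for p in citers_of_focal if refs_cited_by.get(p, 0) == 0)
    n_b = sum(1 for p in citers_of_focal if refs_cited_by.get(p, 0) >= x)
    n_r = len(citers_of_references - citers_of_focal)
    return (n_f - n_b) / (n_f + n_b + n_r)

# Tiny invented example:
citers_focal = {"p1", "p2", "p3"}
refs_cited = {"p1": 0, "p2": 2, "p3": 0, "p4": 1}
citers_refs = {"p2", "p4"}
print(disruption_index(citers_focal, citers_refs, refs_cited))  # (2 - 1) / (2 + 1 + 1) = 0.25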
Quantitative Science Studies (2020) 1 (2): 792–809.
Published: 01 June 2020
Abstract
Societal impact considerations play an increasingly important role in research evaluation. In particular, in the context of publicly funded research, proposal templates commonly include sections to outline strategies for achieving broader impact. Both the assessment of these strategies and the later evaluation of their success are associated with challenges in their own right. Ever since their introduction, altmetrics have been discussed as a remedy for assessing the societal impact of research output. On the basis of data from a research center in Switzerland, this study explores their potential for this purpose. The study is based on the papers (and the corresponding metrics) published by about 200 accepted or rejected applicants for funding by the Competence Center Environment and Sustainability (CCES). The results of the study seem to indicate that altmetrics are not suitable for reflecting the societal impact of the research considered: the metrics do not correlate with the ex ante considerations of an expert panel.
Quantitative Science Studies (2020) 1 (2): 675–690.
Published: 01 June 2020
Abstract
Citations can be used in evaluative bibliometrics to measure the impact of papers. However, citation analysis can be extended by taking a multidimensional perspective on citation impact, which is intended to yield more specific information about the kind of impact received. Bornmann, Wray, and Haunschild (2020) introduced citation concept analysis (CCA) for capturing the importance and usefulness that certain concepts (explained in publications) have in subsequent research. In this paper, we apply the method by investigating the impact that various concepts introduced in Robert K. Merton’s book Social Theory and Social Structure have had. The book laid down a manifesto for sociological analysis in the immediate postwar period and retains a major impact 70 years later. We found that the most cited concepts are “self-fulfilling” and “role” (about 20% of the citation contexts are related to one of these concepts). The concept “self-fulfilling” seems to be especially important in computer science and psychology; for “role,” this is additionally the case in political science. These and further results of the study demonstrate the high explanatory power of the CCA method.
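
The counting step at the heart of a citation concept analysis can be sketched as follows; the citation contexts and search terms below are invented and only illustrate how shares of concept-related contexts (such as the roughly 20% reported above) are obtained.

# A minimal sketch of counting concept mentions in citation contexts (CCA-style).
from collections import Counter

def concept_shares(citation_contexts, concept_terms):
    counts = Counter()
    for context in citation_contexts:
        text = context.lower()
        for concept, terms in concept_terms.items():
            if any(term in text for term in terms):
                counts[concept] += 1
    total = len(citation_contexts)
    return {concept: n / total for concept, n in counts.items()}

contexts = [
    "Merton's notion of the self-fulfilling prophecy explains ...",
    "Role strain and role sets have been applied to ...",
]
print(concept_shares(contexts, {
    "self-fulfilling": ["self-fulfilling"],
    "role": ["role"],
}))  # {'self-fulfilling': 0.5, 'role': 0.5}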
Quantitative Science Studies (2020) 1 (1): 331–346.
Published: 01 February 2020
Abstract
Recently, Hirsch (2019a) proposed a new variant of the h-index called the hα-index. The hα-index was criticized by Leydesdorff, Bornmann, and Opthof (2019). One of their most important points is that the index reinforces the Matthew effect in science. The Matthew effect was defined by Merton (1968) as follows: “the Matthew effect consists in the accruing of greater increments of recognition for particular scientific contributions to scientists of considerable repute and the withholding of such recognition from scientists who have not yet made their mark” (p. 58). We follow up on the point about the Matthew effect in the current study by using a recently developed Stata command (h_index) and R package (hindex), which can be used to simulate h-index and hα-index applications in research evaluation. The user can investigate under which conditions hα reinforces the Matthew effect. The results of our study confirm what Leydesdorff et al. (2019) expected: The hα-index reinforces the Matthew effect. This effect can be intensified if strategic behavior of the publishing scientists and cumulative advantage effects are additionally considered in the simulation.
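
The two indicators being simulated can be stated compactly in code. The sketch below is not the Stata h_index command or the R hindex package mentioned in the abstract; it only illustrates, under the usual definitions, how the h-index and the hα-index (counting only h-core papers on which the scientist is the alpha author, i.e., the coauthor with the highest h-index) are computed.

# A minimal sketch of the h-index and the hα-index for one scientist.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def h_alpha_index(papers):
    """papers: list of (citations, is_alpha) tuples for one scientist."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h = sum(1 for rank, (c, _) in enumerate(ranked, start=1) if c >= rank)
    return sum(1 for c, is_alpha in ranked[:h] if is_alpha)  # alpha papers in the h-core

papers = [(25, True), (18, False), (12, True), (7, False), (3, True)]
print(h_index([c for c, _ in papers]))  # -> 4
print(h_alpha_index(papers))            # -> 2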
Quantitative Science Studies (2020) 1 (1): 171–182.
Published: 01 February 2020
Abstract
Fast-and-frugal heuristics are simple strategies that base decisions on only a few predictor variables. In doing so, heuristics may not only reduce complexity but also boost the accuracy, speed, and transparency of decisions. In this paper, bibliometrics-based decision trees (BBDTs) are introduced for research evaluation purposes. BBDTs visualize bibliometrics-based heuristics (BBHs), which are judgment strategies that rely solely on publication and citation data. The BBDT exemplar presented in this paper can be used as guidance for deciding in which situations simple indicators such as mean citation rates are reasonable and in which situations more elaborate indicators (i.e., (sub-)field-normalized indicators) should be applied.
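
A bibliometrics-based decision tree is, in essence, a short chain of yes/no questions leading to an indicator recommendation. The sketch below shows what such a tree could look like in code; the branching questions and recommendations are simplified illustrations, not the exemplar tree published in the paper.

# A minimal sketch of a bibliometrics-based decision tree (BBDT)-style heuristic.
def recommend_indicator(single_field, single_publication_year, skewed_distribution):
    if not single_field:
        return "use (sub-)field-normalized indicators"
    if not single_publication_year:
        return "normalize for publication year as well"
    if skewed_distribution:
        return "prefer percentile-based indicators over mean citation rates"
    return "mean citation rates are a reasonable, simple choice"

print(recommend_indicator(single_field=True, single_publication_year=True,
                          skewed_distribution=False))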