Alexander Tekles
1-4 of 4
Journal Articles
Quantitative Science Studies (2021) 2 (4): 1246–1270.
Published: 01 December 2021
Abstract
Controlling for confounding factors is one of the central aspects of quantitative research. Although methods such as linear regression models are common, their results can be misleading under certain conditions. We demonstrate how statistical matching can be used as an alternative that enables the inspection of covariate balance after matching. This contribution serves as an empirical demonstration of matching in bibliometrics and discusses its advantages and potential pitfalls. We propose matching as an easy-to-use approach in bibliometrics to estimate effects and remove bias. To exemplify matching, we use data on papers published in Physical Review E and a selection classified as milestone papers. We analyze whether milestone papers score higher than nonmilestone papers on a proposed class of indicators for measuring disruptiveness. We consider the disruption indicators DI1, DI5, DI1n, DI5n, and DEP and test which of them performs best, based on the assumption that milestone papers should have higher disruption indicator values than nonmilestone papers. Four matching algorithms (propensity score matching (PSM), coarsened exact matching (CEM), entropy balancing (EB), and inverse probability of treatment weighting (IPTW)) are compared. We find that CEM and EB perform best with regard to covariate balance, and that DI5 and DEP perform well for evaluating the disruptiveness of published papers.
Includes: Supplementary data
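To make the workflow concrete, here is a minimal sketch of 1:1 nearest-neighbor propensity score matching followed by a standardized-mean-difference balance check, in the spirit of the PSM variant compared in the abstract above. The covariates, simulated data, and function names are illustrative assumptions; they do not reproduce the article's Physical Review E data or its exact estimators.

    # Illustrative sketch: 1:1 nearest-neighbor propensity score matching with a
    # standardized-mean-difference (SMD) balance check on simulated toy data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def smd(x_t, x_c):
        """Standardized mean difference for one covariate."""
        pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
        return (x_t.mean() - x_c.mean()) / pooled_sd

    def match_nearest(ps, treated):
        """Greedy 1:1 matching of each treated unit to the closest control on the propensity score."""
        t_idx = np.where(treated == 1)[0]
        c_idx = list(np.where(treated == 0)[0])
        pairs = []
        for i in t_idx:
            j = min(c_idx, key=lambda k: abs(ps[k] - ps[i]))
            pairs.append((i, j))
            c_idx.remove(j)          # match without replacement
        return pairs

    # X: covariates (placeholders, e.g., publication year, number of references, team size)
    # treated: 1 = "milestone" paper, 0 = nonmilestone paper (simulated here)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    treated = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 1))))

    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    pairs = match_nearest(ps, treated)
    t_rows = np.array([i for i, _ in pairs])
    c_rows = np.array([j for _, j in pairs])

    # Balance check: SMDs should shrink after matching.
    for k in range(X.shape[1]):
        before = smd(X[treated == 1, k], X[treated == 0, k])
        after = smd(X[t_rows, k], X[c_rows, k])
        print(f"covariate {k}: SMD before={before:.2f}, after={after:.2f}")

The ability to inspect balance diagnostics like these after matching is the feature the abstract highlights as an advantage over reporting regression coefficients alone.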
Journal Articles
Quantitative Science Studies (2020) 1 (4): 1510–1528.
Published: 01 December 2020
Abstract
Adequately disambiguating author names in bibliometric databases is a precondition for conducting reliable analyses at the author level. In bibliometric studies that include many researchers, it is not feasible to disambiguate every researcher manually. Several approaches have been proposed for author name disambiguation, but they have not yet been compared under controlled conditions. In this study, we compare a set of unsupervised disambiguation approaches. Unsupervised approaches specify a model to assess the similarity of author mentions a priori instead of training a model with labeled data. To evaluate the approaches, we applied them to a set of author mentions annotated with a ResearcherID, an author identifier maintained by the researchers themselves. Apart from comparing overall performance, we take a more detailed look at the role of the parametrization of the approaches and analyze how the results depend on the complexity of the disambiguation task. Furthermore, we examine how differences in the sets of metadata considered by the approaches affect the disambiguation results. In the context of this study, the approach proposed by Caron and van Eck (2014) produced the best results.
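For readers unfamiliar with unsupervised disambiguation, the sketch below illustrates the general recipe such approaches share: block author mentions by name, score mention pairs on shared metadata (coauthors, affiliation terms), and merge pairs whose score exceeds a threshold. The weights, threshold, and toy mentions are assumptions for illustration only; they do not reproduce the scoring rules of Caron and van Eck (2014) or the other approaches compared in the study.

    # Generic sketch of an unsupervised author-name-disambiguation pipeline:
    # blocking, pairwise metadata similarity, and threshold-based merging.
    from collections import defaultdict
    from itertools import combinations

    mentions = [
        # (mention_id, last name, first initial, coauthor names, affiliation terms) -- toy data
        (0, "tekles", "a", {"bornmann"}, {"lmu", "munich"}),
        (1, "tekles", "a", {"bornmann", "bittmann"}, {"munich"}),
        (2, "tekles", "a", {"smith"}, {"berlin"}),
    ]

    def similarity(m1, m2):
        """Weighted count of shared coauthors and affiliation terms (illustrative weights)."""
        _, _, _, co1, af1 = m1
        _, _, _, co2, af2 = m2
        return 2 * len(co1 & co2) + 1 * len(af1 & af2)

    # Blocking: only mentions with the same last name and first initial are compared.
    blocks = defaultdict(list)
    for m in mentions:
        blocks[(m[1], m[2])].append(m)

    THRESHOLD = 2
    parent = {m[0]: m[0] for m in mentions}        # union-find over mention ids

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for block in blocks.values():
        for m1, m2 in combinations(block, 2):
            if similarity(m1, m2) >= THRESHOLD:
                parent[find(m2[0])] = find(m1[0])  # merge the two clusters

    clusters = defaultdict(list)
    for m in mentions:
        clusters[find(m[0])].append(m[0])
    print(list(clusters.values()))                 # e.g., [[0, 1], [2]]

The concrete approaches compared in the article differ mainly in which metadata enter the similarity score and how the decision rule is parametrized, which is why the study looks at parametrization and metadata effects in detail.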
Journal Articles
Quantitative Science Studies (2020) 1 (3): 1242–1259.
Published: 01 August 2020
Abstract
Recently, Wu, Wang, and Evans (2019) proposed a new family of indicators that measure whether a scientific publication is disruptive to a field or tradition of research. Such disruptive influence is indicated by papers that cite a focal paper but not its cited references. In this study, we are interested in the question of convergent validity. We used external criteria of newness to examine convergent validity: In the postpublication peer review system of F1000Prime, experts assess whether the research reported in a paper fulfills these criteria (e.g., reports new findings). This study is based on 120,179 papers from F1000Prime published between 2000 and 2016. In the first part of the study we discuss the indicators. Based on the insights from this discussion, we propose alternative variants of disruption indicators. In the second part, we investigate the convergent validity of the indicators and the (possibly) improved variants. Although the results of a factor analysis show that the different variants measure similar dimensions, the results of regression analyses reveal that one variant (DI5) performs slightly better than the others.
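As background, the original disruption indicator of Wu, Wang, and Evans (2019) has the form (n_F − n_B)/(n_F + n_B + n_R), where n_F counts papers citing the focal paper but none of its references, n_B counts papers citing both the focal paper and its references, and n_R counts papers citing the references but not the focal paper. The sketch below computes this from a toy citation network; the min_shared_refs parameter only gestures at threshold-based variants such as DI5, whose exact definitions are given in the article, and all identifiers are illustrative.

    # Minimal sketch of a disruption indicator computed from a citation network.
    # DI1 follows the form (n_F - n_B) / (n_F + n_B + n_R); the min_shared_refs
    # parameter is a simplified stand-in for threshold variants such as DI5.

    def disruption(focal, refs_of_focal, citations, min_shared_refs=1):
        """citations: dict mapping each paper id to the set of papers it cites."""
        refs = set(refs_of_focal)
        n_f = n_b = n_r = 0
        for paper, cited in citations.items():
            if paper == focal:
                continue
            cites_focal = focal in cited
            shared = len(cited & refs)
            if cites_focal and shared >= min_shared_refs:
                n_b += 1            # consolidating: cites focal and its intellectual ancestors
            elif cites_focal:
                n_f += 1            # disrupting: cites focal but ignores its references
            elif shared > 0:
                n_r += 1            # bypasses focal, cites its references directly
        total = n_f + n_b + n_r
        return (n_f - n_b) / total if total else 0.0

    # Toy citation network: F is the focal paper, citing R1 and R2.
    citations = {
        "F": {"R1", "R2"},
        "A": {"F"},             # cites focal only        -> n_F
        "B": {"F", "R1"},       # cites focal and a ref   -> n_B (for min_shared_refs=1)
        "C": {"R2"},            # cites a ref, not focal  -> n_R
    }
    print(disruption("F", {"R1", "R2"}, citations))   # DI1 = (1 - 1) / 3 = 0.0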
Journal Articles
Quantitative Science Studies (2020) 1 (1): 331–346.
Published: 01 February 2020
Abstract
Recently, Hirsch (2019a) proposed a new variant of the h-index called the hα-index. The hα-index was criticized by Leydesdorff, Bornmann, and Opthof (2019). One of their most important points is that the index reinforces the Matthew effect in science. The Matthew effect was defined by Merton (1968) as follows: “the Matthew effect consists in the accruing of greater increments of recognition for particular scientific contributions to scientists of considerable repute and the withholding of such recognition from scientists who have not yet made their mark” (p. 58). We follow up on the point about the Matthew effect in the current study by using a recently developed Stata command (h_index) and R package (hindex), which can be used to simulate h-index and hα-index applications in research evaluation. The user can investigate under which conditions hα reinforces the Matthew effect. The results of our study confirm what Leydesdorff et al. (2019) expected: The hα-index reinforces the Matthew effect. This effect can be intensified if strategic behavior of the publishing scientists and cumulative advantage effects are additionally considered in the simulation.
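To illustrate the quantities being simulated, the sketch below computes an h-index and, under a simplified reading of Hirsch (2019a), an hα-index that counts the h-core papers on which the focal scientist is the α-author (the coauthor with the highest h-index). It is plain Python with toy data and is not the simulation logic of the Stata command h_index or the R package hindex mentioned in the abstract.

    # Toy sketch of the h-index and a simplified h-alpha index.

    def h_index(citation_counts):
        """Largest h such that at least h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

    def h_alpha_index(papers, own_h):
        """papers: list of (citations, [h-indexes of coauthors]) for the focal scientist."""
        h = h_index([c for c, _ in papers])
        h_core = sorted(papers, key=lambda p: p[0], reverse=True)[:h]
        # Count h-core papers where the focal scientist's h-index is at least as
        # high as every coauthor's, i.e., where they are the alpha-author.
        return sum(1 for _, coauthor_hs in h_core
                   if all(own_h >= ch for ch in coauthor_hs))

    # Toy example: four papers with citation counts and coauthor h-indexes.
    papers = [(10, [3, 5]), (7, [12]), (4, [2]), (1, [1])]
    own_h = 6
    print(h_index([c for c, _ in papers]))   # 3 (three papers with at least 3 citations)
    print(h_alpha_index(papers, own_h))      # 2 (alpha-author on 2 of the 3 h-core papers)

The Matthew-effect concern follows from this construction: scientists who already have high h-indexes are more often the α-author on their papers, so hα tends to channel additional recognition to them.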