1-5 of 5
Raf Guns
Journal Articles
Publisher: Journals Gateway
Quantitative Science Studies (2021) 2 (2): 588–615.
Published: 15 July 2021
Abstract
This study compares publisher ratings to the visibility and impact of individual books, based on a 2017 data set from three Nordic performance-based research funding systems (PRFS) (Denmark, Norway, and Finland). Although there are Journal Impact Factors (JIFs) for journals, there is no similar indicator for book publishers. National publisher lists are used instead to account for the general “quality” of books, leading to institutional rewards. But, just as the JIF is not recommended as a proxy for the “citedness” of a paper, a publisher rating is also not recommended as a proxy for the impact of an individual book. We introduce a small fish in a big pond versus big fish in a small pond metaphor, where a “fish” is a book and “the pond” represents its publishing house. We investigate how books fit on this metaphorical fish and pond continuum, using WorldCat holdings (visibility) and Google Scholar citations (impact), and test other variables to determine their predictive value with respect to these two indicators. Our statistics show that publisher levels do not have predictive value when other variables are held constant. This has implications for PRFS and book evaluations in general, as well as ongoing developments related to a newly proposed international publisher registry.
Quantitative Science Studies (2021) 2 (2): 438–453.
Published: 15 July 2021
Abstract
Open Science is an umbrella term that encompasses many recommendations for possible changes in research practices, management, and publishing with the objective of increasing transparency and accessibility. This has become an important science policy issue that all disciplines should consider. Many Open Science recommendations may be valuable for the further development of research and publishing, but not all are relevant to all fields. This opinion paper considers the aspects of Open Science that are most relevant for scientometricians, discussing how they can be usefully applied.
Quantitative Science Studies (2021) 2 (1): 65–88.
Published: 08 April 2021
Abstract
Despite the centrality of disciplinary classifications in bibliometric analyses, it is not well known how the choice of disciplinary classification influences bibliometric representations of research in the social sciences and humanities (SSH). This is especially crucial when using data from national databases. Therefore, we examine the differences in the disciplinary profile of an article along with the absolute and relative number of articles across disciplines using five disciplinary classifications for journals. We use data on journal articles (2006–2015) from the national bibliographic databases VABB-SHW in Flanders (Belgium) and Cristin in Norway. Our study is based on pairwise comparisons of the local classifications used in these databases, the Web of Science subject categories, the Science-Metrix, and the ERIH PLUS journal classifications. For comparability, all classifications are mapped to the OECD Fields of Research and Development classification. The findings show that the choice of disciplinary classification can lead to over- or underestimation of the absolute number of publications per discipline. In contrast, if the focus is on the relative numbers, the choice of classification has practically no influence. These findings facilitate an informed choice of a disciplinary classification for journals in SSH when using data from national databases.
Includes: Supplementary data
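The abstract's central finding, that the choice of classification shifts absolute publication counts per discipline while leaving relative shares largely intact, can be illustrated with a minimal sketch. All counts and discipline names below are hypothetical, invented purely to show the arithmetic; they are not taken from the VABB-SHW or Cristin data:

```python
# Hypothetical per-discipline article counts under two journal
# classifications, both mapped to the same OECD fields.
counts_a = {"Sociology": 120, "Economics": 80, "Law": 40}
counts_b = {"Sociology": 150, "Economics": 100, "Law": 50}

def shares(counts):
    """Relative share of each discipline in the total output."""
    total = sum(counts.values())
    return {d: n / total for d, n in counts.items()}

# Absolute counts per discipline differ between the two classifications
# (e.g. Sociology: 120 vs. 150), so absolute numbers are sensitive to
# the classification chosen...
diff = {d: counts_b[d] - counts_a[d] for d in counts_a}
print(diff)

# ...but the relative shares coincide, so share-based comparisons are not.
print(shares(counts_a))
print(shares(counts_b))
```

The same pattern drives the paper's practical advice: analyses that report shares per discipline are robust to the classification choice, whereas analyses that report raw counts are not.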
Quantitative Science Studies (2021) 2 (1): 89–110.
Published: 08 April 2021
Abstract
We compare two supervised machine learning algorithms—Multinomial Naïve Bayes and Gradient Boosting—to classify social science articles using textual data. The high level of granularity of the classification scheme used and the possibility that multiple categories are assigned to a document make this task challenging. To collect the training data, we query three discipline-specific thesauri to retrieve articles corresponding to specialties in the classification. The resulting data set consists of 113,909 records and covers 245 specialties, aggregated into 31 subdisciplines from three disciplines. Experts were consulted to validate the thesauri-based classification. The resulting multilabel data set is used to train the machine learning algorithms in different configurations. We deploy a multilabel classifier chaining model, allowing for an arbitrary number of categories to be assigned to each document. The best results are obtained with Gradient Boosting. The approach does not rely on citation data, so it can be applied in settings where such information is not available. We conclude that fine-grained text-based classification of social sciences publications at a subdisciplinary level is a hard task, for humans and machines alike. A combination of human expertise and machine learning is suggested as a way forward to improve the classification of social sciences documents.
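The combination described here, a classifier chain over Gradient Boosting for multilabel text classification, can be sketched with scikit-learn. The corpus, labels, and hyperparameters below are toy stand-ins, not the paper's actual data or configuration:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multioutput import ClassifierChain

# Toy corpus of article titles with two hypothetical specialty labels:
# column 0 = "sociology", column 1 = "economics". A row may carry
# zero, one, or both labels (multilabel setting).
texts = [
    "social networks and community structure",
    "labour markets and wage inequality",
    "social stratification and income inequality",
    "monetary policy and inflation",
    "family structure and social mobility",
    "trade, growth and development economics",
]
Y = np.array([
    [1, 0],
    [0, 1],
    [1, 1],
    [0, 1],
    [1, 0],
    [0, 1],
])

# Plain TF-IDF features over the titles.
X = TfidfVectorizer().fit_transform(texts)

# One binary Gradient Boosting classifier per label, chained: each link
# sees the previous labels' predictions as extra features, which lets
# the model assign an arbitrary number of categories per document and
# exploit label co-occurrence.
chain = ClassifierChain(
    GradientBoostingClassifier(n_estimators=50, random_state=0),
    random_state=0,
)
chain.fit(X, Y)

pred = chain.predict(X)
print(pred.shape)  # one 0/1 column per specialty
```

With real data, the chain would be trained on the thesauri-derived records and evaluated per subdiscipline; the toy example only shows the wiring.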
Quantitative Science Studies (2020) 1 (4): 1396–1428.
Published: 01 December 2020
Abstract
Open access (OA) has mostly been studied by relying on publication data from selective international databases, notably Web of Science (WoS) and Scopus. The aim of our study is to show that it is possible to achieve a national estimate of the number and share of OA based on institutional publication data providing comprehensive coverage of the peer-reviewed outputs across fields, publication types, and languages. Our data consist of 48,177 journal, conference, and book publications from 14 Finnish universities in 2016–2017, including information about OA status, as self-reported by researchers and validated by data-collection personnel through their Current Research Information System (CRIS). We investigate the WoS, Scopus, and DOI coverage, as well as the share of OA outputs between different fields, publication types, languages, OA mechanisms (gold, hybrid, and green), and OA information sources (DOAJ, Bielefeld list, and Sherpa/Romeo). We also estimate the role of the largest international commercial publishers compared to the not-for-profit Finnish national publishers of journals and books. We conclude that institutional data, integrated at the national and international levels, provides one of the building blocks of a large-scale data infrastructure needed for comprehensive assessment and monitoring of OA across countries, for example at the European level.
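The kind of share estimate the study describes, OA proportions broken down by mechanism from institutional records, reduces to simple counting. The records and field names below are illustrative only, not the CRIS data model:

```python
from collections import Counter

# Hypothetical institutional publication records; "oa" holds the OA
# mechanism, with None marking a closed-access output.
records = [
    {"type": "journal", "oa": "gold"},
    {"type": "journal", "oa": "hybrid"},
    {"type": "conference", "oa": None},
    {"type": "book", "oa": "green"},
    {"type": "journal", "oa": "gold"},
    {"type": "journal", "oa": None},
]

total = len(records)
by_mechanism = Counter(r["oa"] for r in records if r["oa"] is not None)

# Overall OA share, then the share contributed by each mechanism.
oa_share = sum(by_mechanism.values()) / total
print(f"OA share: {oa_share:.0%}")
for mech, n in sorted(by_mechanism.items()):
    print(f"{mech}: {n / total:.0%} of all outputs")
```

At national scale the same aggregation would be run per field, publication type, and language, which is what makes comprehensive institutional coverage valuable compared to WoS- or Scopus-only samples.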