Cameron Neylon
Journal Articles
Publisher: Journals Gateway
Quantitative Science Studies (2020) 1 (3): 1109–1135.
Published: 01 August 2020
Abstract
Pressured by globalization and demand for public organizations to be accountable, efficient, and transparent, university rankings have become an important tool for assessing the quality of higher education institutions. It is therefore important to assess exactly what these rankings measure. Here, the three major global university rankings—the Academic Ranking of World Universities, the Times Higher Education ranking, and the Quacquarelli Symonds World University Rankings—are studied. After a description of the ranking methodologies, it is shown that university rankings are stable over time but that there is variation between the three rankings. Furthermore, using principal component analysis and exploratory factor analysis, we demonstrate that the variables used to construct the rankings primarily measure two underlying factors: a university’s reputation and its research performance. By correlating these factors and plotting regional aggregates of universities on the two factors, differences between the rankings are made visible. Lastly, we elaborate on how the results from these analyses can be viewed in light of often-voiced critiques of the ranking process. This indicates that the variables used by the rankings might not capture the concepts they claim to measure. The study provides evidence of the ambiguous nature of university rankings’ quantification of university performance.
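The factor extraction described in this abstract can be sketched with standard tools. The snippet below is a minimal illustration, not the authors' code: it assumes a hypothetical CSV of per-university indicator scores (file and column names are invented for the example) and fits a two-component PCA and exploratory factor analysis to see which indicators load on which latent factor.

```python
# Minimal sketch (not the authors' code): extracting two latent factors from
# ranking indicator variables. The input file and column names are illustrative.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, FactorAnalysis

df = pd.read_csv("ranking_indicators.csv")            # hypothetical per-university indicator scores
indicators = ["reputation_survey", "citations_per_staff",
              "papers_top_journals", "staff_student_ratio"]  # invented column names

X = StandardScaler().fit_transform(df[indicators])    # z-score each indicator

pca = PCA(n_components=2).fit(X)                       # principal component analysis
print("explained variance ratio:", pca.explained_variance_ratio_)

fa = FactorAnalysis(n_components=2).fit(X)             # exploratory factor analysis
loadings = pd.DataFrame(fa.components_.T, index=indicators,
                        columns=["factor_1", "factor_2"])
print(loadings)                                        # which indicators load on which factor
```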
Journal Articles
Chun-Kai (Karl) Huang, Cameron Neylon, Chloe Brookes-Kenworthy, Richard Hosking, Lucy Montgomery ...
Publisher: Journals Gateway
Quantitative Science Studies (2020) 1 (2): 445–478.
Published: 01 June 2020
Abstract
Universities are increasingly evaluated on the basis of their outputs. These are often converted to simple and contested rankings with substantial implications for recruitment, income, and perceived prestige. Such evaluation usually relies on a single data source to define the set of outputs for a university. However, few studies have explored differences across data sources and their implications for metrics and rankings at the institutional scale. We address this gap by performing detailed bibliographic comparisons between Web of Science (WoS), Scopus, and Microsoft Academic (MSA) at the institutional level and supplement this with a manual analysis of 15 universities. We further construct two simple rankings based on citation count and open access status. Our results show that there are significant differences across databases. These differences contribute to drastic changes in rank positions of universities, which are most prevalent for non-English-speaking universities and those outside the top positions in international university rankings. Overall, MSA has greater coverage than Scopus and WoS, but with less complete affiliation metadata. We suggest that robust evaluation measures need to consider the effect of choice of data sources and recommend an approach where data from multiple sources is integrated to provide a more robust data set.
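A simple citation-count ranking of the kind mentioned here can be sketched as follows. This is an illustrative assumption, not the study's pipeline: it assumes hypothetical per-source exports with one row per output–institution pair and columns named institution and citations, ranks institutions by total citations within each source, and surfaces the largest rank shifts across sources.

```python
# Minimal sketch (not the study's pipeline): per-source citation-count rankings
# and the rank shifts between sources. File and column names are illustrative.
import pandas as pd

sources = {"wos": "wos_records.csv", "scopus": "scopus_records.csv",
           "msa": "msa_records.csv"}                   # hypothetical per-source exports

ranks = {}
for name, path in sources.items():
    records = pd.read_csv(path)                        # one row per (output, institution)
    totals = records.groupby("institution")["citations"].sum()
    ranks[name] = totals.rank(ascending=False, method="min")

rank_table = pd.DataFrame(ranks)                       # aligned on institution
rank_table["max_shift"] = rank_table.max(axis=1) - rank_table.min(axis=1)
print(rank_table.sort_values("max_shift", ascending=False).head(10))
```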