Highly prestigious international academic awards and their impact on university rankings

This study uses the checklist method, survey studies, and lists of Highly Cited Researchers to identify 100 highly prestigious international academic awards. The study then examines the impact of using these awards on the Academic Ranking of World Universities (the Shanghai Ranking), the QS World University Rankings, and the Times Higher Education World University Rankings. Results show that awards considerably change the rankings and scores of top universities, especially those that win a large number of awards and those that win few or no awards. The rankings of all other universities, which hold relatively similar numbers of awards, remain essentially unchanged. If given 20% weight, as was the case in this study, awards help ranking systems set universities further apart from one another, making it easier for users to detect differences in levels of performance. Adding awards to ranking systems benefits United States universities the most because they won 58% of the 1,451 awards given in 2010–2019. Developers of ranking systems should consider adding awards as a variable in assessing the performance of universities. Users of university rankings should pay attention to both ranking positions and scores.


INTRODUCTION
University rankings have become important in higher education worldwide (Hägg & Wedlin, 2013; Rauhvargers, 2013), as evidenced by their increasing number and the increasing number of papers published annually about them. Before 2010, there were five international university ranking systems; today, there are 17 (https://ireg-observatory.org/en/). In 2009, researchers published fewer than 20 journal articles on the topic; in 2019, they published over 100, according to the Scopus database. Universities participate in rankings and pursue higher ranks to obtain greater visibility, attract higher quality students and faculty, and secure more resources from stakeholders (Hazelkorn, 2015; Hazelkorn & Gibson, 2017; Hou & Jacob, 2017).
University rankings claim to provide valid and useful information for determining academic and research excellence (Moed, 2017). Administrators rely on them as indicators of improvement over time, as methods to determine institutional priorities, and as benchmarking tools against peer institutions. Faculty, staff, and students and their parents use university rankings as tools to help them decide which institutions to apply to for employment or higher education. Rankings also boost faculty professional reputation. Governments and funding agencies use university rankings for information about the performance of their higher education institutions or the ones in which they have invested resources. Media outlets utilize them to create commercial opportunities (Hägg & Wedlin, 2013; Hazelkorn, 2015). Universities constantly strive to become world class and aim to improve their rankings. These rankings are thus perceived by many at higher education institutions as ultimate tools for assessing academic and research performance. According to Hazelkorn (2015), Moed (2017), and Rauhvargers (2013), university ranking systems have made enormous progress in quality during the past decade. Their systems are currently much more informative and user friendly than they were some 10 years ago. Yet, more work is needed to improve them. There is a large body of literature on the role and nature of university rankings. Notable reviews of this literature can be found in Hazelkorn (2015), Johnes (2018), Moed (2017), Olcay and Bulu (2017), and Soh (2017).
Developers of ranking systems use a variety of metrics for assessing and comparing the academic and research performance of universities, including expert opinion, publication and citation metrics, intellectual property metrics (e.g., patents), research and development income and expenditures, student-faculty ratios, and international outlook (e.g., percentage of foreign faculty and students; Vernon, Balas, & Momani, 2018). Highly prestigious honors, awards, prizes, and medals, which play major roles at universities (Ma & Uzzi, 2018), are rarely considered in university rankings. Of the 12 international university rankings examined by Vernon et al. (2018), only two included prizes in their criteria: the Academic Ranking of World Universities (the Shanghai Ranking) and the Center for World University Rankings. The recently developed University Three Missions Moscow International University Rankings (MosIUR), which was first published in 2017, became the third university ranking system to use awards as one of its criteria. In this study, we use the terms awards, prizes, honors, and medals interchangeably.
It is unclear why so few university ranking systems include awards in their analyses or among their performance indicators. A contributing factor, however, could be the lack of a standard list of, or method for identifying, prestigious awards. The Shanghai Ranking, for example, uses only the Nobel Prize and the Fields Medal as measures of the quality of faculty and education (with 30% of the total ranking score). This decision, however, raises doubts about the reliability of the rankings, because few individuals and institutions worldwide win these two prestigious awards (Dobrota & Dobrota, 2016; Hou & Jacob, 2017). The Center for World University Rankings (CWUR) bases 35% of its total ranking score on awards. CWUR uses 30 awards as a measure of universities' education and faculty quality without explaining how and why it selected these awards over others. MosIUR assigns 6% of its total university score to prizes, using the IREG List of 99 International Academic Awards, which is based on the study by Zheng and Liu (2015). The IREG list, however, misses 36 of the highly prestigious international awards identified in this study, includes 20 awards that none of the sources or methods used in this study has classified as highly prestigious, and includes 15 awards given from 2005 to 2019 exclusively to individuals affiliated with institutions located in a single country, a fact that in our opinion disqualifies these awards as international.
Awards identify and confirm distinctive research, advance scientific discoveries, and confer credibility to persons, ideas, and disciplines (Ma & Uzzi, 2018). Awards are also among the highest forms of recognition researchers accord one another (Frey & Neckermann, 2009). Moreover, receiving a major award provides much greater visibility within the scientific community and beyond, and measures research quality and contribution to society better than citations can (Seglen, 1992). In short, awards serve as important and easily read signals of academic and research excellence (Gallus & Frey, 2017).
The increasing number of awards worldwide and their merit in research assessment and funding decisions necessitate a standard list of the most prominent international academic awards (Jiang & Liu, 2018; Ma & Uzzi, 2018). Such a list would be instrumental in identifying, characterizing, and differentiating the academic and research excellence of authors, centers, institutes, schools, universities, and countries. This study describes how we created such a list. We then use the list to answer the following research question: To what extent does the use of highly prestigious international academic awards affect university rankings?
Answering this question may encourage the producers of rankings to consider awards as an indicator to generate more accurate assessments and comparisons of universities' academic and research performance. Answering this question may also lead to giving more weight to awards within the academic community, increasing the number and range of highly prestigious awards, and encouraging more high-quality academic and research work worldwide.

List of the Most Prestigious International Academic Awards
To create a list of the most prestigious international academic awards, we relied on three methods. We implemented a multimethod approach to alleviate the limitations of each method and to develop an accurate and comprehensive list of such awards. We considered an award international if it was given to individuals affiliated with institutions from more than one country during the most recent 10 years (for annual awards) or during the most recent 20 years (for awards given once every 2 or more years). We did this to avoid skewing the results toward one country's institutions over others. Examples of excluded prestigious awards include the MRS Medal Award (engineering), the NAS Award in Chemical Sciences, the Priestley Medal (chemistry), and the Welch Award in Chemistry, all annual awards given exclusively to individuals affiliated with institutions located in the United States.
2.1.1. Method 1: Assessing the prestige of awards through the tiered-checklist method

The tiered-checklist method is popularly used in libraries for building must-have collections (Dennison, 2000; Lundin, 1989), and we utilize it here to identify highly prestigious academic awards. The method assumes that domain experts can produce reliable lists of highly prestigious awards and that the more prestigious awards are those included on multiple authoritative lists. After extensive web and database searching (using such search terms as "highly prestigious awards" and "most important academic prizes") and examining dozens of documents that resulted from these searches, we found seven notable lists:

1. List of "highly prestigious" awards (n = 191);
2. List of awards at http://science.gc.ca/eic/site/063.nsf/eng/h_9B434E5F.html;
3. Awards from the Roster of Distinguished Awards classified as "most notable," "gold standard," "highly esteemed," "mega prizes," "challenge prizes," and "prototype awards" (http://www.icda.org/);
4. List of 63 awards in Wikipedia classified as "Prizes known as the Nobel of a field" (https://en.wikipedia.org/wiki/List_of_prizes_known_as_the_Nobel_of_a_field);
5. List of 30 prizes annually used by the CWUR;
6. List of 26 awards from the Shanghai Ranking (2019) considered "top" by 454 professors at 84 institutions from 15 different countries (http://www.shanghairanking.com/subject-survey/awards.html); and
7. List of 20 medical research awards, which Naylor and Bell (2015) call "the Himalayas of medical research excellence." According to the authors, four features set these pinnacle awards apart: they are merit based, open to scientists worldwide, long standing, and subject to almost no constraints other than the transformative influence of the work of the awardees on some aspect of human biology or disease.
For this study, we considered awards highly prestigious if mentioned on three or more of these seven lists. We decided on a minimum of three lists to alleviate the weaknesses associated with the checklist method, such as the use of an arbitrary or subjective selection method by the selector and the limited size and obsolescence of a list (Lundin, 1989). We decided on a minimum of three lists also to increase the level of consensus among experts about which awards are highly prestigious. This method resulted in identifying 47 awards for the current study.

2.1.2. Method 2: Assessing the prestige of awards through survey studies

We searched for survey studies in which domain experts rated the relative prestige of academic awards. We found two such studies.
1. Zheng and Liu (2015). They surveyed the 2,567 recipients of 207 awards over the period 1990-2013. They asked participants (n = 391) to evaluate quantitatively (on a five-point Likert scale) the relative prestige of the awards they were familiar with against the Nobel Prize as a benchmark with a reputation score of 1.00.
2. Jiang and Liu (2018). They surveyed 2,228 chairpersons and deans of units in eight social sciences fields from 349 top-ranked universities as well as 563 highly cited social science researchers for the years 2001, 2014, 2015, 2016, and 2017. They asked participants (n = 536) to evaluate quantitatively (on a five-point Likert scale) the relative prestige of 180 preselected awards against the Nobel Prize as the benchmark award with a reputation score of 1.00.
The response rate in both studies was relatively low; however, the results are important because of the large number of respondents and their notable academic, research, and scientific status.
From these two studies, we included all international awards rated above average (i.e., awards that had a score of 0.5 or higher) and listed in one of the abovementioned seven lists. We used the latter criterion because Zheng & Liu and Jiang & Liu sought the opinion of experts from 1990 and 2001 onward, respectively; we wanted to ensure that we did not include awards that may have declined in prestige. This survey-based method and the condition we imposed resulted in identifying 56 awards for the current study: 55 from Zheng & Liu and four from Jiang & Liu, with three overlapping awards.
2.1.3. Method 3: Assessing the prestige of awards based on the ratio of the award recipients rated as highly cited researchers in their respective fields

We considered highly prestigious those awards that had over 50% of their recipients classified as highly cited researchers (HCRs) by Clarivate Analytics. Clarivate Analytics considers researchers on the HCR lists to be the most influential researchers in the world. The HCR lists recognize researchers who produced multiple papers ranking in the top 1% by citations for their field and year of publication; these papers demonstrate significant research influence among peers. The papers surveyed include those published and cited during the 11 years before a list was published. One limitation of this method is that the HCR lists do not cover humanities fields. This method identified 56 awards.
After removing the overlap resulting from using the aforementioned three methods (or the 10 sources), the total number of awards that we considered to be academic, international, and highly prestigious was 100. The tiered-checklist, survey studies, and HCR methods uniquely identified 10, 13, and 27 awards, respectively. Forty-one (or 41%) of the 100 awards were common to two or more methods, and nine were common to all three methods (see Table 1 for the list of all 100 most prestigious international academic awards). We needed to compile this comprehensive list of highly prestigious international academic awards to enable an accurate estimate of different levels of academic and research quality. After all, few universities can gather sufficient "Nobel-credits" if only that prize is considered (Charlton, 2007).

Award Prestige Rating
For the current study, we adopted the award ratings generated in the studies by Jiang and Liu (2018) and Zheng and Liu (2015). As mentioned earlier, these two studies surveyed hundreds of domain experts, asking them to evaluate the relative reputations of awards they were familiar with as compared to the Nobel Prizes. For each award on a questionnaire, respondents chose from a five-point Likert scale: Negligible = 0, Low = 0.25, Average = 0.50, High = 0.75, and Highest = 1. Respectively, the five levels of reputation represent whether a respondent considers a given award "not important," "somewhat important," "important," "very important," or of "the same importance" as the Nobel Prize, which as the benchmark award has a reputation at the "highest" level. Of the 100 awards identified in the current study, 81 had ratings provided in the Jiang and Liu (2018) and Zheng and Liu (2015) studies, which we used here. The average prestige rating of the awards found exclusively via the checklist method or the HCR method was 0.51, and that of the awards found via both of these methods was 0.64. We used these two average scores for the 19 awards not covered by Jiang and Liu (2018) and Zheng and Liu (2015). The average prestige rating of the awards found via all three methods was 0.74. We classified the 100 awards into 11 major subject categories, using Jiang and Liu (2018) and Zheng and Liu (2015) as a model. We verified the accuracy of our classification by examining information on the awards' websites. Table 2 shows the list of the 11 subject categories, the number of award recipients, and the average prestige score of the awards in each subject category.
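The rating scheme above can be sketched in code. This is a minimal illustration, not the authors' implementation; the function and dictionary names are our own, and the fallback values are the averages reported in the text:

```python
# Likert scale used by Jiang and Liu (2018) and Zheng and Liu (2015).
LIKERT = {"Negligible": 0.0, "Low": 0.25, "Average": 0.50,
          "High": 0.75, "Highest": 1.00}

# Average ratings reported in the text, used as fallbacks for the 19
# awards the two surveys did not cover.
FALLBACK = {
    ("checklist",): 0.51,        # found via the checklist method only
    ("hcr",): 0.51,              # found via the HCR method only
    ("checklist", "hcr"): 0.64,  # found via both non-survey methods
}

def prestige(surveyed_rating, methods):
    """Return the survey-based rating if one exists; otherwise use the
    fallback average for the non-survey methods that found the award."""
    if surveyed_rating is not None:
        return surveyed_rating
    return FALLBACK[tuple(sorted(methods))]
```

For example, an award rated "High" in the surveys keeps its 0.75 rating, while an unsurveyed award found by both the checklist and HCR methods receives 0.64.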

Calculating Universities' Scores on Awards
The 100 awards identified in this study serve as the basis of the score on awards: the sum of the prestige scores received from winning one or more of these awards over the most recent 10 calendar years. Similar to how the majority of university ranking systems calculate scores for all variables, for awards we apply the method of normalizing by the maximum. We first calculate the total score of the top performer and set it to 100.0. Afterward, we adjust the scores of all other entities (e.g., researchers, universities) in the rankings relative to the top performer. For example, if the top performer had a sum of 60.0 points from winning several awards and the second top performer had 40.0 points, their scores are adjusted to 100.0 and 66.7, respectively. An institution with 30.0 points will have its score adjusted to 50.0. And so on. The higher the award score of a university, the more prominent or distinguished it is in comparison to others as far as awards are concerned. For details about university ranking scores and what they mean, see Moed (2017) and Rauhvargers (2013).
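The normalization step described above can be sketched as follows (a generic max-normalization with hypothetical point totals; `normalize_by_max` is our own illustrative name, not the ranking systems' code):

```python
def normalize_by_max(points):
    """Scale raw award-point totals so that the top performer gets 100.0.

    `points` maps an entity (e.g., a university) to its summed prestige
    scores; every value is rescaled relative to the maximum.
    """
    top = max(points.values())
    return {name: round(100.0 * p / top, 1) for name, p in points.items()}

# Hypothetical totals matching the example in the text.
raw = {"Univ A": 60.0, "Univ B": 40.0, "Univ C": 30.0}
print(normalize_by_max(raw))  # {'Univ A': 100.0, 'Univ B': 66.7, 'Univ C': 50.0}
```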
We should emphasize here that almost all ranking systems provide numerical scores in addition to ranking positions for the institutions they cover. Scores are important because they help differentiate universities' performance more clearly than rankings alone. In the Shanghai Ranking, for example, Harvard ranks first worldwide with a score of 100.0. The next nine
ranked universities have significantly lower scores, ranging from 75.1 to 55.1. This significant difference in scores between ranked institutions necessitates that we pay close attention to scores received in university rankings, because ranking positions can be much less informative. Scores in university rankings are also important because they can help classify universities into appropriate tiers more precisely.

Affiliation Information of Award Recipients
We gave credit to each university based on the researcher's primary affiliation(s) at the time the award was received (and up to 5 years prior). We also gave credit to the researcher's new primary affiliation if it changed after receiving the award. We verified the affiliation information via the awards' websites, bibliographic searches in the Scopus database covering the period 2005-2019, Wikipedia profiles, and Google searches. Only 6.5% of the award recipients listed multiple primary academic affiliations and 2.0% had changed their primary academic affiliation during this period. We gave full credit to all primary affiliations of the recipients. We collected all data in November 2019.
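The crediting rule described above, full credit to every primary affiliation, can be sketched as follows (an illustrative implementation with hypothetical data, not the authors' code):

```python
def credit_awards(recipients):
    """Tally full credit for each of a recipient's primary affiliations.

    Each recipient record holds the prestige score of the award won and
    all credited primary affiliations (at the time of the award, up to
    5 years prior, and any later primary affiliation). Every listed
    affiliation receives the full prestige score.
    """
    totals = {}
    for rec in recipients:
        for univ in rec["affiliations"]:
            totals[univ] = totals.get(univ, 0.0) + rec["prestige"]
    return totals

# Hypothetical recipients; the second lists two primary affiliations.
recs = [
    {"prestige": 1.00, "affiliations": ["Univ A"]},
    {"prestige": 0.75, "affiliations": ["Univ A", "Univ B"]},
]
print(credit_awards(recs))  # {'Univ A': 1.75, 'Univ B': 0.75}
```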

RESULTS AND DISCUSSION
Our data show that in 2010–2019, the 100 prestigious awards went to 1,067 individuals in 46 countries for 1,451 awards. Ma and Uzzi (2018) demonstrate that, despite an explosive proliferation of diverse prizes over time and across the globe, a relatively small scientific elite wins most prizes: they found that 64.1% of the 10,455 recipients of the 3,062 prizes they examined had won two or more prizes. This study, however, shows that over the past 10 years, only 15.6% of the 1,067 award recipients have won two or more highly prestigious international awards and only 5.5%, 2.5%, and 1.2% have won three, four, and five or more awards, respectively. We treated the award winners without institutional affiliations (20 out of 1,067) as independent authors or engineers.
Of the 1,451 awards examined in this study, 1,184 (or 81.8%) went to 844 individuals from 257 higher education institutions in 35 countries. Nearly 58% of the awards went to researchers affiliated with institutions in the United States. The top 10% (i.e., the 26 most frequent institutional recipients) account for 40.5% of all awards (and 70.0% of the Nobel Prizes). Approximately 47% of the 844 recipients were, at some point, classified by Clarivate Analytics as HCRs (2001, 2014, 2015, 2016, 2017, 2018, and 2019 editions). Table 3 shows the country distribution of these awards, Table 4 shows the rankings of the 40 universities with the highest scores on awards, and Table 5 shows the rankings of the top 40 individual award recipients. Note that 26 of the top 40 universities are in the United States, 10 are in Europe, and the remaining four are in Japan (2), Australia (1), and Canada (1).
We used the QS World University Rankings (QS), the Shanghai Ranking, and the Times Higher Education World University Rankings (THE) as case studies to examine the benefits or the impact of the use of a comprehensive set of highly prestigious international academic awards on university rankings. Each of these three ranking systems uses several indicators that are assigned specific weights out of 100. For example, the QS ranking gives 40% weight to academic reputation, 20% to faculty-student ratio, 20% to citations per faculty, 10% to employer reputation, 5% to international faculty ratio, and 5% to international student ratio. Here, we allocate 80% of the total score to the indicators used by each ranking system and 20% to awards. We then compare the difference between the original rankings and the rankings with awards for each system. We decided on 20% for awards using the Shanghai Ranking as a model.
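The 80/20 blending described above amounts to a weighted average of a system's original overall score and the normalized award score (illustrative only; `blended_score` and the sample numbers are our own):

```python
def blended_score(original_score, award_score, award_weight=0.20):
    """Combine a ranking system's original overall score (0-100) with the
    normalized award score (0-100), giving awards `award_weight` of the
    total and the original indicators the remaining weight."""
    return round((1 - award_weight) * original_score
                 + award_weight * award_score, 1)

# Hypothetical university: original overall score 90.0, award score 40.0.
print(blended_score(90.0, 40.0))  # 80.0
```

A university with few awards thus sees its blended score pulled down even if its other indicators are strong, which is the mechanism behind the ranking shifts discussed in the text.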
A two-tailed test at the 0.01 level shows that allocating 20% weight to highly prestigious international awards does not make any significant difference to the overall position of the top 100 universities in the QS and THE rankings or the top 50 universities in the Shanghai Ranking, with Spearman rank-order correlation coefficients of .988, .988, and .895, respectively. We found similar results when we examined the top 50 or the top 200 ranked institutions in the QS and THE rankings. The results, however, show significant differences in the positions of universities ranking 51-100 in the Shanghai Ranking, with a Spearman rank-order correlation coefficient of .358. These differences are largely a result of the fact that 15 of these 50 universities ranking 51-100 did not win any award and 10 universities won only one award, whereas all but four of the top 50 universities won more than one award in the past 10 years. Figure 1 shows the variations in correlations between the ranking outcomes with and without the awards among the top 100 ranked universities in all three ranking systems. It is important to note that the ranking positions of the top 50 universities change by an average of 3.6, 3.1, and 4.9 places in the QS, THE, and Shanghai rankings, respectively, whereas the ranking positions of the second top 50 universities change by an average of 3.0, 4.1, and 22.4 places, in the same order. Overall, the results show that adding awards as a variable affects university rankings in three important ways.
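The rank-order comparisons above use Spearman's correlation coefficient, which for rankings without ties can be computed directly from the squared differences in ranking positions. A minimal sketch with hypothetical rankings (not the study's data):

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank-order correlation between two rankings of the same
    items, assuming no tied ranks: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)),
    where d is the difference in an item's two ranking positions."""
    assert set(rank_a) == set(rank_b), "rankings must cover the same items"
    n = len(rank_a)
    d2 = sum((rank_a[k] - rank_b[k]) ** 2 for k in rank_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical before/after ranking positions for five universities.
before = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
after = {"A": 2, "B": 1, "C": 3, "D": 5, "E": 4}
print(round(spearman_rho(before, after), 3))  # 0.8
```

A coefficient near 1 (as in the .988 cases above) indicates that adding awards barely reorders the institutions, whereas a low value (such as .358) indicates substantial reshuffling.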
First, in the QS and THE rankings, awards considerably change the rankings of many of the top universities, both those with large numbers of highly prestigious international academic awards and those with few or no awards. An analysis of the top 25 ranked universities in the QS and THE rankings (30 universities together) shows that 14 of them have their rankings change by five or more places with the addition of awards as a variable, nine of them losing ground (see Table 6). Other significant cases of improvement include those of Kyoto University of Japan, which improves from 65th to 48th as a result of winning 18 highly prestigious international awards (including two Nobel Prizes), and Rutgers, which improves by 40 places (from 168th to 128th) as a result of winning nine highly prestigious awards (see Table 6). The University of Oxford drops from 1st to 8th in the THE, largely because it has won far fewer awards than the seven universities ranking higher, as shown in Table 4. Whether these are important changes in rankings is left for university administrators and the developers and users of ranking systems to decide. In the Shanghai Ranking, only three of the top 25 universities have their rankings change by five or more places: Johns Hopkins University (+5), University of Chicago (−8), and University of Toronto (+5). This result was not surprising, because the universities with the largest number of awards are also the universities with the largest number of Nobel Prizes and Fields Medals, the two awards used by the Shanghai Ranking (see Table 4).
Second, if given substantial weight (20% in this case), awards help ranking systems set universities further apart from each other, making it easier to detect differences in the levels of performance. Without awards, the scores attained by universities descend marginally from one institution to the next. With the addition of awards, the scores decrease sharply among the top universities, especially in the case of the QS and THE rankings. For example, the differences in scores between the top-ranked and the 25th ranked institutions in the QS and the THE systems are 16.2 and 14.2, respectively. With the addition of awards as a variable, the difference markedly increases to approximately 30 points in both cases. In short, adding awards to these two ranking systems results in giving higher scores to a smaller number of universities, as shown in Table 7, allowing finer distinctions in classifying universities into different quality or performance levels.
Without awards, the QS gives a score of 80.0 or higher to 36 universities. This number drops to nine universities with the addition of the awards variable. In the THE, the number of universities drops from 34 to 11. These 12 universities (the union of the nine in the QS and the 11 in the THE) appear in Tables 8 and 9, which show the difference that awards make to the scores of universities totaling 80.0 or higher in the QS and THE rankings. Note that without awards, we could classify these universities into a maximum of two groups: universities with a score ranging from 90.0 to 100.0 and those with a score ranging from 80.0 to 90.0. With the addition of awards, however, we could classify the universities into up to four tiers or groups, with the following ranges of scores: 90s, 80s, 70s, and 60s. Table 10 shows the minor differences that awards make to the scores of universities totaling 50.0 or higher in the Shanghai Ranking.
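The tiering described above amounts to bucketing scores by 10-point ranges; a minimal sketch with hypothetical scores (the function name and sample values are our own):

```python
def tier(score):
    """Place a 0-100 ranking score into a 10-point tier label (e.g., '90s')."""
    return f"{int(score // 10) * 10}s"

# Hypothetical scores: with awards added, the same set of universities
# spreads over more tiers than the two it occupied before.
scores_with_awards = [95.2, 88.4, 76.1, 64.9]
print([tier(s) for s in scores_with_awards])  # ['90s', '80s', '70s', '60s']
```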
Although the differences created in scores because of awards may or may not affect student and faculty decision-making regarding which institutions to join, university administrators may find it valuable for planning and decision-making purposes if the differences in scores among universities are clearer and more accurate. After all, university administrators rely on rankings as indicators of improvement over time and as benchmarking tools against peer institutions. These administrators promote improvement in rankings as evidence of progress in the academic and research environments to justify expansion in programs, requests for additional funding, and management and strategic decision-making.
Third, awards have geographical coverage implications for rankings. According to Moed (2017), although most systems claim to produce rankings of world universities, an analysis of the geographical coverage of five popular ranking systems reveals substantial differences between them in the distribution of covered institutions among geographical regions. He finds that U-Multirank leans towards Europe, the Shanghai Ranking towards North America, the Leiden Ranking towards emerging Asian countries, and QS and THE towards Anglo-Saxon countries. Upon examining the top 25 universities in the QS, Shanghai, and THE rankings, we find that institutions in the United States and Canada would benefit the most from adding the awards variable. The results show that all six of the universities that have their rankings improve by five places or more are from the United States or Canada, whereas the 10 universities that have their rankings drop by five or more places include five from Asian countries, three from Europe, and only two from the United States (the University of Chicago and the University of Michigan). The results also show that of the three universities whose ranking status improves considerably from the top 50 to the top 25, two are from the United States: UC Berkeley and UCLA (see Table 6).

LIMITATIONS
We consider our list of awards comprehensive and reliable as a base for benchmarking and research assessment, but it should be periodically checked and revised (perhaps once every 5 years). Such revision is necessary because some prizes may cease to exist in the future, others may lose their significance, and still others may emerge. Allocating 20% of the total score to awards in the QS and THE may not be optimal or possible. Awards, however, are powerful marketing tools used to attract high-quality faculty and students, among others, making them important enough to deserve significant weight in university rankings. The fact that many universities in the United States and other countries list on their homepages major awards won by their faculty, students, and alumni provides further evidence that awards merit inclusion in university rankings.

CONCLUSION
Highly prestigious international academic awards are too important to be ignored by university ranking systems, even if these awards overlap substantially with other indicators used in the rankings, such as research output, citations, or reputation. This study uses the checklist method, survey studies, and HCRs to identify 100 of the world's most prestigious awards. It then examines the impact of using these awards on the QS World University Rankings, the Academic Ranking of World Universities (the Shanghai Ranking), and the Times Higher Education World University Rankings. The results show that awards considerably alter the ranking positions and scores of universities that annually win several awards and of those highly ranked universities that win few, or no, awards. Awards, especially if given substantial weight as we did here, help ranking systems set universities apart from one another, making it easier for users to detect differences in levels of performance. The developers of ranking systems should consider adding awards as a variable in assessing the performance of universities. These developers should credit institutions for being home to individuals who have done, and continue to do, groundbreaking and transformative research. In the end, having such influential researchers on campus prepares future generations of outstanding academicians and researchers. International awards additionally measure universities' global academic and research impact, helping ranking systems capture that impact more fully in their league tables.
Future studies should examine the impact of the 100 awards identified here on other ranking systems and assess to what extent the weight given to awards makes a difference: 5%, 10%, 15%, and so on. Future studies should also reassess the prestige ratings of these 100 awards, identify any new awards, and discover awards that we may have missed in this study. Given that the majority of the awards are established by entities located in the United States, Canada, Europe, Japan, and Australasia, it would be valuable if similar studies could be conducted on awards considered highly prestigious in regions such as Africa, the Arab world, and South America.
Finally, the value of adding awards to ranking systems is that highly prestigious international awards are a relatively "uncorrupted" indicator. Efforts should be made to keep them uncorrupted, because if awards are recognized as useful and valued indicators in university rankings, they might become targets for gaming.