Abstract
We examine the hypothesis that research collaboration has enabled a global research network to evolve, with self-organizing properties transcending national research policy. We analyze research output, bilateral and multilateral collaboration, subject diversity, and citation impact over 40 years, in detail for the G7 and BRICK groups of countries and in summary for 26 other nations. We find that the rise in national output was strongly associated with bilateral collaboration until the 2000s and with multilateral partnerships thereafter, with the shift happening at much the same time across countries. There was a general increase in research subject diversity, with evenness across subjects converging on a similar index value for many countries. Similar diversity is not the same as actual similarity but, in fact, the G7 countries became increasingly similar. National average citation impact (CNCI) rose and groups converged on similar impact values. The impact of the largest economies is above world average, a phenomenon we discuss separately. The similarities in patterns and timing occur across countries despite variance in their research policies, such as research assessment. We suggest that the key agent facilitating global network self-organization is a shared concept of best practice in research.
1. INTRODUCTION
Research is increasingly collaborative. For many large research economies, international collaboration now accounts for more academic publications than purely national (domestic) output. In this paper we discuss whether the effect of increasing overlap and commonality between leading research entities becomes a self-organizing force that drives convergence in the global research system and acts systemically, irrespective of local policy and management structures.
Price (1963, 1965) described “the world network of scientific papers” and the conventions of citation referencing that underpinned universal aspects of the global research system. This is linked to the concept of an “invisible college” in research such that the development of ideas is influenced by the structure in which it takes place (Crane, 1972). The idea that a global research network might have self-organizing properties has also been suggested before, in the context of collaboration (Wagner & Leydesdorff, 2005), and it has been argued that changes in patterns of international collaboration have taken research into a Fourth Age, a global network in which nations must engage but cannot own the outcomes, because the leading edge of impactful activity and innovative discovery has gone beyond national boundaries (Adams, 2013; Wagner, 2018).
Wagner and Leydesdorff (2005) discussed whether preferential attachment (more connected network nodes are more likely to receive new links: see Albert & Barabási, 2002; Price, 1965) might be the model for a structuring mechanism in scientific collaboration. This would facilitate the emergence of a self-organizing system where the selection of a partner and the location of the research rely upon choices made by the researchers themselves rather than national or institutional incentives or constraints.
What functionally would we mean by self-organization in the context of a global research system and what further evidence might there be of this? The “invisible college” implies that conventions in research practice cross other boundaries, such as national borders and—to some extent—disciplinary frontiers. The latter is more nuanced because some disciplinary differences are evident: Scientists publish primarily in journals, conference proceedings are widely used by technologists, and monographs are preferred among the humanities. Nonetheless, what we expect to see is a shared though largely unconscious pattern of behavior. This is the evolution of good research practice, long established and nurtured by leaders in each field as they train succeeding generations, creating a consensus as new communities join the global endeavor, leading to convergence.
The best (sensu more esteemed within their disciplinary networks) researchers would seek to work with one another. This connection might initially be local but improved communication would facilitate links over increasing distance (“the republic of creative minds: Each giant calling to another through the desolate intervals”: Nietzsche, 1873/1996) promoting international collaboration, subsequently emulated by the wider community.
We suggest two lines of inquiry to seek for evidence of this. First, in addition to the relative volume of international collaboration, what other trends in research publication can be seen and how similar are such trends in different countries? Second, noting the degree to which governmental intervention through policy and funding has become a major locally organizing influence on research, to what extent do national policy and practice differentiate the timing and direction of national responses?
The socioeconomic context for this analysis is important because we will argue that global societal shifts are central to the changes we have observed rather than the local and national research policy environment. The speed and efficiency of communication are key factors. Prior to the 1980s travel was relatively expensive and communication was either slow (by letter) or costly (by phone). For example, the data for the global economic models of that time were moved in boxes of punch cards between Boston, Cambridge (UK), and Paris (Adams, 2017). The research world of the 1980s was G7 centric and relatively stable from year to year, and its published medium was the lingua-Americana of the then dominant transatlantic research axis. Some of the best research output from Germany, France, and the Soviet Union was unavailable in English (Van Leeuwen, Moed et al., 2001) and the BRICK research economies were only just emerging (note that we use BRICK rather than BRIC because in addition to Brazil, Russia, India, and China we focus on South Korea).
The settled research world of 1980 was dominated by the G7 group of large economies (United States, Canada, United Kingdom, France, Germany, Italy, Japan), which authored over 60% of the papers indexed in the Web of Science in the 1980s. The Soviet Union separately authored somewhat more than 5% of output recorded in Anglophone journals and perhaps as much again in Russophone journals.
Through the 1990s, more people started traveling to more conferences; information in new digital forms became available and transferable; and the appearance of the Internet enabled rapid, low-cost communication. Research investment spread to more countries and informed, quantitative research performance analyses increased the confidence of governments that public-sector research would be a key investment in securing economic competitiveness through a highly trained workforce and technological innovation. Countries that previously had little research presence expanded and developed their research bases. As communication became faster, and access to literature became global, researchers in many countries shifted their best work into Anglophone journals to gain visibility. Two other changes influenced the evolution of global research networks: One was the transition in Europe marked in November 1989 by the fall of the Berlin Wall; the other was the exponential rise in the research status of China.
The global growth of research investment created an opportunity for new partnerships to develop. The US-EU axis consequently became less dominant and, since 2017, the G7 have collectively produced less than half (about 42% in 2020) of a globally expanding indexed volume of research articles and reviews that had grown fivefold in 40 years. The Soviet Union’s 5% share of world output in the 1980s fell markedly and did not expand after 1990; Russian researchers now author only around 2.5% of indexed journal papers. China, however, grew its published research output (excluding strictly Chinese language journals) from less than 1% prior to 1990 to an astonishing 25% of indexed global output in 2020, challenging the long-held scientific leadership of the United States (Johnson, Adams, & Grant, 2021; Johnson, Adams et al., 2022).
Changes in patterns within Europe also reflect the evolving philosophy of collaboration. Hoekman, Frenken, and Tijssen (2010) showed that proximity had become a more important determining factor in collaborative research than shared nationality for 33 countries and 313 regions within the EU. Wagner, Park, and Leydesdorff (2015) examined research interconnections across the globe over two decades and showed that while the global network has grown denser, it has not become more clustered, meaning the increased intensity of connections are not grouping into exclusive “cliques.” This reflects the increasingly “open” nature of fundamental scientific research and contributes to the spread of the “invisible college.”
In this paper we look across countries (38 in total, 12 in detail) by drawing on analyses that consider the extent to which national changes in research publication patterns were local or general. We suggest that the relevant test is whether there is synchronicity across countries and regions or whether distinct phases can be seen corresponding, for example, to the adoption of national research assessment. The conclusions from this address another key question raised by Wagner et al. (2015): the balance in roles between the policy initiatives of nations and the self-organization of researchers who are globally networked.
2. METHODS
The analysis covers the period from 1981 to 2018, thus including years prior to the appearance of the Internet. It refers solely to research articles and reviews (which are deemed to be substantive and original academic papers) published in the approximately 20,000 journals indexed in Clarivate’s Web of Science™ and for which the available publication data are comprehensive and consistently indexed.
These publication data were sourced for 38 relatively prolific research economies covering a broad geographical spread and including the G7 and BRICK groups of countries (Table 1). The selected countries accounted for 35,666,890 (92%) of the 38.8 million papers indexed in the database over the period (rising from 87% in 1981 to 95% in 2018).
Table 1. The 38 countries analyzed. Data for the G7 and BRICK groups appear in the core paper; data for the other countries appear in the appendixes.

| G7 (core paper) | BRICKs (core paper) | EU other (appendixes) | Global other (appendixes) |
|---|---|---|---|
| Canada | Brazil | Austria | Argentina |
| France | China | Belgium | Australia |
| Germany | India | Denmark | Egypt |
| Italy | Russia | Finland | Indonesia |
| Japan | South Korea | Greece | Iran |
| United Kingdom | | Hungary | Israel |
| United States | | Ireland | Mexico |
| | | Netherlands | New Zealand |
| | | Norway | Saudi Arabia |
| | | Poland | Singapore |
| | | Portugal | South Africa |
| | | Spain | Switzerland |
| | | Sweden | Taiwan |
Assignment to country was based on author addresses and whole counting of papers was used, where each paper is assigned once to each country given in an author affiliation. We recognize that there are sound arguments in favor of fractional partitioning of output and impact (Waltman & van Eck, 2015; Potter, Szomszor, & Adams, 2020), although these take interpretation away from the raw, source data, and we will address an alternative approach on these lines in a separate analysis.
Each country’s indexed papers were counted in total and then deconstructed by collaboration mode: domestic, with no international coauthor; bilateral, with at least one coauthor from just one additional country; and multilateral, with two or more coauthoring countries (see also Potter et al., 2020). Papers were counted and analyzed separately by collaboration mode.
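The classification by collaboration mode can be sketched as follows. This is an illustrative reconstruction, not the authors’ code: A paper is represented here simply by the set of countries appearing in its author affiliations.

```python
# Sketch of the collaboration-mode classification described above.
# A paper is represented by the set of countries in its author
# affiliations; the representation is illustrative only.

def collaboration_mode(countries: set, home: str) -> str:
    """Classify a paper from the perspective of country `home`."""
    others = countries - {home}
    if not others:
        return "domestic"      # no international coauthor
    if len(others) == 1:
        return "bilateral"     # exactly one additional country
    return "multilateral"      # two or more coauthoring countries

# Whole counting: the same paper counts once for every country listed.
paper = {"United Kingdom", "United States", "Germany"}
assert collaboration_mode(paper, "United Kingdom") == "multilateral"
assert collaboration_mode({"Japan"}, "Japan") == "domestic"
assert collaboration_mode({"Japan", "South Korea"}, "Japan") == "bilateral"
```

Under whole counting, the multilateral paper above would be tallied once each in the UK, US, and German totals, but only once in the world total.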
The diversity of national output was indexed by
collating each country’s spread of papers across the 254 Web of Science subject-based journal categories, including null counts;
normalizing these counts against the world average for year and category to account for global variation in category size and content; and
calculating a Gini index of evenness (Gini actually indexes disparity so the index of diversity is shown as (1 − Gini)).
The methodology follows Egghe and Rousseau (1990) and Moed (2006) and has previously been described in detail in a paper that summarized part of these results for 11 of the countries (Adams, Rogers et al., 2020).
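The evenness calculation in the three steps above can be sketched as below. This is a minimal illustration with invented counts, not the authors’ implementation: The Gini coefficient is computed over a country’s spread of papers across categories and reported as (1 − Gini), so that higher values mean greater evenness.

```python
# Minimal sketch of the diversity index: Gini over a country's spread
# of (world-normalized) paper counts across subject categories,
# reported as (1 - Gini). Data below are illustrative only.

def gini(values):
    """Gini coefficient of a non-negative distribution.

    0 = perfectly even; approaches 1 as activity concentrates.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula via the rank-weighted sum of the sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

perfectly_even = [10] * 254          # equal activity in all 254 categories
concentrated = [0] * 253 + [100]     # all papers in a single category

assert abs(gini(perfectly_even)) < 1e-9      # (1 - Gini) = 1.0
assert 1 - gini(concentrated) < 0.01         # diversity near zero
```

A country whose normalized output spreads across more categories moves its (1 − Gini) value towards 1, which is the convergence band reported in Section 3.3.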
The annual average Category Normalized Citation Index (CNCI) was calculated for all countries. The citation count of each paper was normalized against the world average for its relevant document type, journal category, and publication year. As noted, whole counting was used throughout and no fractional attribution was applied. It has long been known that internationally coauthored papers have a higher average citation count than comparable domestic papers (Narin, Stevens, & Whitlow, 1991). There is thus an interaction between collaboration and indexed citation impact.
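The CNCI normalization can be illustrated with a toy dataset. This is a hedged sketch under the definition above (invented numbers, simplified field names): Each paper’s citation count is divided by the world mean for its (document type, journal category, year) cell, and a country’s CNCI is the mean of those ratios over its papers.

```python
# Toy sketch of CNCI: citations normalized against the world average
# for each (document type, journal category, publication year) cell.
# All data are invented for illustration.

from statistics import mean
from collections import defaultdict

papers = [
    # (country, doc_type, category, year, citations)
    ("A", "article", "Physics", 2018, 12),
    ("A", "article", "Physics", 2018, 4),
    ("B", "article", "Physics", 2018, 8),
    ("B", "review",  "Physics", 2018, 40),
]

# World baseline per (doc_type, category, year) cell.
cells = defaultdict(list)
for _, dt, cat, yr, cites in papers:
    cells[(dt, cat, yr)].append(cites)
baseline = {k: mean(v) for k, v in cells.items()}

def cnci(country):
    """Mean normalized citation impact for one country (whole counting)."""
    scores = [c / baseline[(dt, cat, yr)]
              for cn, dt, cat, yr, c in papers if cn == country]
    return mean(scores)

# Article baseline is (12 + 4 + 8) / 3 = 8; review baseline is 40.
assert abs(cnci("A") - 1.0) < 1e-9   # mean of 12/8 and 4/8
```

Normalizing per cell before averaging is what makes CNCI comparable across fields and years, since raw citation rates differ widely between, say, reviews in medicine and articles in mathematics.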
The principal illustrations that follow focus on the G7 and the BRICKs (see the caption to Table 1) to provide informative contrasts while avoiding the confusion of too many similar, overlapping, and cross-cutting graph lines.
Supplementary graphics including the other countries are shown in the appendixes and reference is made to these in discussion. Note that the graphics in the Supplementary information have been simplified to “gray out” the lines for countries other than the G7 and BRICK groups, as a color palette for 38 lines is problematic.
3. RESULTS
3.1. Publication Volume
Over the 38 years from 1981 to 2018 there was a four- to fivefold net global growth in the indexed number of research papers. The total annual number of papers indexed in the Web of Science was about 465,000 in 1981 and over 2.15 million in 2018. Some of that increase was an expansion in journal size (more papers per journal) and some was due to additional journal titles. This commercial investment in journal indexing was a response to the growing output of academic material of sufficient quality to merit inclusion.
Overall publication output increased for almost all countries in this study throughout the period from 1981, and it appeared to do so at a similar rate, with some indication of an acceleration for the already relatively productive G7 from the early 1990s. A widely recognized exception was China, but much of that country’s accelerated publication “growth” can be explained by a shift from a powerful R&D demand system underpinning domestic industry into a responsive, public-facing research base. Russia’s output, by contrast, shows the signs of a traumatic post-Soviet shift in resources for academia and research institutes. India’s growth in research output was much slower until 2000, while Japan’s output appears to have slowed since 2000. By the end of the period, the BRICK economies were generally catching up with G7 growth rates, while China was forging ahead (Figure 1).
As noted in Section 2, a graph of data for all 38 countries could be a mass of lines, but in fact a common trend broadly emerges across the national data because few lines actually cross, which implies that the growth rate of most countries followed a similar pattern (see Supplementary information). Outside the G7 and BRICK groups, Iran had a notably steep rise until 2010, when its “growth” rate slowed to that of other countries of a similar size, and this may be a further indication of a well-established research base shifting its output into Anglophone journals.
Gross national output is composed of both domestic papers (with no international coauthors) and papers produced through international collaboration. The next section explores how much of each country’s apparent growth was by an increase in domestic capacity and how much by collaboration.
3.2. International Collaboration
International research coauthorship in the 1980s was limited (5–10% for most medium and large countries) and was mostly bilateral (Adams, 2013). The data analyzed here indicate that this pattern started to change in the 1990s and that partnerships began to transform into networks. The evidence for this, within increased national publication capacity (Figure 1), is that after 2005 bilateral collaboration grew as an absolute count of papers but not relative to total output, while multilateral collaboration continued to expand in relative terms. These shifts occurred at similar times across multiple countries.
The counts for the bilateral papers of the G7 and BRICK economies were summed across the 38 partner countries for which data were analyzed. For the G7, there is a similar trajectory of rising bilateral coauthorship until around 2000, when it flattens for most countries at around 25–30% of total output. US bilateral collaborations started from a lower point and continued to grow until also reaching this band. The log-plot of the bilateral collaboration data makes clear the climb, the near-synchronous flattening of relative growth and the similar proportions that bilateral partnerships make up (Figure 2).
The collaboration patterns of the BRICK countries are initially more volatile, a likely consequence of their evolving research economies. South Korea’s exceptionally high bilateral collaboration in the 1980s reflected its dependency on US research and initially declined, whereas Brazil and India had rising levels of collaboration. As their research bases mature, these BRICK curves flatten in the 2000s, but at a lower level (15–20%) than the G7 nations. Japan is the G7 exception: It too flattens out, with bilateral collaboration at about 20% of total output.
The data for all 38 countries support this general picture, with a rise in bilateral collaboration into the early 2000s and then a general flattening. A swathe of countries had a level of bilateral collaboration that was similar to the G7, around 25–30% (available as Supplementary information).
Multilateral collaboration follows a trajectory that differs from bilateral collaboration (Figure 3). Multilateral collaboration rises throughout the period at a similar rate for the G7 countries, so the most collaborative in 1981 remain so in 2018. Although an inflection to a slightly slower growth rate after 2000 is evident across countries, the proportion of papers that have multinational authorship continues to rise and the curves never flatten. Unsurprisingly, the Western EU network is prominent.
The BRICKs take time to settle into a pattern, but after 2000/2005 they follow a similar trajectory to the G7. The growth rate in multilateral collaboration as a share of national output seems to be sustained for all these countries through to the present. The exception is China, which has the lowest level of multilateral collaboration and a much greater proportion of bilateral relationships, which may reflect nationally funded partnerships rather than collective initiatives such as the EU Framework Programs.
The general pattern is repeated in the analysis for the full group of 38 countries, many of which have rather higher rates of multilateral collaboration than the G7. These include Indonesia, the research profile of which is strongly dependent on such links, and the Scandinavian group (available as Supplementary information).
A detailed example for the United Kingdom (Figure 4) deconstructs the pattern by which purely domestic output falls and continues to decline as collaboration becomes pervasive. What this suggests is that progressively more complex and wide-ranging networks enabled the United Kingdom to expand its output, first through bilateral collaboration with G7 partners in North America and Europe, and later through multilateral networks beyond those historical and regional associations.
The United Kingdom’s most frequent and historically strongest research partner was the United States, the sole coauthor of around 3% of UK papers in 1981. The US coauthored share increased to 6% of UK papers by 2000 but did not then rise further. If the European Union is analyzed as a single bloc, for a “bilateral” comparison analogous to that with the United States, then its collaboration with the United Kingdom was about 6% in 1981, increasing to 17% in 2004 but then continuing at no more than 19% of UK papers every year since 2009. A further cross-section is provided by analyzing “trilateral” coauthorship between the United Kingdom, European Union, and United States. This was less than 1% of UK papers before 1990, rose to 10% by 2016, and then it too leveled off. Collaboration with the Rest of the World (RoW), however, expanded throughout the 38 years and now accounts for about 32% of UK output and rising. Within this, 12% of the UK’s total paper count has coauthors from the United States, the European Union, and another country.
3.3. Research Subject Diversity
Collaboration increases a country’s capacity and makes additional financial, workforce and intellectual resources available. That enables not only expansion but also diversification of research activity. Research portfolios can become more wide-ranging because collaboration and networks increase effective national capacity and competency. An analysis of the publication counts across Web of Science journal-based subject categories using the Gini index confirms that research activity generally shifts towards a more even distribution. These counts are normalized against the global distribution of publications each year, because journal categories innately vary in volume. Gini measures inequality, so the index is displayed as (1 − Gini) to illustrate changing evenness in distribution (Adams et al., 2020).
The United States and United Kingdom had the most evenly distributed publication portfolios in 1981. As the output being indexed for other countries increased (Figure 1) so their portfolios became more even. The BRICKs were initially less diverse than the G7 in 1981 and so had capacity for a noticeable, rapid shift towards evenness, except for Russia and—recently—Brazil (Figure 5).
Not only did national research portfolios become more evenly spread across research subjects; they also began to converge on similar Gini values. This should not be interpreted as global homogeneity: It is multiple economies arriving at a similar degree of evenness, because different subject mixes can produce the same numeric value for a diversity indicator.
There is likely to have been a wide range of underlying specialization and diversity across the full set of 38 countries in 1981 and some of this will have been preserved in a general trend towards evenness that is reflected across all countries. The majority become more even until their research subject diversity levels off close to the same broad band ((1 − Gini) = 0.6–0.75) by the end of the period (see Supplementary information).
Diversity is not interdisciplinarity, but it enables interdisciplinarity. Was the change in evenness linked to international collaboration, or was it promoted by domestic policies to support a more diverse research base, given the emerging prominence of interdisciplinary needs and opportunities (Committee on Science, Engineering, and Public Policy, 2004)? The components of diversity associated with domestic and internationally collaborative papers can be distinguished by separating these groups of papers prior to calculating the diversity coefficient. When this is done, using the United States as an example from the G7 and India as an example from the BRICKs, three things become apparent: National research subject diversity in the 1980s was closely reflected in domestic diversity; the diversity of international collaboration rose consistently over the next four decades; and international collaborative diversity was the more important factor by the end of the period, while, in the case of the United States, domestic research diversity tended towards greater selectivity. This is most evident in the data for the United States and other G7 countries, but it is still apparent in the data for India and other BRICKs (Figure 6).
3.4. Research Similarity
The research subject diversity of countries may trend towards greater evenness and converge on a similar index value, but, as noted, this does not necessarily indicate that they have increased the similarity of their portfolio content. Two countries with different distributions across subjects can generate the same index value of diversity. To make a complete analysis would require a large number of pairwise comparisons (over 700 for 38 countries) so only two examples (Germany and South Korea, which are both technology-oriented) are illustrated here and then only for national research similarity to the other G7 and BRICK countries.
The annual correlation between the relative frequencies of publications across the 254 journal categories was calculated for each pair of countries. We are not interested here in multivariate analysis but only in the degree of pairwise similarity. This is a deliberately simple analysis of the degree to which the relative abundance of publications across subjects is the same: A high positive correlation indicates a similar spread, a high negative correlation indicates a different and complementary portfolio, and a lack of correlation indicates a random pattern of similarity and dissimilarity.
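The pairwise comparison can be sketched as a Pearson correlation over two countries’ category-share vectors. The vectors below are invented stand-ins for the real category profiles, and the five-category example is far smaller than the full Web of Science scheme.

```python
# Illustrative sketch: similarity of two countries' subject portfolios
# as the Pearson correlation of their relative publication frequencies
# across journal categories. Toy vectors, not real data.

from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Shares of national output across five stand-in categories.
country_a  = [0.30, 0.25, 0.20, 0.15, 0.10]
similar    = [0.28, 0.27, 0.19, 0.16, 0.10]   # similar spread
complement = [0.10, 0.15, 0.20, 0.25, 0.30]   # reversed emphasis

assert pearson(country_a, similar) > 0.9       # similar portfolios
assert pearson(country_a, complement) < -0.9   # complementary portfolios
```

Tracking this coefficient year by year for a country pair produces the similarity trajectories shown in Figure 7.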
The overall pattern is of increasing similarity in research profiles among the G7 economies after the mid-1990s, while the BRICK economies remained much more distinctive. It should be considered that this may in part be influenced by a combination not only of the increased relative frequency of international collaboration but also of a shift to Anglophone publication in journals indexed in Web of Science as researchers looked for a more global audience.
For Germany, similarity with France and Italy was high throughout and increased with the United Kingdom and Canada, and with the United States, with which its indexed publication record was strongly dissimilar in the 1980s. German research output became increasingly dissimilar to that of China, India, and South Korea, as it did for Canada. However, Russia and Japan’s indexed research was relatively similar to Germany’s in the 1980s and continued to be no less similar than that of Canada and the United Kingdom (Figure 7, top panel).
South Korea was neither very similar nor dissimilar to other countries in the 1980s. It appears to be most similar to Japan and India and became increasingly similar to them after 2000. Surprisingly, it was relatively dissimilar to the United States (with which it had strong collaborative links in the 1980s) and increasingly diverged from the United States, United Kingdom, and Canada in the 1990s and from Brazil, Germany, and France after 2000. By contrast, it converged on and has stayed very similar to China’s portfolio over 30 years (Figure 7, bottom panel).
3.5. Citation Impact
Citation rates are widely inferred, for very large samples, to be a sound reflection of the significance of research papers (Garfield, 1954; Moed, De Bruin, & Van Leeuwen, 1995). It is likely that the systemic changes in output (Figure 1) and collaboration (Figures 2 and 3) will interact with citation impact.
CNCI rose for all economies at similar rates, with most of the G7 (except Japan) converging on a common value around 1.3. Convergence is a very likely consequence of collaboration: If many papers are shared, then each national value is strongly influenced by its partners’ contributions, and theirs in turn by its own (Figure 8).
Brazil, India, and Russia appear to have reached a different CNCI plateau below the world average, and their international collaboration is lower than that of the G7. However, collaboration is also relatively less for South Korea, which has a rising CNCI line. This is now above the world average and is on a continuing upwards trajectory.
Looking across the full set of 38 countries (available as Supplementary information), the only country that has a consistently declining CNCI is the United States. This apparent decline may be partly explained by a shift in the United States’ relative position as others improved, because index values are normalized against the pooled world average. As it starts with a high CNCI value, it can only decline as others become more frequently cited than in the past. Some smaller countries have extremely erratic year-to-year variation in CNCI, which can be explained by their collaboration opportunities (Potter et al., 2020). Some countries—notably in western Europe—have an average CNCI above that of the G7.
4. DISCUSSION
We cannot “prove” whether (or not) increasing commonality between leading research entities is a self-organizing force that has driven a convergence in global research and a rise in citation indicators, nor can we prove that this has occurred irrespective of national policy and management structures. The point of discussion is what, on balance, the data for a large number of countries suggest. Are their research trajectories similar or divergent? If they converged, did they do so at similar times or variously?
We argue that the data in this paper strongly support the idea of a global research network that has exhibited relatively synchronous convergence irrespective of differences in the timing or nature of national research policies. We believe that implicitly self-organizing properties are essentially built upon a consensus view of “good science” emerging from concepts of Crane’s (1972) “invisible college” in research.
The college is built upon shared ideas of how research should be practiced and what constitutes good research, in terms of methods, sharing, openness, and publication. International collaboration made possible by social and technological change has drawn the global research diaspora into a network, led by the giants in their field, in which nations not only exist but must also engage because leading, impactful activity and innovation now take place beyond national boundaries (Adams, 2013; Wagner, 2018).
This builds on and relates to ideas advanced by Wagner and Leydesdorff (2005). Our only point of difference from them is that they reference specific models of preferential attachment whereas we envisage a more generic model of research culture where disciplinary networks emulate the “good practice” of research leaders.
This is not only an academic observation but also a conclusion with significant policy implications. As Wagner et al. (2015) note, “The network features an open system, attracting productive scientists to participate … governments could gain efficiencies and influence by developing policies and strategies designed to maximize network benefits—a model different from [strategies] designed for national systems.”
Better information flow, mutual awareness, and enhanced communication meant that research output grew continuously for most countries over the four decades after 1981 (Figure 1). It did so initially through bilateral collaboration, but after the early 2000s bilateral coauthorships no longer expanded relative to total output for most countries (Figure 2). After this time, a rise in multilateral collaboration became dominant (Figure 3). There was a general increase in research subject diversity and evenness across subjects converged on a similar index value for many countries (Figure 5), which appears to be associated with and is perhaps an inevitable consequence of the rise in multilateralism (Figure 6). Similar diversity indices are not the same as actual similarity, but in fact the G7 countries did become increasingly similar (Figure 7, top) as did the evolving portfolios of the BRICK countries (Figure 7, bottom). With rising collaboration, so national average citation impact (CNCI) increased and groups converged on similar impact values (Figure 8), with the largest economies all settling at a common value well above the world average.
How can a large number of countries, including prolific publishers, have an average CNCI above world average? The apparent anomaly of many countries doing better than the world average they dominate is a consequence of shared publications. For both collaboration and impact indices, the well-cited collaborative papers are counted once in the total of each participating country but only once in the world total. Less well-cited domestic papers are counted once in a country total and once in the world total. The net world CNCI is thus diluted by the aggregation of all the domestic papers. When these data characteristics are not properly understood, they can lead to misinterpretation and may mislead policy makers about the limits of real achievement.
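The counting effect described above can be checked with a toy example (the numbers are illustrative only, not drawn from the paper): three countries share a few well-cited collaborative papers, each also publishes less-cited domestic papers, and every national mean citation rate ends up above the world mean.

```python
# Illustrative full-counting arithmetic: collaborative papers appear in
# EACH partner country's total, but only once in the world total.
collab = [10, 10, 10]  # citation counts of papers co-authored by A, B, and C
domestic = {"A": [2, 2, 2], "B": [2, 2, 2], "C": [2, 2, 2]}

def mean(xs):
    return sum(xs) / len(xs)

# Each country's average includes all the shared collaborative papers.
country_avg = {c: mean(collab + d) for c, d in domestic.items()}

# The world average counts each unique paper exactly once.
world_papers = collab + [x for d in domestic.values() for x in d]
world_avg = mean(world_papers)

print(country_avg)  # every country's mean is 6.0
print(world_avg)    # the world mean is 4.0
```

Every country is "above world average" simultaneously because the well-cited shared papers are replicated across national totals, while the world denominator absorbs all the lower-cited domestic output once.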
During the four decades covered by this analysis, research policies evolved and comprehensive research management interventions, such as national research assessment cycles, were initiated sporadically by some countries. Those countries that did so followed distinct national paths. For example, the United Kingdom started university research evaluation in 1986 and institutionalized this in 1992 via the Research Assessment Exercise (RAE). The UK policy was not followed by France and Germany until after 2000, while Australia followed a similar path to the United Kingdom only after 2008. The United States has never had a national system of institutional research assessment. China meanwhile followed a path of exceptional expansion, albeit with a lower rate of multilateral collaboration (Figure 3).
Despite such national differences in policy and management, we observe similarities in trajectory, in the timing of shifts between bilateral and multilateral collaboration, and in convergence upon similar values of diversity and citation impact across countries. No statistical test or conceptual model can confirm this, but the appearance is of a global system with common organizational properties across borders that override structural and policy differences between them. Although the United Kingdom initiated greater selectivity in, and concentration of, research resources in the 1980s, the subject diversity of its research portfolio nonetheless remained stable throughout this period (Figure 5). The conclusion might be that, for example, the United Kingdom's improvement in research performance in the 1990s, compared to a downturn in the 1980s (Adams, 2002), would have occurred whether or not the RAE had been introduced.
If there is a driver, a shared and systemic organizing factor, then it is most likely to be collaboration between like-minded research colleagues. Rising international collaboration, enabled by enhanced physical and electronic communication, led to a greater sharing of ideas and a consensus on both priorities and how they should be tackled and reported. It also enabled apparent growth with constrained resources: the output and diversity of national research portfolios appeared to increase while domestic research volumes were actually static. The continuing shift from pairwise to group projects increased the overlap of output and the sharing of ideas, and enabled a de facto resource pool that increased rather than reduced national diversity. Awareness of the significance of international research grew, and CNCI progressively rose as a group of nations shared the same pool of research objectives and important, highly cited outputs, leading in turn to greater similarity and converging impact indices.
These are global data, analyzed across a large number of research-active countries, which prompt not only general observations about self-organizing networks outside national boundaries but also a series of other intriguing questions.
For example, is the value at which the G7 are converging simply “about as diverse as you can afford”? The diversity of national research portfolios increased as their activity became more evenly spread across research subjects (Figure 5). This was not in itself global homogeneity: It is multiple economies arriving at a similar degree of evenness. There is some evidence in these data of a possible balance point between selectivity and diversity. The United States and the United Kingdom seemed to “overshoot” in the 1990s, perhaps supporting an unsustainable portfolio, because after 2000 they increased in specialization again.
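Evenness of this kind can be quantified in several ways. As a minimal sketch, assuming a Shannon-based (Pielou) evenness index over subject publication shares (the specific index used for Figure 5 is not restated here, and the portfolio counts below are hypothetical), the calculation looks like:

```python
import math

def pielou_evenness(shares):
    """Pielou's evenness J = H / ln(S) over a portfolio of subject counts.

    J = 1 means output is spread perfectly evenly across subjects;
    values near 0 mean output is concentrated in very few subjects.
    """
    total = sum(shares)
    p = [s / total for s in shares if s > 0]
    if len(p) < 2:
        return 0.0  # a single-subject portfolio has no evenness to measure
    h = -sum(pi * math.log(pi) for pi in p)  # Shannon entropy H
    return h / math.log(len(p))              # normalize by max entropy ln(S)

# A specialized portfolio vs a more evenly spread one (hypothetical counts)
specialized = [900, 50, 25, 25]
even = [260, 250, 245, 245]
print(round(pielou_evenness(specialized), 3))
print(round(pielou_evenness(even), 3))
```

On this index, convergence of many countries toward a similar value means their activity became comparably well spread across subjects; as noted above, that is not the same as their portfolios actually resembling one another.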
Is there one network or two groups? This is a subjective perspective but global tensions between open research networks and demand-driven bilateral partnerships may provoke some real bifurcation (Adams, Johnson, & Grant, 2022). The BRICK nations represent newer research agendas that focus on innovative technology and economic competitiveness; they have higher levels of bilateral research. The G7 represent a set of older research agendas that have moved from old technologies to societal priorities, such as health and environment, supported by open multilateral research. The details that differentiate national similarity in the German and South Korean outcomes (Figure 7) are reflected in other national analyses. Germany, with a strong industrial technology tradition, retains similarity with Russia and Japan but its evident difference from China and South Korea points to important differences in detail.
South Korea’s CNCI is rising and it also has the highest level of research investment in the world: Gross Expenditure on R&D (GERD) is already over 4% compared to an aspirational EU target of 3%. South Korea’s domestic research base is focused on innovative technology and strongly supported by domestic industry. CNCI for Brazil, India, and Russia seems to have plateaued: Brazil has suffered from financial and strategic disruption to its research base; India has yet to commit the investment required (its GERD is 0.65% of GDP and falling); and post-Soviet Russia has yet to re-establish a formerly excellent higher education and research base. The past appears to show a strong common trend, but whether that will continue for the future is a political rather than analytical policy issue.
ACKNOWLEDGMENTS
We are particularly grateful for the well-informed suggestions of two anonymous reviewers. The material in this paper was jointly analyzed and authored, and the content has been informed by discussions with Caroline Wagner, Loet Leydesdorff, and our colleagues at the Institute for Scientific Information.
AUTHOR CONTRIBUTIONS
Jonathan Adams: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization, Writing—original draft. Martin Szomszor: Data curation, Formal analysis, Methodology, Software, Validation, Writing—review & editing.
COMPETING INTERESTS
Jonathan Adams is employed by Clarivate, which owns the Web of Science. Martin Szomszor was employed by Clarivate at the time of analysis and is now an independent researcher.
FUNDING INFORMATION
No external funding was required for this research.
DATA AVAILABILITY
All the background data are available to academic researchers in institutions that subscribe to the Web of Science.
REFERENCES
Author notes
Handling Editor: Ludo Waltman