Peer review is crucial to knowledge production and publication quality control. However, limited research has examined the characteristics of anonymous reviewers and the connections between journals and reviewers. Based on the journal–reviewer coupling relationships of 477,684 reviewers and 6,058 journals from Publons, we show a highly concentrated review network in which a small number of journals rely on a disproportionately high share of reviewers. The skewness in reviewer distribution is evident at various levels: journal field, country of origin, and journal impact. Moreover, we reveal significant disparities in reviewer backgrounds: Women review for fewer journals and are underrepresented among reviewers, especially in fields such as physics and mathematics and in countries such as China and Japan. Journals in fields such as psychology, health, and the humanities tend to rely on reviewers from a limited pool of geographic locations, and journals based in Brazil and Japan often connect with reviewers from their own countries. We also observe homophily effects: Journals in most fields and countries, as well as those with higher impact, tend to share reviewers with one another. Our study provides a more comprehensive understanding of the global peer review system and highlights the need for greater diversity and inclusion in the peer review process.

Peer review is an essential quality control mechanism in scientific research, as it upholds scholarly integrity and promotes public trust (Bornmann, 2011; Johnson, Watkinson, & Mabe, 2018; Tennant & Ross-Hellauer, 2020). It serves to evaluate the accuracy, significance, and novelty of research findings and provides valuable feedback for improving research quality (García-Costa, Squazzoni et al., 2021). Peer review is the standard practice of most credible scientific journals and is essential to assessing the credibility and quality of work submitted (Kelly, Sadeghieh, & Adeli, 2014; Zheng, Chen et al., 2023). A Publons survey (2018) highlighted that 13.7 million reviews were conducted for 2.9 million peer-reviewed articles published in Web of Science (WoS)-indexed journals in 2016. Moreover, over 98% of the 11,800 researchers surveyed globally by Publons acknowledged the importance of peer review in maintaining the quality and integrity of scholarly communication (Publons, 2018). These statistics underline the significance of peer review as a critical component of the academic publication process.

Reviewers are the backbone of the peer review process. Their role is pivotal in selecting appropriate and robust manuscripts while detecting irreproducible and fraudulent research (Gerwing, Allen Gerwing et al., 2020; Hojat, Gonnella, & Caelleigh, 2003; Ortega, 2017). An essential responsibility of journals is to identify suitable reviewers for the submitted manuscripts (Kelly et al., 2014). Ideally, good reviewers should have relevant expertise, maintain a neutral stance, and consider peer review their professional responsibility (Publons, 2018). Additionally, a diverse range of perspectives and opinions that accurately reflect the scientific community is essential for conducting thorough and constructive peer review (Cell Editorial Team, 2021). Although a diversity of reviewers may result in less consistency in review opinions, it enhances the validity of the review process by providing a broad range of relevant information (Bornmann, 2011). The inclusion of reviewers from various backgrounds helps journals match manuscripts with the appropriate reviewers and consider opinions from various angles (Kelly et al., 2014).

However, it is often challenging to find ideal reviewers for manuscripts. Harnad (1998) commented that experts in specific fields are a “scarce resource” and are overharvested by peer review. Many journals struggle to recruit adequate reviewers for their received submissions. Studies have also shown that journals may invite reviewers based on their likelihood to accept the invitation rather than their credibility as reviewers (García, Rodriguez-Sánchez, & Fdez-Valdivia, 2015; Lee, Sugimoto et al., 2013). With the rapid increase in submitted manuscripts, qualified reviewers are overburdened and have to review more manuscripts, and many academics opt out of reviewing or limit their reviewing tasks (Kovanis, Porcher et al., 2016; Severin & Chataway, 2021). Stafford (2018) argued that the lack of sufficient incentives for researchers to be reviewers and the increasing workload added by journals on reviewers may exacerbate the limited pool of journal reviewers. Given the mismatch of ever-increasing academic publications and a limited, slow-growing pool of reviewers (Kovanis et al., 2016), competition for reviewer resources may be fierce, leading to an imbalanced consumption of reviewer resources across journals.

As a group, reviewers are also often observed to lack diversity, with a disproportionate number being men (Helmer, Schottdorf et al., 2017; Zhang, Shang et al., 2022) and from the United States (Gaston & Smart, 2018; Warne, 2016). As a result, the voices of women and researchers from underrepresented regions in global science are often marginalized, despite studies highlighting their significant contributions to peer review quality (Dumlao & Teplitskiy, 2025; García-Costa et al., 2021). The lack of diversity in reviewer groups’ backgrounds may suppress research from groups that differ from the reviewers (Dumlao & Teplitskiy, 2025; Severin & Chataway, 2021). Gerwing et al. (2020) found that some fields had review comments demeaning or attacking authors based on gender, race, place of origin, and language. The limited pool of reviewers mentioned above may further exacerbate the challenge of targeting suitable and diverse reviewers (Gaston & Smart, 2018).

One focus of this study is to discuss whether similar journals have a homophily effect, a common social phenomenon where similarity breeds connection (McPherson, Smith-Lovin, & Cook, 2001). In this study, homophily refers to the tendency for similar entities, such as journals with specific shared features, to have a higher likelihood of sharing reviewers than dissimilar entities. Previous research has implied that homophily may exist in various features of the peer review system. For example, it is common and legitimate that journals within the same field and focused on similar topics find the same reviewers with matching expertise and research backgrounds (Kelly et al., 2014). Additionally, the country of the journal and its editor-in-chief may be critical features that promote homophily as they relate to reviewers’ countries (Gaston & Smart, 2018; Publons, 2018). Another key feature is the journal’s rank and reputation, which can influence researchers’ decisions on whether to accept reviews (Breuning, Backstrom et al., 2015; Lei, 2022; Ortega, 2017; Warne, 2016). While homophily regarding journal fields might be logical, other types of homophily, if they exist, may limit the coverage and diversity of reviewers in the reviewer pool. Reviewers who do not belong to a journal’s preferred reviewer group will likely lose the opportunity to be matched with manuscripts.

Despite the extensive research on peer review, quantitative investigations into how journals utilize reviewer resources globally and whether homophily affects this process are scarce. We established a global, multidisciplinary journal peer-review network based on journal–reviewer coupling relationships to address this research gap. This study examines the (un)equal distribution of reviewer resources and potential homophily effects on the global journal system. Specifically, we analyze shared reviewers between journals and reviewer diversity based on gender and country of affiliation. We also categorized journals based on their disciplinary fields, countries of origin, and Journal Impact Factor (JIF) rankings to analyze the distribution of reviewers across these variables. Our study overcomes existing data limitations and provides fresh insights into the peer review system by offering a comprehensive overview of the global peer review network.

2.1. Data Sources

We obtained data from Publons, a cross-publisher peer review database launched in 2012 and owned and operated by Clarivate Analytics since 2017. Publons allows researchers to create personal profiles to keep track of their peer review activities for journals within its platform. We chose Publons because it is a platform that offers global coverage of peer review records along with comprehensive and representative data. As of 2018, Publons included data from over 15,000 journals, 2.2 million reviews, and 400,000 reviewers across numerous disciplines and countries (Publons, 2018). Clarivate Analytics’ unique validity verification system evaluates journals based on specific criteria to determine its official publishing partners. For partnered journals, Publons adds the reviews to reviewers’ profiles automatically via the journal’s editorial management system after obtaining reviewers’ consent, without any additional reviewer action required. For other journals, Publons requires reviewers to forward their review invitation email to Publons for manual verification. According to Publons, as of July 2022, 9,937 official publishing partners (including journals and conferences) cooperated with Publons (Publons, 2022).

Our access to Publons data was the outcome of a data usage agreement with the data provider (Clarivate) in 2021. We initially obtained a data set of 3,252,396 reviews recorded on Publons, with 522,589 individual reviewers conducting reviews for 25,917 publication venues, which include journals, conferences, and book series. Each reviewer was identified by Publons account information. Our analysis is based on the assumption that each account corresponds to a distinct reviewer, as we were unable to access the identity of the anonymous reviewers. Unlike previous studies that extract peer review comments from published manuscripts, our review data set includes both published and unpublished manuscripts, providing more comprehensive coverage. As our data set does not track the historical changes in reviewers’ affiliation, to minimize the impact of potential inaccuracies due to affiliation changes that are not documented, we focus our analysis on reviews completed over a relatively short time frame, from 2015 to 2020, which accounts for approximately 86.54% of all the recorded reviews. As a result, our analysis based on the review data set reflects more recent peer review practices.

We acknowledge the limitation regarding the representativeness of Publons data in capturing the overall population of peer reviewers globally (Teixeira da Silva & Nazarovets, 2022). To the best of our knowledge, there is a lack of direct evidence on the potential coverage bias of Publons regarding reviewer characteristics such as gender, language, and country. Yet, it is well documented that journals indexed by reputable bibliographic databases, such as Web of Science and Scopus, are largely English-based and overrepresent Western country research (Tennant, 2020). Given that most journals on Publons are also indexed by the Web of Science, it is likely that Publons suffers from the same issue. Moreover, as Publons is based on voluntary disclosure, our sample may have potential self-selection biases. Journals and publishers with agreements with Clarivate may be overrepresented in our data (Teixeira da Silva & Nazarovets, 2022). Nevertheless, this study still contributes to understanding peer review dynamics and informs future research in this area, as shown by previous studies using Publons (Lei, 2022; Ortega, 2017; Zhang et al., 2022). Moreover, as our results show (see Section 3), the unbalanced gender distribution among all reviewers in our data aligns with previous studies limited to specific disciplines and sample ranges (Fox, Burns, & Meyer, 2016; Zhang et al., 2022). The relative overrepresentation of reviewers from North America and Europe also aligns with previous studies based on a specific discipline (Gaston & Smart, 2018).

We matched the Publons publication venues with the journal list indexed by WoS and Journal Citation Report (JCR), which reduced the risk of including nonscientific, scam, or predatory journals and allowed us to obtain additional journal attributes. During this step, 9,220 journals were matched initially. To improve the data representativeness, we removed 3,115 journals with fewer than five reviewers recorded by Publons. As our analysis involved network analysis, we further pruned the network by removing 47 journals that shared no reviewers with the rest and were thus isolated from the other journals in the network. According to this rule, our final analytical sample included 6,058 journals and 477,684 (91.4%) reviewers, covering 2,845,499 reviews.

Our study focused on three journal variables—disciplinary field, country of origin, and JIF. We used the discipline classification provided by the Observatoire des Sciences et des Technologies (OST) at the Université du Québec à Montréal, which recategorized WoS journals into 14 broad fields and 144 low-level disciplines. The taxonomy was adapted from those used by the U.S. National Science Foundation in its Science and Engineering Indicators series since the 1970s. OST complements this taxonomy with its own scheme, as the NSF classification does not cover arts, literature, and the humanities (Observatoire des Sciences et des Technologies, 2016). The classification schema has been adopted in multiple previous studies (Archambault, Campbell et al., 2009; Kozlowski, Larivière et al., 2022; Larivière, Gingras et al., 2015; Larivière, Ni et al., 2013; Ni, Smith et al., 2021; Siler & Larivière, 2022). We conducted our analysis at the field level, which includes Arts, Biology, Biomedical Research, Chemistry, Clinical Medicine, Earth & Space, Engineering, Health, Humanities, Mathematics, Physics, Professional Fields, Psychology, and Social Sciences. The country of origin was identified according to Clarivate Analytics’ JCR. JCR provides a “region” for each journal indexed in the Web of Science, which is not necessarily the same as the publisher’s location. For instance, Science Bulletin, although published by Elsevier in the Netherlands, is categorized as being in China Mainland. This regional classification is considered a reliable proxy for a journal’s country of origin (Mueller, Murali et al., 2006; Zhou, Li et al., 2018). The country of origin of a journal helps account for potential differences in editorial policies, peer review practices, and academic cultures that can influence the dissemination and reception of research (Gehanno, Ladner et al., 2011; Lin & Li, 2023).
Journals in different countries and regions may prioritize research topics or methodologies reflective of their regional academic communities because their associated scholars often focus on themes and topics of interest to communities in their respective regions (Yan, Bao et al., 2024). Based on the discipline classification adopted in this study, we ranked journals’ JIFs in descending order within each discipline and categorized journals into four quartile groups (labeled Q1–Q4) based on their rank relative to other journals in the same discipline. We used these quartile groups as proxies for journal rank. Our benchmark year for JIF was 2019, right before the COVID-19 pandemic outbreak, which inflated journals’ JIFs (Zheng & Ni, 2024).
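As a rough illustration of this quartile assignment, the following sketch ranks JIFs in descending order within a single field and maps the ranks to Q1–Q4; the journal names and JIF values are invented for illustration, not taken from the study's data:

```python
# Hypothetical sketch: assign JIF quartiles within one discipline by
# ranking JIFs in descending order. All journal names and values invented.
def jif_quartiles(jifs):
    """Map each journal to Q1-Q4 by its descending within-field JIF rank."""
    order = sorted(jifs, key=jifs.get, reverse=True)
    n = len(order)
    # rank 0 (highest JIF) falls in Q1, the lowest ranks in Q4
    return {journal: f"Q{rank * 4 // n + 1}" for rank, journal in enumerate(order)}

quartiles = jif_quartiles({"A": 12.1, "B": 6.3, "C": 3.2, "D": 1.1})
```

In the actual analysis the quartile boundaries would be computed separately for each of the 14 fields, so a journal's rank is always relative to its own discipline.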

2.2. Methods

2.2.1. Network analysis

We conducted a network analysis by constructing a journal–reviewer coupling network based on shared reviewers, which reflects the closeness between two journals in terms of reviewer sharing (Arroyo-Machado, Torres-Salinas et al., 2020). This network captures the relationships between different journals and identifies connections based on shared reviewers, with the edges weighted by the number of shared reviewers. We then analyzed the journal–reviewer coupling network using centrality measures. The network analysis approach allowed us to uncover the underlying patterns of reviewer sharing and the relationships between different journals, providing insights into the global peer review system’s structure and characteristics.
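The construction described above can be sketched as follows. The reviewer and journal labels are invented, and plain dictionaries stand in for the NetworkX graph object the study actually used; the key step is that each reviewer who reviewed for two journals adds one unit of weight to the edge between them:

```python
# Minimal sketch of building the journal-reviewer coupling network from
# (reviewer, journal) records. Labels are invented for illustration.
from collections import defaultdict
from itertools import combinations

reviews = [  # deduplicated (reviewer_id, journal) pairs
    ("r1", "J1"), ("r1", "J2"),
    ("r2", "J1"), ("r2", "J2"), ("r2", "J3"),
    ("r3", "J3"),
]

# Collect the set of journals each reviewer has reviewed for
journals_by_reviewer = defaultdict(set)
for reviewer, journal in reviews:
    journals_by_reviewer[reviewer].add(journal)

# Edge weight = number of reviewers shared by the two connected journals
edge_weight = defaultdict(int)
for journals in journals_by_reviewer.values():
    for a, b in combinations(sorted(journals), 2):
        edge_weight[(a, b)] += 1
```

Loading `edge_weight` into a weighted undirected graph then allows the centrality measures mentioned above to be computed with standard library calls.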

2.2.2. Statistics for reviewers

While Publons houses detailed reviewer information, the data used in this study were anonymized for privacy reasons. Nevertheless, we requested that Publons infer reviewers’ gender and country of affiliation using the imputation algorithm developed by Larivière et al. (2013). Specifically, we first sent Publons a gender classification algorithm containing the probability of a particular name belonging to a woman or a man in each country (the same name may have different gender classifications in different countries). The algorithm was developed using U.S. census data and country-specific name lists and has been applied in multiple previous studies (Kozlowski et al., 2022; Larivière et al., 2013; Larivière, Vignola-Gagné et al., 2011). More details and method validation are available in Larivière et al. (2013). Publons applied our algorithm to their database, where reviewers’ names and countries of affiliation are available. For reviewers with multiple affiliations, Publons used only the primary country of affiliation. When a reviewer’s name and country of affiliation in the Publons database matched our algorithm, Publons provided us with the reviewer’s gender and country information. In other words, we have country of affiliation information for all reviewers whose gender categories we were able to identify. We excluded reviewers who could not be assigned a binary gender using this method. Because inferred results were included in the returned data set only when they agreed with the reviewer’s profile data, the accuracy of the reviewer information is likely high. Using these two reviewer features, we explored their relationship with the current global peer review network.
To examine the country diversity of reviewers, we measured the Gini index of the number of reviewers across representative reviewer countries, defined as those with at least 500 reviewers in our sample (52 countries), using Eq. 1.
$$G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \left| x_i - x_j \right|}{2 n^2 \bar{x}} \tag{1}$$
where $x_i$ is the number of reviewers from country $i$ normalized by the total number of reviewers in our sample, $n$ is the total number of countries, and $\bar{x}$ is the average normalized number of reviewers across countries. A higher Gini index indicates greater inequality in the country profiles of reviewers for the field.
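Eq. 1 can be computed directly from the pairwise absolute differences; a self-contained sketch with invented reviewer counts (the real inputs would be the 52 per-country reviewer counts for a field):

```python
# Gini index over per-country reviewer counts (pairwise form of Eq. 1).
# The input counts below are illustrative, not from the study's data.
def gini(x):
    n = len(x)
    mean = sum(x) / n
    # sum of |x_i - x_j| over all ordered country pairs
    diff_sum = sum(abs(xi - xj) for xi in x for xj in x)
    return diff_sum / (2 * n * n * mean)

equal = gini([100, 100, 100, 100])    # perfectly equal distribution
skewed = gini([0, 0, 0, 400])         # all reviewers from one country
```

With four countries, the maximally unequal case yields $(n-1)/n = 0.75$, the largest value attainable at this sample size.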

2.2.3. Homophily analysis

We analyzed whether journals in the journal–reviewer coupling network tend to group based on features of the nodes/journals (i.e., the tendency towards homophily). We first analyzed how journals share reviewers with journals within and outside their home fields by calculating the average percentage of within-group degrees (i.e., the weighted degrees for a journal connecting to journals in the same journal group). Formally, for each journal group, we calculate Eq. 2:
$$P = \frac{1}{n} \sum_{i=1}^{n} \frac{w_{i,\text{within}}}{w_i} \tag{2}$$
where $w_{i,\text{within}}$ is journal $i$'s total weighted degree with journals within the same group, $w_i$ is journal $i$'s total weighted degree with all connected journals, and $n$ is the number of journals in that group. A higher average percentage indicates a stronger tendency for journals in this group to share reviewers within the group.
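A toy computation of this within-group share, with invented journal labels, group assignments, and edge weights:

```python
# Sketch of Eq. 2: average share of each journal's weighted degree that
# goes to journals in the same group. All labels and weights invented.
group = {"J1": "Physics", "J2": "Physics", "J3": "Health"}
edges = {("J1", "J2"): 3, ("J1", "J3"): 1, ("J2", "J3"): 2}

def within_group_share(journal):
    w_within = w_total = 0
    for (a, b), w in edges.items():
        if journal in (a, b):
            other = b if a == journal else a
            w_total += w
            if group[other] == group[journal]:
                w_within += w
    return w_within / w_total

# Average the per-journal shares over one group (here, "Physics")
physics = ["J1", "J2"]
avg_physics = sum(within_group_share(j) for j in physics) / len(physics)
```

Here J1 keeps 3 of its 4 units of weight within Physics (0.75) and J2 keeps 3 of 5 (0.60), giving a group average of 0.675.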
To exemplify the varying degrees of homophily among distinct journal groups, we calculated the mixing matrix, which displays the distribution of edge weights between each journal pair affiliated with two different journal groups (Farine, 2014). Each value in the matrix represents the proportion of edge weights corresponding to the edges between two different journal groups out of the total edge weights in the network. Based on the mixing matrix, we then calculated the assortativity coefficient, which represents an overall measure of a node’s propensity to connect with other nodes that share similar attributes within the journal–reviewer coupling network concerning journal field, publication country, and JIF-based ranking (Farine, 2014; Newman, 2003). The assortativity coefficient varies between −1 and 1 and quantifies the degree of association between nodes with alike or distinct features. A positive value approaching 1 signifies that nodes with similar characteristics are more likely to connect (homophily), while a negative value nearing –1 implies that nodes with differing attributes are more prone to establish connections. As our network is undirected, the equation can be written as follows:
$$r = \frac{\sum_{i=1}^{n} e_{ii} - \sum_{i=1}^{n} a_i^2}{1 - \sum_{i=1}^{n} a_i^2}, \quad \text{where } a_i = \sum_{j=1}^{n} e_{ij} \tag{3}$$
where eij is the proportion of total edge weights for all journal pairs between journal groups i and j, and n is the total number of journal groups.
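A minimal sketch of the assortativity computation from a mixing matrix, following Newman's (2003) formula for undirected networks; the 2×2 matrices are toy examples, not the study's observed values:

```python
# Assortativity coefficient from a mixing matrix (cf. Newman 2003).
# mixing[i][j] holds the proportion of total edge weight between groups
# i and j; the example matrices below are invented.
def assortativity(mixing):
    n = len(mixing)
    trace = sum(mixing[i][i] for i in range(n))     # within-group weight
    a = [sum(row) for row in mixing]                # marginals a_i = sum_j e_ij
    a2 = sum(ai * ai for ai in a)
    return (trace - a2) / (1 - a2)

r_homophilous = assortativity([[0.5, 0.0], [0.0, 0.5]])  # all within-group
r_mixed = assortativity([[0.0, 0.5], [0.5, 0.0]])        # all between-group
```

The two extremes recover the bounds described above: all-within-group weight gives +1, all-between-group weight gives −1.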

We also used the block permutation test (Fredrickson & Chen, 2019; Kirch, 2007) to test whether statistics such as the assortativity coefficient and edge weight proportions in the mixing matrix deviated significantly from the baseline values obtained from null distributions that assumed no homophily effects. First, to generate null distributions, we randomly swapped the nodes while keeping the network structure and edge weights constant. We controlled for homophily derived from other features by constraining the swapping within journals with similar attributes (“blocks”). For instance, when testing journal field homophily, we only allowed node swaps with other journals published in the same country and ranked in the same JIF quartile. Then, we compared the statistics derived from the observed network to the corresponding value in each random network. We considered a statistic to be significantly higher or lower than the baseline values in the null distributions if the proportion of random networks showing a more extreme value is below the significance level. We infer homophily effects if there is a higher edge weight proportion than the baseline within a journal group, while a lower edge weight proportion suggests that fewer within-group connections existed.
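The label-swapping step of this permutation scheme might look like the following sketch, where group labels are shuffled only among journals in the same block (all node names, labels, and block assignments are invented):

```python
# Sketch of one block-constrained permutation: shuffle group labels only
# within each block (e.g., journals sharing country and JIF quartile).
# Node names, labels, and blocks are invented for illustration.
import random

def permute_within_blocks(labels, blocks, rng):
    by_block = {}
    for node, block in blocks.items():
        by_block.setdefault(block, []).append(node)
    shuffled = {}
    for nodes in by_block.values():
        perm = nodes[:]
        rng.shuffle(perm)                 # permute nodes inside the block
        for src, dst in zip(nodes, perm):
            shuffled[dst] = labels[src]   # reassign labels within the block
    return shuffled

labels = {"J1": "Physics", "J2": "Health", "J3": "Biology"}
blocks = {"J1": "US-Q1", "J2": "US-Q1", "J3": "UK-Q2"}
null_labels = permute_within_blocks(labels, blocks, random.Random(0))
```

Repeating this swap many times and recomputing the statistic of interest (e.g., the assortativity coefficient) on each permuted network yields the null distribution against which the observed value is compared.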

All computations were conducted in Python. Network analysis was performed with the Python library NetworkX 2.8.4 (Hagberg, Swart, & Schult, 2008). Gephi 0.10.1 was utilized for graphical representations (Bastian, Heymann, & Jacomy, 2009).

3.1. Overview of the Global Peer Review Network

We created a global peer review network that encompasses the review activities of 477,684 individual reviewers across 6,058 journals (Figure 1(A)). The network is based on the journal–reviewer coupling relationship, with two journals being connected through an edge if they share reviewers. The weight of each edge represents the number of reviewers shared by the two connected journals. On average, each reviewer in this network reviewed 2.11 (±2.39) journals, with a median of 1. Each journal shared reviewers with an average of 196.52 (±219.77) other journals in the network, with a median of 131. The mean number of reviewers a journal shares with others is 118.99 (±351.02), with a median of 30. The large standard deviation reflects the significant variability in the number of reviewers shared among journals.

Figure 1.

(A) Peer review network based on the journal–reviewer coupling relationship. Node colors reflect the disciplinary fields of journals. We used the ForceAtlas2 algorithm in Gephi 0.10.1 for the network layout. Journals ranked in the top 10 by at least one centrality (degree, closeness, betweenness, and eigenvector) are labeled. (B) Cumulative distribution of connected reviewers. The x-axis shows the percentage of journals arranged in descending order by the number of reviewers. The y-axis plots the percentage of unique reviewers for the corresponding journals.


Our analysis shows an unequal distribution of reviewers among journals. The top 5% (302) of journals with the highest number of reviewers are connected with 54.66% (261,104) of the total reviewers in our data set. Conversely, 21.11% (1,279) of journals have fewer than 10 reviewers and are connected with only 1.31% (6,234) of total reviewers (Figure 1(B)). The top three journals with the most reviewers are IEEE Access (26,278 reviewers), BMJ Open (13,043 reviewers), and Scientific Reports (11,006 reviewers) (Table S1 in the Supplementary material). Scientific Reports emerges as the most central journal in the network, exhibiting the highest weighted degree centrality, closeness centrality, and betweenness centrality. Other journals ranking high in centrality measures include PLOS ONE, IEEE Access, Nature Communications, and BMJ Open. Most of these are multidisciplinary open-access journals, which often publish more articles and connect to researchers from various fields and locations, helping them sustain a rich reviewer pool for reviewer selection. The network highlights the importance of considering disciplinary expertise when selecting reviewers and suggests that the peer review process may also benefit from collaboration and communication across fields.

3.1.1. Disciplinary field

Our analysis of the journal–reviewer coupling network confirmed that same-field journals tend to be closer in the network layout, indicating that journals within the same field share more common reviewers and have stronger connections due to the overlap in their reviewer pools (Figure 1(A)). The network also highlighted distinct boundaries between fields, with some multidisciplinary journals bridging different fields. While it is unsurprising that the peer review system is organized by fields, this arrangement indirectly supports the validity of our network, as selecting reviewers with expertise that matches the subject area is crucial in the review process. Additionally, we found that Clinical Medicine is the largest field in terms of both the number of journals (1,539) and reviewers (155,039), followed by Engineering (Figures 2(A) and 2(B)). After accounting for the number of journals, we still observe that the distribution of reviewers is skewed (Table S2 in the Supplementary material). Clinical Medicine (73.55%) and Engineering (73.94%) also have the highest shares of reviewers who only review for journals within their respective fields. We refer to reviewers who have reviewed exclusively for journals in a single group (a specific field or a particular country) as field-specific and country-specific reviewers at the respective levels. This classification is independent of the reviewers’ country of affiliation. In contrast, other fields have much lower percentages of field-specific reviewers (Figure 2(B)).

Figure 2.

Number of journals and reviewers. (A), (C), (E): Number of journals by journal field, journal country, and JIF rank. (B), (D), (F): Number of reviewers and reviewers who exclusively review for a specific journal group by journal field, journal country, and JIF rank. Line plots show percentages (%) of specific reviewers out of all reviewers.


3.1.2. Journal country

We also analyzed the reviewers of journals based on the countries where the respective journals are published, as documented by Clarivate Analytics. We focused on the top 10 countries in terms of the number of journals published: the United States, United Kingdom, Netherlands, Germany, Switzerland, Australia, China, Brazil, Denmark, and Japan, which together account for 90.41% of the journals in our sample (Figure 2(C)). We found that U.S. journals (2,122) and U.K. journals (2,031) had the largest number of reviewers, with U.S. journals hosting 138,063 reviewers and U.K. journals hosting 150,796 reviewers (Figure 2(D) and Table S2 in the Supplementary material). Some reviewers only review journals located in a specific country, known as country-specific reviewers. We observed that 57.09%, 56.38%, and 53.87% of reviewers for U.S., U.K., and Brazil-based journals, respectively, were country-specific reviewers. Conversely, Switzerland (18.36%) had the smallest share of country-specific reviewers.

3.1.3. JIF rank

Figures 2(E) and 2(F) show that from Q1 to Q4, the total number of journals decreases (Q1: 1,860; Q4: 821), as do the total number of reviewers (Q1: 223,773; Q4: 81,564) and the rank-specific reviewer share (Q1–Q4: 49.71%, 44.86%, 44.06%, 43.56%). We identify rank-specific reviewers as individuals who review solely for journals classified within a specific JIF-based tier. The network has lower coverage of Q3–Q4 journals: For instance, 40.34% of the excluded journals with fewer than five reviewers were ranked in Q3–Q4. Additionally, the mean number of reviewers per journal decreased across JIF quartiles, with Q1 journals having the most reviewers and Q4 journals the fewest (Table S2 in the Supplementary material). This implies that journals with higher JIFs tend to attract more reviewers. However, we acknowledge that the network’s lower coverage of Q3–Q4 journals may also reflect the varying levels of Publons’ coverage of different journals or publishers, as noted in Section 2.1.

3.2. Reviewer Gender Distribution

Our analysis revealed a notable gender disparity in the reviewer pool, with men (141,728) comprising approximately 69.48% of the reviewers whose gender could be identified. On average, women reviewed 2.33 (±2.37) journals, while men reviewed 2.75 (±3.24) journals. Of the 5,064 journals with five or more reviewers, 90.52% (4,584) had less than half of their reviewers being women. Specifically, women represented less than 20% of all reviewers in Physics (15.63%), Mathematics (16.64%), and Engineering (17.16%), while accounting for more than 40% in Health (53.50%), Psychology (46.15%), and Professional Fields (40.86%) (Figure 3(A)). Moreover, women’s share in field-specific reviewers was higher relative to all reviewers across all fields. For instance, women represented 62.80% and 51.85% of field-specific reviewers in Health and Psychology, respectively, compared to 53.50% and 46.15% of all reviewers in these fields. Our findings suggest that women’s peer review experiences are more likely to be confined to a single field compared to men’s.

Figure 3.

Women reviewers’ proportion in the reviewer population. (A), (C), (E): Average percentage of women reviewers by journal field, journal country, and JIF rank. Journals with fewer than five gender-identified reviewers were excluded. (B), (D), (F): Percentage of women reviewers by journal field, journal country, and JIF rank.


The proportion of women reviewers varies with the country in which journals are published. The highest shares of women reviewers were found in journals published in Brazil (34.37%), Australia (33.90%), Denmark (32.43%), and the United Kingdom (30.61%) (Figure 3(B)). Conversely, the lowest shares were observed in journals published in China (20.55%) and Japan (23.17%). Among country-specific reviewers, the percentages of women are more than 5 percentage points higher for Switzerland (+10.45), Denmark (+8.00), Brazil (+6.17), the Netherlands (+5.56), and Germany (+5.30), whereas for China (+1.85), the United States (+0.97), and Japan (−0.52), the differences are below 2 percentage points or even negative.

Grouping the data by the JIF rank of journals revealed that the percentage of women reviewers was lowest for Q1 journals (28.82%), with the smallest gap (1.18 percentage points) between all reviewers and rank-specific reviewers (Figure 3(C)). Conversely, Q2 journals had the highest percentage of women reviewers (30.34%). Q4 journals had the highest percentage of women among rank-specific reviewers (33.65%) and the largest difference (4.24 percentage points) between the percentages of women among all reviewers and among rank-specific reviewers.

As it is likely that journal editors tend to select reviewers from authors who have published in their own or similar journals, the distribution of reviewers might be close to the author distribution. To test this, we collected the publications of each journal in our sample from 2015 to 2020 and recorded the first and last authors’ names and affiliations. We disambiguated the authors to keep unique authors. Using the same gender and country assignment methods, we identified 4,722,875 authors’ gender and affiliation countries and obtained the characteristics of journals’ author groups. The reviewer and author groups have high Spearman’s rank correlations of gender and country distributions (gender: ρ = 0.632, p < 0.001; country: ρ = 0.663, p < 0.001).
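The reviewer–author agreement reported above rests on Spearman's rank correlation. A self-contained sketch of that computation is shown below; the per-journal shares are invented placeholders, not values from our data set:

```python
def rankdata(xs):
    """Assign 1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # mean rank of positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-journal women's shares among reviewers and authors.
reviewer_share = [0.32, 0.18, 0.45, 0.27, 0.51, 0.22, 0.39, 0.30]
author_share = [0.30, 0.20, 0.48, 0.25, 0.47, 0.21, 0.35, 0.33]
rho = spearman_rho(reviewer_share, author_share)
```

In practice a library routine such as `scipy.stats.spearmanr` would also supply the p-value reported in the text.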

We further normalized the reviewer’s gender and country distributions by calculating the following relative ratio:

RRij = (Rij/TRi) / (Aij/TAi),  (4)

where Rij and Aij are the numbers of reviewers and authors with characteristic j (such as being women or being based in the United States) for journal i, and TRi and TAi are the total numbers of reviewers and authors for journal i, respectively. We compared the relative ratios with the women’s proportions and found that the resulting distribution is similar to the original distribution displayed in the main analysis (Figure S1 in the Supplementary material). Additionally, to test for differences, we conducted Chi-square tests comparing the percentages of women reviewers with the relative ratios. The results indicate no significant differences (journal field: χ2 = 0.030, p > 0.05; journal country: χ2 = 0.008, p > 0.05; journal rank: χ2 = 0.000, p > 0.05). Therefore, although authors’ characteristics are associated with reviewers’ characteristics, this association does not change the conclusions of our analysis.
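A minimal sketch of this normalization, assuming the relative ratio of Eq. 4 is the reviewer proportion of characteristic j divided by the corresponding author proportion (the counts below are hypothetical):

```python
def relative_ratio(r_j, total_r, a_j, total_a):
    """Reviewer share of characteristic j, normalized by the author share.

    Values above 1 mean the characteristic is overrepresented among
    reviewers relative to the journal's author pool.
    """
    return (r_j / total_r) / (a_j / total_a)

# Hypothetical journal: 30 of 100 reviewers and 40 of 160 authors are women.
rr = relative_ratio(30, 100, 40, 160)  # 0.30 / 0.25 = 1.2
```

A ratio of 1.2 here would indicate that women are slightly overrepresented among this journal's reviewers relative to its authors.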

3.3. Reviewer Country Distribution

Of the 251,806 reviewers whose country of affiliation was identified, the majority (16.44%) were from the United States, followed by China (10.58%) and the United Kingdom (6.05%) (Table S3 in the Supplementary material). The average number of reviewed journals varied by reviewer country, with reviewers from Germany (3.22 ± 3.98), Australia (3.08 ± 3.35), and Italy (2.98 ± 3.56) reviewing the most journals on average. At the continent level, most reviewers came from Europe (37.18%), followed by Asia (28.57%) and North America (20.02%), while Africa had the smallest share (2.89%). When analyzing reviewer behavior by global region, we found that reviewers from the Global North, including Oceania (3.08 ± 3.34), Europe (2.76 ± 3.23), and North America (2.75 ± 2.95), reviewed more journals than their counterparts in other regions on average. Overall, the United States, the United Kingdom, Italy, and Australia are high in both the average number of reviewed journals and the number of reviewers (Figure S2 in the Supplementary material). The number of reviewers is correlated with the number of journals published in a country (Pearson r = 0.69, p < 0.001). Nevertheless, even accounting for the number of journals, we still observe that the distribution of reviewers is different across journal groups (Table S2 in the Supplementary material).

Our findings highlight that the level of inequality in reviewer countries varies by field, as measured by the Gini index. As depicted in Figure 4(A), Psychology, Health, and Humanities have high Gini indices of 0.35, 0.32, and 0.29, respectively, indicating a greater imbalance in the distribution of reviewer countries in these fields. The Gini indices for field-specific reviewers were generally higher than those based on all reviewers in every field except Psychology, indicating even greater inequality. These findings provide insights into the diversity and representation of reviewers across fields.
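The Gini index over reviewer-country counts can be computed with the standard discrete formula; this is a generic sketch rather than the exact implementation used in the study:

```python
def gini(counts):
    """Gini index of a distribution of reviewer counts across countries.

    0 means reviewers are spread perfectly evenly across countries;
    values approaching 1 mean reviewers are concentrated in one country.
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    # G = (2 * sum_i i * x_i) / (n * total) - (n + 1) / n, with i 1-based
    # over the counts sorted in ascending order.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

even = gini([10, 10, 10, 10])        # 0.0: perfectly even
concentrated = gini([0, 0, 0, 100])  # 0.75: all reviewers from one of four countries
```

As in the figure caption, only countries passing a minimum-reviewer threshold (at least 500 reviewers in the sample) would enter `counts`.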

Figure 4.

Gini index of reviewers’ country by (A) journal field, (B) journal country, and (C) JIF rank. The Gini index was calculated based on the number of reviewers for each representative reviewer’s country, which has at least 500 reviewers in our sample.


We also examined the diversity of reviewer countries by journal country. As shown in Figure 4(B), the Gini indices of two journal countries, Brazil (0.61) and Japan (0.47), are comparatively high. All journal countries show an increase in the Gini index after restricting to country-specific reviewers, with Brazil and Japan increasing the most (Brazil: from 0.61 to 0.79; Japan: from 0.47 to 0.70). The pronounced increase may reflect the local background of most of their reviewers: 65.48% and 44.79% of reviewers serving Brazil- and Japan-published journals are Brazilian and Japanese reviewers, respectively (see Figure S3 in the Supplementary material for each journal country’s top five reviewer countries by reviewer proportion). Among country-specific reviewers, those two percentages rise to 81.61% and 63.52%, indicating that a large proportion of these countries’ reviewers only review for their local journals. In addition, Australian and Chinese journals also drew most heavily on local reviewers (Australia: 29.8%; China: 31.0%).

Reviewer countries are also disproportionately represented across JIF ranks, although the Gini indices are mostly low at this level. Dividing the journals into quartiles by JIF rank, the Gini indices for Q2 (0.07) and Q3 (0.07) were lower than those for Q1 (0.12) and Q4 (0.14) (Figure 4(C)), suggesting that reviewer countries are more diverse in the middle two JIF ranks. However, the Gini indices for JIF rank-specific reviewers are higher than those for all reviewers, especially for Q3 and Q4, indicating that the rank-specific reviewers in these two quartiles are less diverse.

Considering the characteristics of journals’ author groups, we calculated the relative ratio between reviewers and authors for each journal country group using Eq. 4. Based on the relative ratio, we calculated a new Gini index for each journal country group. Comparing the original and updated Gini indices, we also found similar distributions (Figure S4 in the Supplementary material). It suggests that the above observations still hold after considering the author-related factors.

3.4. Homophily: Do Similar Journals Tend to Share Reviewers?

We analyzed the degree to which journals in the journal–reviewer coupling network share reviewers with other journals in the same group based on field, publishing country, and JIF rank. To measure this, we first calculated the average percentage of a journal’s weighted degree that links to other journals in the same group. Aggregating by field, Clinical Medicine (65.95%), Engineering (62.15%), Social Science (61.12%), and Biology (56.46%) journals have higher mean percentages of within-field weighted degrees, indicating a greater likelihood of sharing reviewers with other journals in their field (Figure 5(A)). By journal country, U.S.-based (41.71%) and U.K.-based (46.71%) journals have markedly higher within-country weighted degrees than journals from other countries (Figure 5(C)). Finally, within-rank weighted degrees decreased as JIF rank declined from Q1 (43.60%) to Q4 (11.27%), implying that high-ranking journals are more likely to share their reviewers with similarly ranked journals (Figure 5(E)).
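The within-group weighted-degree share can be sketched as follows on a toy journal co-reviewer network; edge weights would correspond to counts of shared reviewers, and the journal names and groups are hypothetical:

```python
from collections import defaultdict

def within_group_share(edges, group):
    """For each journal, the share of its weighted degree linking to
    journals in the same group.

    edges: iterable of (u, v, weight) undirected edges
    group: journal -> group label (field, country, or JIF quartile)
    """
    total = defaultdict(float)
    within = defaultdict(float)
    for u, v, w in edges:
        total[u] += w
        total[v] += w
        if group[u] == group[v]:
            within[u] += w
            within[v] += w
    return {n: within[n] / total[n] for n in total}

# Toy network: three journals, two fields; weights = shared reviewers.
edges = [("J1", "J2", 3.0), ("J1", "J3", 1.0), ("J2", "J3", 1.0)]
group = {"J1": "Biology", "J2": "Biology", "J3": "Physics"}
shares = within_group_share(edges, group)  # J1: 3/4 = 0.75
```

Averaging these per-journal shares within each group yields the percentages reported for Figures 5(A), 5(C), and 5(E).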

Figure 5.

Homophily analysis results. (A), (C), (E): Average percentage of within-journal-group degrees by journal field, journal country, and JIF rank. (B), (D), (F): Mixing matrices and block permutation test results by journal field, journal country, and JIF rank. Assortativity coefficients and corresponding p-values are included on the upper side.


We also used the assortativity coefficient to measure the correlation between nodes that share similar attributes. The assortativity coefficients based on journal field, country, and rank are 0.480 (p < 0.001), 0.135 (p < 0.001), and 0.085 (p < 0.001), respectively. The positive tendency toward homophily is statistically significant across all three journal characteristics, suggesting that journals in the same group (based on field, country, or JIF quartile) are more likely to share reviewers with one another. However, the strength of homophily varies: The journal field exhibits the strongest homophily effect, while the effects for journal country and JIF rank are weaker.
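Newman's discrete assortativity coefficient can be computed directly from the weighted mixing matrix. The sketch below uses hypothetical toy networks; in practice a library routine such as NetworkX's `attribute_assortativity_coefficient` serves the same purpose:

```python
def assortativity(edges, group):
    """Newman's discrete assortativity coefficient for a weighted,
    undirected edge list.

    edges: iterable of (u, v, weight); group: node -> attribute label.
    Returns 1 for perfect within-group mixing, negative values for
    disassortative mixing.
    """
    labels = sorted({g for g in group.values()})
    idx = {g: i for i, g in enumerate(labels)}
    k = len(labels)
    e = [[0.0] * k for _ in range(k)]
    total = 0.0
    for u, v, w in edges:
        i, j = idx[group[u]], idx[group[v]]
        e[i][j] += w          # each undirected edge counted in both directions
        e[j][i] += w
        total += 2 * w
    for i in range(k):
        for j in range(k):
            e[i][j] /= total
    trace = sum(e[i][i] for i in range(k))
    a = [sum(row) for row in e]                              # row sums
    b = [sum(e[i][j] for i in range(k)) for j in range(k)]   # column sums
    ab = sum(ai * bi for ai, bi in zip(a, b))
    return (trace - ab) / (1 - ab)

# Perfectly assortative toy network: edges only within groups.
r_within = assortativity([("a1", "a2", 1.0), ("b1", "b2", 1.0)],
                         {"a1": "X", "a2": "X", "b1": "Y", "b2": "Y"})  # 1.0
```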

The mixing matrices, which show the proportions of edge weights relative to the total weight, visualize the tendency of homophily. Concerning the journal field, the edge weights for same-field journals (the diagonal) appear dense, while the weights for different-field journals are relatively lower (Figure 5(B)). Regarding journal country and JIF rank, most journals show higher edge weights with U.S./U.K. and Q1/Q2 than with their respective countries or ranks (Figures 5(D) and 5(F)). This could be attributed to the large proportion of U.S. and U.K.-published and Q1/Q2 journals in the sample. This aligns with our previous finding that most journals are placed closer to journals within their field, indicating the disciplinary boundaries for selecting reviewers.

However, the block permutation test demonstrated that the homophily effect still exists for the three journal attributes. As Figure 5(B) shows, apart from Arts and Humanities, the edge weights between same-field journals are significantly denser than the baseline values (as indicated by the red cell borders along the diagonal). In contrast, the edge weights between most pairs of different-field journals are lower than expected, suggesting that these journals share fewer reviewers (as shown by the green cell borders). Similarly, seven out of 10 major journal countries and the Q1 and Q2 journal ranks exhibit the homophily effect, consistent with the assortativity coefficients (Figures 5(D) and 5(F)).
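The logic of a permutation test against a label-shuffled baseline can be illustrated as follows. This is a simplified sketch: it shuffles group labels freely, whereas the block permutation test used in the study preserves block structure; the network below is hypothetical:

```python
import random

def same_group_weight(edges, group):
    # Total edge weight between journals sharing a group label.
    return sum(w for u, v, w in edges if group[u] == group[v])

def permutation_pvalue(edges, group, n_perm=2000, seed=42):
    """One-sided test: is the within-group edge weight larger than
    expected when group labels are shuffled across journals?"""
    observed = same_group_weight(edges, group)
    nodes = list(group)
    labels = [group[n] for n in nodes]
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if same_group_weight(edges, dict(zip(nodes, labels))) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction

# Two tight same-field cliques with no cross-field edges (hypothetical).
edges = [("a1", "a2", 1.0), ("a1", "a3", 1.0), ("a2", "a3", 1.0),
         ("b1", "b2", 1.0), ("b1", "b3", 1.0), ("b2", "b3", 1.0)]
group = {n: "Biology" if n.startswith("a") else "Physics"
         for n in ["a1", "a2", "a3", "b1", "b2", "b3"]}
p = permutation_pvalue(edges, group)
```

For this tiny network, only 2 of the 20 possible label assignments reproduce the observed within-group weight, so the p-value lands near 0.1; with larger networks the same logic yields the cell-level significance marks in Figure 5.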

Nevertheless, the homophily effect does not hold for all journal groups. The results show that journals published in the Netherlands and those with low JIF ranks (Q3 and Q4) have less overlap in their reviewer pools, in contrast to the homophilic tendencies observed for journals from other countries and JIF ranks. Despite ranking third in the number of journals published (436) in our sample, journals published in the Netherlands were reviewed by only 11,992 reviewers, 60.7% fewer than the fourth-ranked country, Germany. Furthermore, the median weighted degree for Netherlands-published journals is the lowest among the 10 major countries, and the median weighted degrees for low-rank journals are also lower than those for Q1–Q2 journals. These findings indicate a more dispersed distribution of reviewers among journals, which may reduce the likelihood of encountering the same reviewers.

This study examines both the skewness and homophily of the global peer review network by analyzing the relationships between journals and reviewers using a large data set from Publons, encompassing various disciplines. We specifically focused on exploring the diversity of reviewers across different journal groups and whether journals tend to select reviewers with similar attributes, which is known as the “homophily effect.” The study aims to deepen our understanding of the effectiveness and operation of the universal peer review system in scientific publishing and to provide insights into potential areas for improvement. This research seeks to shed light on the complex dynamics of the global peer review network and to contribute to ongoing discussions about the academic peer review process.

Our study has shown an uneven distribution of peer reviewers across journals. We found that approximately 5% of journals assigned review tasks to more than half of the total reviewers in our sample. A small group of multidisciplinary open-access journals has emerged as central “hubs” in the journal–reviewer coupling network, suggesting that they are major consumers of reviewer resources and may exacerbate the unevenness of the reviewer distribution among journals. The skewness of the reviewer distribution is also notable at the journal field, journal country, and JIF rank levels, even after accounting for the distribution of journal numbers.

Our research also looked at a subgroup of reviewers who only review for journals in one field, country, or JIF rank, also known as the field, country, or rank-specific reviewers, respectively, or “loyal” reviewers. We found that the proportion of these “loyal” reviewers varied by the journal groups, with some having higher shares than others. Biomedical Research journals, Switzerland-based journals, and journals in JIF Q4 had the lowest shares of “loyal” reviewers among fields, countries, and JIF ranks, respectively. Although the overall shares of “loyal” reviewers are not extremely low, these findings suggest that some journals may need to broaden their reviewer pool to ensure a more diverse and representative peer review process.

Reviewer background diversity also exhibits skewness. We found that women are disproportionately underrepresented among reviewers of journals in almost all fields. Furthermore, women reviewers tend to review fewer journals than men, which aligns with previous findings about the underrepresentation of women in the peer review process (Helmer et al., 2017; Zhang et al., 2022). Specifically, Physics, Mathematics, and Engineering have the lowest shares of women reviewers, while Health, Psychology, and Professional Fields have relatively higher shares. Journals published in countries like Brazil, Australia, and Denmark have larger shares of women reviewers. Journals in the highest JIF rank, Q1, have the lowest share of women reviewers among all journal ranks. Although women are often found to take on less visible and less rewarded academic activities, peer review may be an exception: It can carry significant prestige, be recognized in research evaluations, and inform reviewers of peers’ recent research (Zaharie & Osoian, 2016). These rewards can benefit reviewers’ professional and career development. Moreover, peer review is also time consuming and can disproportionately burden women because of their heavier teaching, administrative, and familial responsibilities (Xie & Shauman, 2005; Zheng, Yuan, & Ni, 2022). In addition, biases in the selection and invitation of peer reviewers may hinder women’s participation. Women researchers are known to be disadvantaged in academic productivity, impact, reputation, and credit (Larivière et al., 2013; Ni et al., 2021), which can limit their opportunities to be invited as reviewers. The predominance of men in editorial roles may also contribute to a preference for male reviewers due to homophily (Helmer et al., 2017).

Moreover, women’s share among the “loyal” reviewers is higher than among all reviewers across most journal groups, suggesting that women’s peer review profiles tend to be more confined to a specific field or country of journals than men’s. A possible explanation is that women may be less represented in interdisciplinary fields, especially in biology and medical sciences (Pfirman & Laubichler, 2023). However, this explanation is not fully supported in every discipline. For example, a recent study found that the presence of women in scientific publications is positively associated with interdisciplinarity (Pinheiro, Durning, & Campbell, 2022). Further study is needed on this matter.

The global peer review network exhibits skewness based on reviewers’ country of affiliation, with a large proportion of reviewers originating from the United States. The Gini index indicates that Psychology, Health, and Humanities have the least diverse reviewer-country distributions among all fields; Brazil and Japan have the least diverse distributions among all journal countries; and Q1 and Q4 have the least diverse distributions among all JIF ranks. The diversity of reviewers decreases further when examining “loyal” reviewers, indicating that certain countries provide a disproportionate number of reviewers for specific journal groups. For example, Brazil- and Japan-published journals tend to have more country-specific reviewers from their respective countries.

Moreover, our homophily analysis, based on mixing matrices, confirms that journals within the same field tend to connect with denser edges. After accounting for network size and features, the assortativity coefficients and block permutation tests further reveal that the homophily effect is significant for most fields, most countries, and Q1–Q2 journals. This finding suggests that journals sharing certain attributes are more likely to exchange reviewers than would occur at random.

Our findings on gender and country-specific reviewer participation highlight deeper cultural issues within academia. The underrepresentation of women and reviewers from non-Western countries may shape the cultural values reflected in journal publications. Moreover, the dominance of journals based in countries like the United States and United Kingdom within the global peer review network raises the risk of cultivating an academic culture that favors certain regions and disciplines over others. These patterns show the need to promote cultural inclusivity and acknowledge the critical role of diverse perspectives in evaluating scholarly work.

The study’s focus on homophily also relates to reputational considerations. Journals from similar countries or ranking tiers are more likely to share reviewers, potentially reinforcing cycles of prestige and exclusivity. This has important implications for how academic reputations are constructed and maintained. It suggests that the reputational dynamics of journals may influence reviewer selection in ways that restrict diversity in the reviewer pool and perpetuate hierarchies within academic publishing.

Our research shows the critical importance of fostering diversity and inclusion within the reviewer selection process, highlighting the significance of enriching the evaluation of scientific research through a wider lens of perspectives and expertise. The current skewed structure of the reviewer population and homophily can threaten the integrity of the peer review process, particularly when assessing interdisciplinary work. Addressing the disparities in reviewer distribution across different geographical locations and journal tiers is beneficial for a more equitable and high-caliber peer review system. Moreover, our findings encourage a diverse array of voices in the review process to champion fairness and inclusivity within the scientific community and ensure that the invaluable insights of underrepresented groups are heard and integrated into the evaluation of scientific research. In the contemporary academic environment, where the challenges of recruiting a balanced and connected pool of reviewers are more evident, it is more critical than ever to consider such diversity in peer review and the production and dissemination of knowledge (Gaston & Smart, 2018; Helmer et al., 2017).

To the best of our knowledge, this study constitutes the first exploratory analysis of the global journal–reviewer coupling network using Publons’ data set. However, because of the potential biases in Publons’ data, limited identification of reviewer gender and country, and undocumented affiliation changes of reviewers, our analysis should be regarded as preliminary, and the results need more rigorous testing. Possible future research includes tracking the temporal changes in reviewers’ affiliations and constructing a publication network connected by common reviewers, which may better capture reviewer network patterns and explore their relationships with more detailed publication-level attributes, such as publication topics and texts. Testing the robustness of our study’s conclusions using alternative publication-index data sets, field classifications, and journal evaluation results as proxies for journal ranks may also be valuable. Moreover, our research does not include individual reviews and their contents in the analysis. While exploring the network relationships among journals, reviewers, and reviews presents a compelling research avenue, its scope exceeds the confines of a single paper. Therefore, we leave this concept for future investigation.

We thank Clarivate Analytics for providing the Publons peer review data set and Observatoire des Sciences et des Technologies at the University of Quebec in Montreal for access to the Web of Science data. We also thank Jiajing Chen for her assistance with data processing at the early stages of the project.

Xiang Zheng: Conceptualization, Formal analysis, Investigation, Methodology, Visualization, Writing—original draft, Writing—review & editing. Chaoqun Ni: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Validation, Writing—original draft, Writing—review & editing.

The authors have no competing interests.

No funding has been received for this work.

Due to restrictions imposed by the data usage agreement with Publons, we are unable to share the data set. All analysis code can be accessed on our GitHub repository at https://github.com/MetascienceLab/ReviewerNet.

Archambault, É., Campbell, D., Gingras, Y., & Larivière, V. (2009). Comparing bibliometric statistics obtained from the Web of Science and Scopus. Journal of the American Society for Information Science and Technology, 60(7), 1320–1326.
Arroyo-Machado, W., Torres-Salinas, D., Herrera-Viedma, E., & Romero-Frías, E. (2020). Science through Wikipedia: A novel representation of open knowledge through co-citation networks. PLOS ONE, 15(2), e0228713.
Bastian, M., Heymann, S., & Jacomy, M. (2009). Gephi: An open source software for exploring and manipulating networks. Proceedings of the International AAAI Conference on Web and Social Media, 3(1), 361–362.
Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45(1), 197–245.
Breuning, M., Backstrom, J., Brannon, J., Gross, B. I., & Widmeier, M. (2015). Reviewer fatigue? Why scholars decline to review their peers’ work. PS: Political Science & Politics, 48(4), 595–600.
Cell Editorial Team. (2021). Building and supporting identity in peer review. Cell, 184(20), 5071–5072.
Dumlao, J. M. Z., & Teplitskiy, M. (2025). Lack of peer reviewer diversity advantages scientists from wealthier countries. SocArXiv.
Farine, D. R. (2014). Measuring phenotypic assortment in animal social networks: Weighted associations are more robust than binary edges. Animal Behaviour, 89, 141–153.
Fox, C. W., Burns, C. S., & Meyer, J. A. (2016). Editor and reviewer gender influence the peer review process but not peer review outcomes at an ecology journal. Functional Ecology, 30(1), 140–153.
Fredrickson, M. M., & Chen, Y. (2019). Permutation and randomization tests for network analysis. Social Networks, 59, 171–183.
García, J. A., Rodriguez-Sánchez, R., & Fdez-Valdivia, J. (2015). Bias and effort in peer review. Journal of the Association for Information Science and Technology, 66(10), 2020–2030.
García-Costa, D., Squazzoni, F., Mehmani, B., & Grimaldo, F. (2021). Measuring the developmental function of peer review: A multi-dimensional, cross-disciplinary analysis of peer review reports from 740 academic journals (SSRN Scholarly Paper No. 3912607).
Gaston, T., & Smart, P. (2018). What influences the regional diversity of reviewers: A study of medical and agricultural/biological sciences journals. Learned Publishing, 31(3), 189–197.
Gehanno, J.-F., Ladner, J., Rollin, L., Dahamna, B., & Darmoni, S. J. (2011). How are the different specialties represented in the major journals in general medicine? BMC Medical Informatics and Decision Making, 11, 3.
Gerwing, T. G., Allen Gerwing, A. M., Avery-Gomm, S., Choi, C.-Y., Clements, J. C., & Rash, J. A. (2020). Quantifying professionalism in peer review. Research Integrity and Peer Review, 5(1), 9.
Hagberg, A., Swart, P. J., & Schult, D. A. (2008). Exploring network structure, dynamics, and function using NetworkX (No. LA-UR-08-05495; LA-UR-08-5495). Los Alamos National Laboratory, Los Alamos, NM. https://www.osti.gov/biblio/960616
Harnad, S. (1998). The invisible hand of peer review. Nature, November 5.
Helmer, M., Schottdorf, M., Neef, A., & Battaglia, D. (2017). Gender bias in scholarly peer review. eLife, 6, e21718.
Hojat, M., Gonnella, J. S., & Caelleigh, A. S. (2003). Impartial judgment by the “gatekeepers” of science: Fallibility and accountability in the peer review process. Advances in Health Sciences Education, 8(1), 75–96.
Johnson, R., Watkinson, A., & Mabe, M. (2018). The STM report: An overview of scientific and scholarly publishing 1968–2018. International Association of Scientific, Technical and Medical Publishers.
Kelly, J., Sadeghieh, T., & Adeli, K. (2014). Peer review in scientific publications: Benefits, critiques, & a survival guide. EJIFCC, 25(3), 227–243.
Kirch, C. (2007). Block permutation principles for the change analysis of dependent data. Journal of Statistical Planning and Inference, 137(7), 2453–2474.
Kovanis, M., Porcher, R., Ravaud, P., & Trinquart, L. (2016). The global burden of journal peer review in the biomedical literature: Strong imbalance in the collective enterprise. PLOS ONE, 11(11), e0166387.
Kozlowski, D., Larivière, V., Sugimoto, C. R., & Monroe-White, T. (2022). Intersectional inequalities in science. Proceedings of the National Academy of Sciences, 119(2), e2113067119.
Larivière, V., Gingras, Y., Sugimoto, C. R., & Tsou, A. (2015). Team size matters: Collaboration and scientific impact since 1900. Journal of the Association for Information Science and Technology, 66(7), 1323–1332.
Larivière, V., Ni, C., Gingras, Y., Cronin, B., & Sugimoto, C. R. (2013). Bibliometrics: Global gender disparities in science. Nature, 504(7479), 211–213.
Larivière, V., Vignola-Gagné, E., Villeneuve, C., Gélinas, P., & Gingras, Y. (2011). Sex differences in research funding, productivity and impact: An analysis of Québec university professors. Scientometrics, 87(3), 483–498.
Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1), 2–17.
Lei, Y. (2022). Is a journal’s ranking related to the reviewer’s academic impact? (An empirical study based on Publons). Learned Publishing, 35(2), 149–162.
Lin, Z., & Li, N. (2023). Contextualizing gender disparity in editorship in psychological science. Perspectives on Psychological Science, 18(4), 887–907.
McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27, 415–444.
Mueller, P. S., Murali, N. S., Cha, S. S., Erwin, P. F., & Ghosh, A. K. (2006). The association between impact factors and language of general internal medicine journals. Swiss Medical Weekly, 136(27–28), 441–443.
Newman, M. E. J. (2003). Mixing patterns in networks. Physical Review E, 67(2), 026126.
Ni, C., Smith, E., Yuan, H., Larivière, V., & Sugimoto, C. R. (2021). The gendered nature of authorship. Science Advances, 7(36), eabe4639.
Observatoire des Sciences et des Technologies. (2016, August). Stat@OST WEB Interface—Methodological note. https://www.uregina.ca/research/assets/docs/pdf/methodologicalnotes.pdf
Ortega, J. L. (2017). Are peer-review activities related to reviewer bibliometric performance? A scientometric analysis of Publons. Scientometrics, 112(2), 947–962.
Pfirman, S., & Laubichler, M. (2023). Interdisciplinarity, gender, and the hierarchy of the sciences. Quantitative Science Studies, 4(4), 898–901.
Pinheiro, H., Durning, M., & Campbell, D. (2022). Do women undertake interdisciplinary research more than men, and do self-citations bias observed differences? Quantitative Science Studies, 3(2), 363–392.
Publons. (2018). Publons’ global state of peer review 2018. https://publons.com/static/Publons-Global-State-Of-Peer-Review-2018.pdf
Publons. (2022). Journals and conferences. https://publons.com/journal/?partner=1&order_by=reviews
Severin, A., & Chataway, J. (2021). Overburdening of peer reviewers: A multi-stakeholder perspective on causes and effects. Learned Publishing, 34(4), 537–546.
Siler, K., & Larivière, V. (2022). Who games metrics and rankings? Institutional niches and journal impact factor inflation. Research Policy, 51(10), 104608.
Stafford, T. (2018). Reviews, reviewers, and reviewing: The “tragedy of the commons” in the scientific publication process. Communications of the Association for Information Systems, 42(1), 25.
Teixeira da Silva, J. A., & Nazarovets, S. (2022). The role of Publons in the context of open peer review. Publishing Research Quarterly, 38(4), 760–781.
Tennant, J. P. (2020). Web of Science and Scopus are not global databases of knowledge. European Science Editing, 46, e51987.
Tennant, J. P., & Ross-Hellauer, T. (2020). The limitations to our understanding of peer review. Research Integrity and Peer Review, 5, 6.
Warne, V. (2016). Rewarding reviewers—Sense or sensibility? A Wiley study explained. Learned Publishing, 29(1), 41–50.
Xie, Y., & Shauman, K. A. (2005). Women in science: Career processes and outcomes. Cambridge, MA: Harvard University Press.
Yan, X., Bao, H., Leppard, T., & Davis, A. (2024). Cultural ties in American sociology. SocArXiv.
Zaharie, M. A., & Osoian, C. L. (2016). Peer review motivation frames: A qualitative approach. European Management Journal, 34(1), 69–79.
Zhang, L., Shang, Y., Huang, Y., & Sivertsen, G. (2022). Gender differences among active reviewers: An investigation based on Publons. Scientometrics, 127(1), 145–179.
Zheng, X., Chen, J., Tollas, A., & Ni, C. (2023). The effectiveness of peer review in identifying issues leading to retractions. Journal of Informetrics, 17(3), 101423.
Zheng, X., & Ni, C. (2024). The significant yet short-term influence of research covidization on journal citation metrics. Journal of the Association for Information Science and Technology, 75(9), 1002–1017.
Zheng, X., Yuan, H., & Ni, C. (2022). Meta-Research: How parenthood contributes to gender gaps in academia. eLife, 11, e78909.
Zhou, X., Li, Z., Zheng, T., Yan, Y., Li, P., … Uddin, S. M. N. (2018). Review of global sanitation development. Environment International, 120, 246–261.

Author notes

Handling Editor: Li Tang

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
