Table 1. Selected examples of selective citing and biased reporting in Strumia's paper

Cited reference: Caplar et al. (2017)
Strumia's interpretation: "For example, Caplar, Tacchella, and Birrer (2017) claim (consistent with my later findings) that papers in astronomy written by F authors are less cited than papers written by M authors, even after trying to correct for some social factors." (p. 233)
Problems with the interpretation: This is an example of imprecise reporting. In five astronomy journals, papers first-authored by men were cited, on average, approximately 6% more than papers first-authored by women.

Cited reference: Milkman et al. (2015)
Strumia's interpretation: "[L]ooking at gender in isolation (rather than at 'women and minorities'), female students received slightly more responses from public schools (the majority of the sample) with respect to men in the same racial group." (p. 226)
Problems with the interpretation: This is an example of selective reporting. Milkman et al. (2015) report that "faculty were significantly more responsive to White males than to all other categories of students, collectively, particularly in higher-paying disciplines and private institutions." Private universities accounted for 37% of the sample.

Cited reference: Witteman et al. (2019)
Strumia's interpretation: Witteman et al. (2019) "found that female grant applications in Canada are less successful when evaluations involve career-level elements" (p. 226)
Problems with the interpretation: This is an example of selective reporting. Witteman and colleagues (2019) also found that the sex differences in success rates (in grant obtainment) were marginal when reviewers were asked to rate the proposals independently of track record.

Cited references: Xie and Shauman (1998), Levin and Stephan (1998), Abramo et al. (2009), Larivière et al. (2013), Way et al. (2016), Holman et al. (2018)
Strumia's interpretation: "Bibliometric attempts to recognize higher merit […] found that male faculty members write more papers." (p. 226)
Problems with the interpretation: This is an example of imprecise reporting. Xie and Shauman (1998) observe a 20% gap in research productivity in the late 1980s and early 1990s. However, they also find that "most of the observed sex differences in research productivity can be attributed to sex differences in personal characteristics, structural positions, and marital status."
Levin and Stephan (1998) investigate gender differences in publication rates in four disciplines (physics, Earth science, biochemistry, and physiology) and conclude that "in every instance, except the earth sciences, women published less than men, although the difference is statistically significant only for biochemists employed in academe and physiologists employed at medical schools" (p. 1056). The study did not adjust for scientific rank.
In Abramo and colleagues' (2009) study of Italian researchers, female professors and associate professors in the physical sciences had higher publication rates than their male counterparts, while male assistant professors had higher publication rates than their female counterparts (see Tables 7–9 in Abramo et al., 2009).
Larivière et al. (2013) do not compare the average publication rates of women and men.
Way et al. (2016) study publication productivity in computer science from 1970 to 2010 and find that "Productivity scores do not differ between men and women. This is true even when we consider only men and women who moved up the ranks and, separately, men and women who moved down (p > 0.05, Mann–Whitney)" (see Table 2 in Way et al., 2016). However, they find that in the cohort hired after 2002, men have higher average publication rates than women.
Holman and colleagues' (2018) data set does not allow them to directly compare the publication rates of women and men.

Cited reference: Aycock et al. (2019)
Strumia's interpretation: "Various studies focused on discrimination as a possible source of gender differences. Small samples of female physics students were interviewed by Barthelemy, McCormick, and Henderson (2016) and Aycock, Hazari et al. (2019)." (p. 225)
Problems with the interpretation: This is an example of biased reporting. Aycock et al. (2019) report results from a survey of 455 undergraduate women in physics, not a small interview sample. Seventy-five percent of the respondents had experienced at least one type of sexual harassment in a context associated with physics.

Cited reference: Thelwall, Bailey et al. (2018)
Strumia's interpretation: "Large gender differences along the people/things dimension are observed in occupational choices and in academic fields: Such differences are reproduced within sub-fields (Thelwall et al., 2018). In particular, female participation is lower in sub-fields closer to physics, even within fields with their own cultures, such as 'physical and theoretical chemistry' within chemistry (Thelwall et al., 2018). This suggests that the people/things dimension plays a more relevant role than the different cultures of different fields." (p. 248)
Problems with the interpretation: The analysis by Thelwall and colleagues (2018) does not offer any substantial evidence that interest plays a greater role than culture.

Cited references: Gibney (2017), Guarino and Borden (2017)
Strumia's interpretation: "Furthermore, psychology finds that females value careers with positive societal benefits more than do males: (…). Indeed Gibney (2017) finds that women in UK academia report dedicating 10% less time than men to research and 4% more time to teaching and outreach, and Guarino and Borden (2017) finds that women in U.S. non-STEM fields do more academic service than men." (p. 248)
Problems with the interpretation: Here, Strumia links women's heavier teaching and academic-service burdens to an argument about a female propensity to value careers with positive societal benefits. However, none of these factors are highlighted or examined as potential confounders in his own gender comparisons of publication and citation rates.

Cited reference: Handley et al. (2015)
Strumia's interpretation: "Furthermore, fields that study bias might have their own biases: Stewart-Williams, Thomas et al. (2019) and Winegard, Clark et al. (2018) found that scientific results exhibiting male-favoring differences are perceived as less credible and more offensive. Handley, Brown et al. (2015) found that men (especially among STEM faculty) evaluate gender bias research less favorably than women." (p. 247)
Problems with the interpretation: This is an example of biased reporting. Handley et al. (2015) also found that men evaluated an abstract showing gender bias in research evaluations less favorably than a moderated version of the same abstract indicating no gender bias. This latter result, left out of Strumia's paper, counters his argument on this matter.

Cited references: Ceci et al. (2014), Su et al. (2009), Lippa (2010), Hyde (2014), Su et al. (2015), Thelwall (2018b), Stoet et al. (2018)
Strumia's interpretation: "An important clue is that a similar gender difference already appears in surveys of occupational plans and first choices of high-school students (Ceci, Ginther et al., 2014; Xie & Shauman, 2003). This is possibly mainly due to gender differences in interests (Ceci et al., 2014; Hyde, 2014; Lippa, 2010; Stoet & Geary, 2018; Su & Rounds, 2015; Su, Rounds, & Armstrong, 2009; Thelwall, Bailey et al., 2018)." (p. 226)
Problems with the interpretation: This is an example of selective citing. Here, Strumia leaves out a vast literature on how prevalent gendered assumptions in cultural socialization and upbringing operate to steer men toward, and women away from, STEM careers. See, for example, Zwick and Renn (2000), Eccles and Jacobs (1990), Jacobs and Eccles (1992), and Jones and Wheatley (1990).

Cited references: Su et al. (2009), Diekman et al. (2010), Lippa (2010), Su et al. (2015), Thelwall (2018)
Strumia's interpretation: "This suggests extending my considerations from possible sociological issues to possible biological issues. It is interesting to point out that the gender differences in representation and productivity observed in bibliometric data can be explained at face value (one does not need to assume that confounders make things different from what they seem), relying on the combination of two effects documented in the scientific literature: differences in interests (Diekman, Johnson, & Clark, 2010; Lippa, 2010; Su, Rounds, & Armstrong, 2009; Su & Rounds, 2015; Thelwall, Bailey et al., 2018)" … (pp. 247–248)
Problems with the interpretation: This is an erroneous interpretation of the literature. With the exception of Lippa (2010), none of the studies listed here directly relate their findings to biological sex differences. Indeed, Su and Rounds (2015) argue that "while the literature has consistently shown the influence of social contexts (e.g., parents, schools) on students' interest development, particularly the development of differential interests for boys and girls (…), little is known about the link between biological factors (e.g., brain structure, hormones) and interest development."