In 2009 the Journal of Neurophysiology did something brave, honest and, I hope, pioneering. It had nothing to do with the latest in brain biology. Researchers John A. Lane and David J. Linden, also the editor in chief, looked at the fate of submissions to the journal over a 6-month period in 2007, relating each paper's fate (accepted or rejected) to the presumed gender of the first and last authors and of the associate editors and peer reviewers who dealt with it.
The gender audit was relatively simple: score 0 for a man and 1 for a woman, sum the scores, and divide by the number of people involved at each stage of processing a paper. If a paper was assigned to a woman associate editor, it had a gender index of 1 at that stage. A paper peer-reviewed by 2 men and 2 women had a gender index of 0.5. It may have been time-consuming to assign genders to names by internet searches while sensitively allowing for gender definitions that do not fit strictly into the male/female dichotomy. The results were published with open access and with encouragement to use the data. Reassuringly for the journal, there seemed to be no gender bias in the overall outcome of papers, corroborating evidence from other widely discussed papers.
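The scoring scheme described above is just an average of per-person scores, and can be sketched in a few lines of Python (a minimal illustration; the function name `gender_index` is my own, not from the paper):

```python
def gender_index(scores):
    """Gender index (GI) as described in the audit: each person scores
    0 (man) or 1 (woman); the GI is the mean of the scores."""
    return sum(scores) / len(scores)

# A paper assigned to one woman associate editor: GI = 1.0 at that stage.
print(gender_index([1]))           # -> 1.0

# A paper peer reviewed by 2 men and 2 women: GI = 0.5.
print(gender_index([0, 0, 1, 1]))  # -> 0.5
```

The same calculation applies at each stage of handling a paper: authorship, associate-editor assignment, and peer review.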
However, the results of the audit can be presented in another way (see pie chart above), according to who was involved in peer review. Of 706 submissions sent out for peer review, 2.1% were reviewed only by women, compared with 69.5% reviewed only by men: a greater than 30-fold difference. The average gender index (GI) per reviewed paper is 0.15. This means that for every time a woman reviewed a paper, 5 or 6 men reviewed papers.
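The conversion from an average GI to a men-to-women ratio follows directly from the definition: if the mean score is `gi`, the remaining share `1 - gi` belongs to men, so the ratio is `(1 - gi) / gi`. A quick sketch (the helper name `men_per_woman` is mine):

```python
def men_per_woman(gi):
    """Men-to-women ratio implied by an average gender index `gi`,
    where each woman contributes 1 and each man 0 to the mean."""
    return (1 - gi) / gi

# Average GI of 0.15 for peer review: 0.85 / 0.15, i.e. "5 or 6 men"
# reviewing for every woman reviewing.
print(round(men_per_woman(0.15), 1))  # -> 5.7
```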
It is quite possible that there is no explicit gender bias for individual papers or by individual associate editors. However, it is hard to see how the overall practice of selecting peer reviewers for individual papers is unbiased. The GI for peer review is below the GI for first authors (0.27), for last authors (0.17), and for assignment to journal associate editors (0.59).
Are women particularly rare in the neurosciences? Not according to a 2009 Society for Neuroscience report of a US survey:
Women represent 50 percent of undergraduate neuroscience majors, 52 percent of predoctoral trainees, 44 percent of postdoctoral trainees, and 44 percent of nontenure-stream faculty members. In contrast, women represent only 26 percent of tenure-stream faculty members and 21 percent of full professors.
Even though the numbers and drop-off rate are similar to those seen in other STEM subjects and in other countries, with fewer women in the more senior posts, there are clearly very many women working in the neurosciences at doctoral level and above. Given this talent pool, is it just chance that men-only panels were chosen more than 30 times as often as women-only panels? If women are trusted as researchers, why do male-dominant panels outnumber female-dominant panels 15 to 1? Are only full professors chosen as peer reviewers, and if so, is this a known criterion for selecting reviewers?
There is another fascinating aspect to the audit results. Most of the Journal of Neurophysiology editorial board in 2007 were women, with a woman as editor in chief. I wondered whether the board had somehow sought to balance this by requesting more reviews from men. The average GI for peer-review panels chosen by women associate editors was marginally higher (0.16) than for panels chosen by men (0.13). Small as that difference is, it is equivalent to 5.3 men per woman instead of 6.4 men per woman. (Sorry about expressing men to one decimal place.) Statistically significant or not, it might be very significant in the careers of the extra women chosen as reviewers.
Peer review is a situation in which the individuals on a panel do not interact. It doesn't matter whether you are shy, or are wearing high heels, a miniskirt or lipstick (that goes for men and women): everything passes through the written word. That is why the choice of peer reviewers, and the criteria actually used to select them (not the criteria we think are used), are so important. Scientists and editors may be objective when judging science, but what is the evidence that they are objective when judging the relative technical expertise of an international talent pool?
Journal editors, is your journal brave enough to carry out a gender audit? If so, what does your peer-review pie chart look like? Why not sign up for the APEER survey today?
Read the original paper here:
John A. Lane and David J. Linden (2009). Is there gender bias in the peer review process at Journal of Neurophysiology? Journal of Neurophysiology, 101(5), 2195–2196.