The Gender Policy Committee is working to advance gender-sensitive reporting and communication in science. The goal is not only better science, whether in the life, natural or social sciences, but also enhanced evidence-based practices, interventions and opportunities for both women and men.
Founded in 1869 with a mission to aid and inform scientific men (as we are reminded every week), Nature published an editorial yesterday on how women contribute to the content of this general science magazine. Nature is ever popular and well read, or at least well cited, as its latest impact factor of 36.28 shows. Researchers still aspire to publish in such an august periodical, and many careers have been built on succeeding in doing just that. Being a reviewer for Nature adds even more distinction to a scientist’s track record.
Unlike many journals, Nature does provide a statement on how reviewers are chosen.
Reviewer selection is critical to the publication process, and we base our choice on many factors, including expertise, reputation, specific recommendations and our own previous experience of a reviewer’s characteristics. For instance, we avoid using people who are slow, careless, or do not provide reasoning for their views, whether harsh or lenient.
There is no mention of avoiding women, although essentially that is what has been happening. Everyone would agree that a reviewer needs to be expert and efficient, but the ‘reputation’ and ‘specific recommendations’ criteria need to be examined in more detail. What do editors look for in a scientist’s reputation? Who do they ask for recommendations?
Nature will now include a gender loop in decision-making in the editorial office by asking “Who are the five women I could ask?” There will be internal targets, but no quotas. There seems to be some apprehension about the extra work that may be incurred. The Nature editorial does not vouch for NPG stablemates like Nature Neuroscience and Nature Cell Biology, journals that may be failing researchers too.
Nature editors were involved in the GenSet Gender in Science Consensus Report, which should be compulsory reading for all scientists. The candidness now from Nature is certainly a step in the right direction, but more forceful leadership and reparation is needed to correct more than 140 years of hidden sexism in science.
The original editorial can be found here: Nature’s sexism. Nature 491, 495.
Editors, authors, and reviewers are influential in shaping science.
So starts a 1998 JAMA paper by Dickersin et al. who asked “Is there Sex Bias in Choosing Editors?” The five authors decided to focus on four US journals in one field. By hook or by crook, they found out how many women were editors, reviewers or authors for American Journal of Epidemiology, Journal of Clinical Epidemiology, Annals of Epidemiology, and the succinctly named Epidemiology in 1982, 1987, 1992 and 1994. The field of epidemiology and public health research is an interesting sector, as it has traditionally attracted a high proportion of women, many of whom have become remarkable role models.
To support this, the authors cited statistics on membership of learned societies (46% of Society for Epidemiology Research members were women in 1996), enrollment in US graduate courses (women outnumbered men in 1984) and increasing representation on faculties (from 23% in 1976 to 36% in 1991).
The value of this approach is that it compares journals within one field. That means that all journals were dealing with broadly the same pool of researchers as readers, authors, reviewers and editors. So differences in the proportions of women acting as reviewers are likely to directly reflect differences in journal policy and practice in any one year.
For Journal of Clinical Epidemiology, the number of men and women reviewers could not be estimated as reviewers’ names were not published. Could this lack of transparency be in any way related to the low proportion of women who served as editors at this journal, a proportion that rose from none in 1982 to just 7.8% in 1994?
During the same period, the proportion of women reviewers at American Journal of Epidemiology more than doubled, from 14.3% in 1982 to 31.3% in 1994. The proportion of women editors doubled too, reaching just 15% in 1994.
Two of the journals were established in 1990 in the middle of the 12-year span. Annals of Epidemiology did not list reviewers’ names in 1992, but did in 1994 when 37.4% of reviewers were women chosen by a corps of editors, 30% of whom were women. Meanwhile at Epidemiology, 5% more women were reviewers in 1994 than in 1992, a rapid change bringing it on a par with the older American Journal of Epidemiology.
As the authors say
If women in general are not perceived to have the same stature as men in a field or are not part of the existing formal or informal networks involved in the nomination process, there may be selection bias against them.
So what has happened in the last 18 years? How many women today review for journals in epidemiology or other fields? If enough journal editors within a sector take the APEER survey we may be able to assess whether bias is caused by particular journal practices, old or new.
Is peer review a little rusty? Photo by Claudio Matsuoka
Sense about Science, a UK charitable trust, recently published a document that rounds up the mysterious world of science peer review for insiders and outsiders alike. It is called “Peer review – the nuts and bolts” and you can download it from this page. It follows on from the detailed 2009 international survey that Sense About Science carried out on peer review. The survey came up with facts like:
Playing an active role in the community tops the list of reasons to review: 90% of reviewers say they review because they believe they are playing an active role in the community.
61% of reviewers have rejected an invitation to review an article in the last year, citing lack of expertise as the main reason – this suggests that journals could better identify suitable reviewers.
It is also well timed, following the UK House of Commons Science & Technology Select Committee report on peer review in 2011 and the current rethinking of the value, quality and methods of peer review by scientists and editors, especially as technology enables communication in all sorts of new ways and journals rework business models to make access to research more open. The booklet makes many details of peer review more transparent, especially for those new to or unfamiliar with science, and puts faces to many of the contributions from the worlds of publishing, research, research funding and journalism, like this one from Jamie McClelland, who was involved in the Voice of Young Science workshops on which the report is based:
“One of the reasons I like to review papers is that it makes me feel like an important part of the academic community, and that my opinion about what is (or isn’t) good science actually matters.”
The main impressions after reading the report are how diverse practices and opinions are in peer review, how peer review is really a simple process of giving one’s expert opinion and need not be so obscure, and how enthusiastic everyone is about participating in peer review. It is an uplifting read and a good starting point for training and discussion.
If you don’t laugh, you’ll cry.
If you have been on the giving or receiving end of peer review, no doubt you’ve experienced your own frustrations and emotions. Kate Cross is a research fellow at the University of St Andrews in Scotland. Not only is Kate an expert on evolutionary sex differences in human social behaviour, she is also a stand-up comedian and a talented science communicator. In this routine at the Bright Club in Edinburgh, Kate explains why peer review so often evokes that strange feeling of déjà vu.
You can watch Kate’s full Bright Club set here. Hopefully we’ll find out what Kate, in both her scientific and comedic capacities, makes of the results of the APEER survey next year.
George Bush is US president and storming the desert. John Major is UK Prime Minister. You hear Bryan Adams sing ‘Everything I do’ for the first time (lovely for a wedding) and even ‘Sit down’ to James. Pity the poor students graduating in the middle of a recession. Never mind, they can go and see Silence of the Lambs at the cinema.
Meanwhile something called a website goes online at CERN. Fast forward to the next century for more.
The US president is still called George Bush, but with a W between the George and the Bush (sounds like a pub crawl). Tony Blair is re-elected in the UK. ‘You’re Beautiful’ is now the must-play wedding track thanks to James Blunt.
George W is still in the White House and Gordon Brown moves to 10 Downing Street. The phrase ‘in the current economic climate’ is limbering up quietly in the wings. Nature Cell Biology is encouraged that 25.1% of 346 reviewers are women. It’s not all bad news though – Twitter is launched this year.
Barack Obama is completing his first term as US president and David Cameron is the British PM. iPads and iPhones are nothing new like recession and graduate unemployment. Are science journals just as biased when choosing peer reviewers as two decades ago?
Help us find out and sign up for the APEER survey.
In 2009 Journal of Neurophysiology did something brave, honest and, I hope, pioneering. It was nothing to do with the latest in brain biology. Researchers John A. Lane and David J. Linden, also the editor in chief, looked at the fate of submissions to the journal over a 6-month period in 2007 and related the outcome of each paper (accepted/rejected) to the presumed gender of the first and last authors and of the associate editors and peer reviewers who dealt with it.
The gender audit was relatively simple: score 0 for a man and 1 for a woman, then divide the total by the number of people involved at each stage of processing a paper. If the paper was assigned to a woman associate editor, it had a gender index of 1 at this stage. A paper that needed peer review by 2 men and 2 women had a gender index score of 0.5. It may have been time-consuming to assign genders to names by internet searches while sensitively allowing for gender definitions that do not fit strictly into the male/female dichotomy. The results were published with open access and encouragement to use the data. Reassuringly for the journal, there seemed to be no gender bias in deciding the overall outcome of papers, corroborating evidence from other widely discussed papers.
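As a sketch, the scoring scheme described above (man = 0, woman = 1, gender index = mean score of the people involved at a stage) can be computed in a few lines; the function name and data here are hypothetical, for illustration only:

```python
def gender_index(genders):
    """Mean of 0 (man) / 1 (woman) scores for one stage of a paper.

    Returns None if no one was involved at that stage.
    """
    if not genders:
        return None
    scores = [1 if g == "woman" else 0 for g in genders]
    return sum(scores) / len(scores)

# A paper assigned to one woman associate editor: GI = 1 at that stage.
print(gender_index(["woman"]))                          # 1.0
# A paper peer reviewed by 2 men and 2 women: GI = 0.5.
print(gender_index(["man", "man", "woman", "woman"]))   # 0.5
```

Averaging these per-paper indices over all submissions gives the stage-level GI figures discussed below.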
However results of the audit can be presented in another way (see pie chart above) according to who was involved in peer review. Out of 706 submissions sent out for peer review, 2.1% were reviewed only by women compared to 69.5% reviewed only by men, a greater than 30-fold difference. The average gender index (GI) per paper reviewed is 0.15. This means that for every time a woman reviewed a paper, 5 or 6 men reviewed papers.
It is quite possible that there is no explicit gender bias for individual papers or by individual associate editors. However it is hard to see how overall the practice of peer reviewer selection at the level of individual papers is not biased. The GI for peer review is below the GI scores for first authors (0.27) or last authors (0.17) and for assignment to journal associate editors (0.59).
Are women particularly rare in the neurosciences? Not according to a 2009 Society of Neuroscience report of a US survey:
Women represent 50 percent of undergraduate neuroscience majors, 52 percent of predoctoral trainees, 44 percent of postdoctoral trainees, and 44 percent of nontenure-stream faculty members. In contrast, women represent only 26 percent of tenure-stream faculty members and 21 percent of full professors.
Even though the numbers and drop-off rate are similar to those seen in other STEM subjects and in other countries, with fewer women in the more senior posts, there are clearly very many women working in the neurosciences at levels of doctorate and above. When considering this talent pool, is it just chance that more than 30 times more men-only panels were chosen than women-only panels? If women are trusted as researchers, why do male-dominant panels outnumber female-dominant panels 15 to 1? Are only full professors chosen to be peer reviewers, and if so, is this a known criterion for selecting reviewers?
There is another fascinating aspect to the audit results. Most of the Journal of Neurophysiology editorial board in 2007 were women, with a woman as editor in chief. I wondered whether the board had somehow sought to balance this by requesting more reviews from men. The average GI for peer review panels chosen by women associate editors was marginally higher (0.16) than for men associate editors (0.13). Although that difference might be small, it is equivalent to 5.3 men to 1 woman instead of 6.4 men to 1 woman. (Sorry about expressing men to one decimal place.) Statistically significant or not, it might be very significant in the careers of the extra women chosen as reviewers.
Peer review is a situation where the individuals on panels do not interact. It doesn’t matter whether you are shy or are wearing high heels, a miniskirt or lipstick (that goes for men and women), everything passes by the written word. That is why the choice of peer reviewer and what criteria are used to select them (not what criteria we think are used) are so important. Scientists and editors may be objective when judging science, but what is the evidence they are objective when judging the relative technical expertise of an international talent pool?
Journal editors, is your journal brave enough to carry out a gender audit? If so what does your peer review pie chart look like? Why not sign up for the APEER survey today?
I was in the psychologist’s consulting room the other day playing Qui est-ce?, the French version of the classic children’s game Guess who?. Do you remember all the old favourites? Max, Anita, Herman, Susan et al. with their big-mouthed/small-mouthed, red-lipped/pink-lipped, up-turned/down-turned grins.
How many scientists have honed their powers of deduction by analysing facial hair, eye colour, and the facial vasodilation response? It seemed perfect preparation for life in the lab – scoring phenotypes, calculating probabilities, following dichotomous keys, not to mention improvised matchmaking (Maria & Tom forever).
You don’t have to stretch your faculties too far to work out that the skewed representation of women on the Guess Who? board resembles the lab too. In the original game by Theora Design, there were only 5 women out of 24 people. Asking ‘Is your person a woman?’ gave you the same chance of elimination as asking ‘Does your person have a moustache?’ or ‘Is your person bald?’. Whether that probability worked in your favour would depend on the luck of the draw. Of course, updated versions are now available; the current French version now has 8 women out of 24 characters.
What if all the characters on the Guess Who? board were researchers working in your field? Who would you choose to review a paper? What exactly divides Alfred and Anita apart from Alfred’s defiant flouting of Health and Safety rules? Claire might have all the right technical knowledge and just be wearing that hat for a bet with Eric and Bernard.
Editors, what questions do you ask when you choose a peer reviewer?
Rather appropriately in the city that dreamt up Skype, the theme of the conference at Tallinn University of Technology is Editing in the Digital World. Plenary and parallel sessions will cover topics like open access and digital models, regional and national journals, dealing with data, social media in academic publishing, language editing, detecting misconduct, editorial office management, and bibliometrics.
Iain Chalmers, whose Life Scientific recently featured on BBC Radio 4, will be chairing a session on bias in medical and health care publications. A discussion on gender equality in science has been added to the agenda, at which I will have the chance to meet Mirjam Curno, who was also awarded a grant from the Biochemical Society this year to study gender inclusion in scholarly literature.
It is very encouraging that some journal editors have already enrolled for the APEER survey and I will be drumming up more support with our Who’s who? badges and flyers. In keeping with the forward-looking theme, tweets, Facebook posts and YouTube videos will be positively encouraged during the conference alongside the traditional poster sessions and newsletter.
We will be celebrating the 30th anniversary of EASE in Tallinn Town Hall, with a gala dinner the following evening in the rather forebodingly named House of the Brotherhood of the Black Heads! These are both landmarks on the tourist map so it will be a great privilege to have this opportunity to chat about science and editing in such remarkable surroundings.
Peer review is a mainstay of science publishing. When a research manuscript is submitted to a journal for publication, editors ask a few of the authors’ peers, people working on similar research topics, to independently evaluate the quality and pertinence of the work.
A peer reviewer is an expert. Considered an honour, a duty or sometimes a burden, acting as a peer reviewer combines recognition by and contribution to a wider community. The work is mostly unpaid but that does not mean it is of no value. Peer review usually adds value to the reviewed research, to the journal and to the reviewer. Learning to evaluate research objectively is a key skill. Demonstrating expertise in a particular field is an important factor in progressing to more senior research positions.
Who are peer reviewers?
How do journal editors choose reviewers?
Do men and women have an equal chance to vet where, when and how research is published?
Scientific journals are so numerous that there are likely to be a wide range of policies and methods for finding peer reviewers. The APEER survey will take a snapshot of current practice in selecting peer reviewers and reveal whether women and men can contribute equally as peers.