
Strategies to sidestep selfish peer reviewers

Researchers point to options for tackling bias against competitors.

20 November 2017

Jon Brock

James O'Dwyer (left) and Rafael D'Andrea.

For scientists, getting ahead often means publishing more papers in higher-ranked journals than their competitors. But publication requires peer review: a scientist's work is evaluated by experts in the field, who are often direct competitors. The concern for many scientists is that reviewers acting selfishly can prevent or delay publication of high-quality research that puts their own work in the shade.

Can journal editors take steps to save peer review from these ‘selfish’ reviewers?

That was the question Rafael D’Andrea and James O’Dwyer of the University of Illinois addressed in a study published in October in PLoS ONE. They built a computational model to simulate the peer review process. Their model, based on a 2011 study, assumed that each paper was evaluated by two reviewers drawn at random from a pool of scientists. Impartial reviewers recommended acceptance of a paper on condition that it exceeded a certain quality threshold. Selfish reviewers, however, recommended rejection of any paper of higher quality than their own work, and recommended acceptance of all inferior papers.
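
In outline, each reviewer type reduces to a one-line decision rule. The sketch below illustrates those rules; the standard-normal quality scores, the zero threshold and the function name are assumptions for this example, not the authors' published code.

```python
THRESHOLD = 0.0  # assumed journal quality bar; qualities imagined as drawn from N(0, 1)

def recommend(paper_quality: float, reviewer_quality: float, selfish: bool) -> bool:
    """Return True if the reviewer recommends acceptance."""
    if selfish:
        # A selfish reviewer rejects any paper better than their own work
        # and recommends acceptance of every inferior paper.
        return paper_quality < reviewer_quality
    # An impartial reviewer simply applies the quality threshold.
    return paper_quality > THRESHOLD
```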

D’Andrea and O’Dwyer then ran a series of simulations, progressively increasing the number of selfish reviewers in the pool. In their initial simulations, the editor simply passed on the reviewers’ recommendations, accepting the paper 50% of the time if reviewers disagreed. As the number of selfish reviewers increased, the quality of accepted papers fell. Just 10% of reviewers acting selfishly could bring quality down by a full standard deviation.
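
A minimal simulation along these lines reproduces the qualitative effect. The parameters below (standard-normal qualities, 20,000 papers per run, a zero acceptance threshold) are assumed for illustration rather than taken from D’Andrea and O’Dwyer's model.

```python
import random
import statistics

def mean_accepted_quality(selfish_fraction, n_papers=20000, threshold=0.0, seed=42):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_papers):
        paper = rng.gauss(0.0, 1.0)  # quality of the submitted paper
        votes = []
        for _ in range(2):  # two reviewers drawn at random from the pool
            own_work = rng.gauss(0.0, 1.0)  # quality of the reviewer's own papers
            if rng.random() < selfish_fraction:
                votes.append(paper < own_work)   # selfish rule
            else:
                votes.append(paper > threshold)  # impartial rule
        if all(votes):
            accepted.append(paper)
        elif any(votes) and rng.random() < 0.5:
            # The editor accepts half of all split decisions.
            accepted.append(paper)
    return statistics.mean(accepted)

for frac in (0.0, 0.1, 0.3, 0.5):
    print(f"{frac:.0%} selfish: mean accepted quality {mean_accepted_quality(frac):.2f}")
```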

Next, D’Andrea and O’Dwyer considered a number of alternative editorial strategies. Consulting a third reviewer in cases of reviewer disagreement improved the quality of accepted papers, although this became less effective the more selfish reviewers there were in the pool.
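
A third-reviewer rule slots into the decision step as a simple tie-breaker. One way it might look, with the pool structure and all names below being illustrative assumptions rather than the paper's implementation:

```python
import random

def third_reviewer_decision(paper, pool, rng, threshold=0.0):
    """Accept/reject a paper, consulting a third reviewer only on a split vote.

    `pool` is a list of (own_work_quality, is_selfish) tuples; this structure
    is an assumption for the sketch, not the study's API.
    """
    def vote(own_work, selfish):
        return paper < own_work if selfish else paper > threshold

    votes = [vote(*rng.choice(pool)) for _ in range(2)]
    if votes[0] != votes[1]:
        # Disagreement: a third randomly drawn reviewer casts the deciding vote.
        votes.append(vote(*rng.choice(pool)))
    return sum(votes) >= 2  # majority accepts

rng = random.Random(0)
pool = [(rng.gauss(0, 1), rng.random() < 0.1) for _ in range(500)]  # 10% selfish
print(third_reviewer_decision(rng.gauss(0, 1), pool, rng))
```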

Quality was also improved by ‘desk rejecting’ the weakest papers and accepting the very best papers without sending them to reviewers. However, as D’Andrea and O’Dwyer acknowledge, this strategy relies on editors being able to judge the quality of a paper as well as an expert reviewer could.
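
The desk decision amounts to a triage rule applied before any reviewers are consulted. A sketch with illustrative cut-offs, which also makes the authors' caveat explicit in code:

```python
def desk_triage(editor_estimate, reject_below=-1.0, accept_above=2.0):
    """Editor triage before review; the cut-off values here are illustrative.

    The strategy's optimistic assumption is baked in: `editor_estimate` is
    taken to be the paper's true quality, i.e. the editor is assumed to judge
    quality as accurately as an expert reviewer would.
    """
    if editor_estimate < reject_below:
        return "desk reject"
    if editor_estimate > accept_above:
        return "accept without review"
    return "send to reviewers"
```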

A third potentially useful strategy was to blacklist reviewers who consistently disagreed with other reviewers. This optimistically assumes that reviewers acting impartially will tend to agree with one another. It would also require reviewer history to be shared across journals, which does not currently happen.
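
If reviewer histories were pooled across journals, such a blacklist could be built from each reviewer's running disagreement rate. The ledger below is an assumed design, and its rate threshold and minimum review count are arbitrary illustrative values.

```python
from collections import defaultdict

class ReviewerLedger:
    """Cross-journal record of how often each reviewer splits with co-reviewers."""

    def __init__(self):
        self.reviews = defaultdict(int)
        self.splits = defaultdict(int)

    def record(self, reviewer_a, reviewer_b, vote_a, vote_b):
        for r in (reviewer_a, reviewer_b):
            self.reviews[r] += 1
        if vote_a != vote_b:
            # Both parties to a split decision are charged with the disagreement;
            # the ledger cannot tell which of them was acting selfishly.
            self.splits[reviewer_a] += 1
            self.splits[reviewer_b] += 1

    def blacklisted(self, reviewer, max_split_rate=0.4, min_reviews=20):
        n = self.reviews[reviewer]
        return n >= min_reviews and self.splits[reviewer] / n > max_split_rate

ledger = ReviewerLedger()
ledger.record("alice", "bob", True, False)
print(ledger.blacklisted("alice"))  # False: too few reviews recorded yet
```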

Flaminio Squazzoni of the University of Brescia has adopted a similar approach in his own work, and acknowledges that models are only as good as their assumptions. “Most attempts, including this article, assume that quality is easily definable and detectable by scientists,” he says. “This is an approximation, which is even more problematic in fields, such as social sciences, for example, in which multiple approaches and standards coexist.”

The next step, says Squazzoni, is to test these models against real-world peer review data. To this end, he’s leading the PEERE initiative, a collaboration with publishers and scientific societies aimed at opening up peer review data.

In the meantime, D’Andrea and O’Dwyer argue that computational modelling is a useful first step, highlighting potential vulnerabilities in peer review and suggesting where data gathering should focus. “We view this as continuing an ongoing discussion on the future and value of peer review,” they say. “This is a time where various different models of scientific publishing are playing out, and so careful study of peer review and the publishing process more generally may help shape how we do and communicate science.”
