PEER REVIEW: PAST, PRESENT AND FUTURE

An imperfect but still relevant pillar of scientific publishing 

A short history

Peer review is a relatively recent innovation in the history of scientific publication. The first journal, Philosophical Transactions of the Royal Society (which is still in print!), was launched in 1665 by the Royal Society in London, while peer review as we know it began in the mid-1970s. So how did we get here?

Essentially, the scientific community of the 17th, 18th and 19th centuries was a small, select group of men and women (mostly men, though) who would regularly communicate via short articles, personal letters, public and private presentations, and, of course, full-length books (e.g. On the Origin of Species, C. Darwin). As the community expanded and professionalized, the ways in which published communications were disseminated slowly began to change. Journal editors would publish work that they deemed interesting, or that arose from members of the learned society associated with the journal.

It’s fascinating to me that when Watson and Crick submitted their famous ‘double helix’ paper to Nature in 1953, the letter accompanying the article pretty much said “we and our colleagues at the Laboratory of Molecular Biology deem this appropriate for Nature,” and the journal editors complied. (A subsequent editor of Nature, John Maddox, wrote that “the Crick and Watson paper could not have been refereed: its correctness is self-evident. No referee working in the field (Linus Pauling?) could have kept his mouth shut once he saw the structure…”)

These early ways of assessing whether a scientific communication should be published were inherently ad hominem, justified by the fact that the hominem in question was an expert (or at least “well-versed”) in the relevant field. This process invested a great deal of power in the hands of a distinguished few, hoping for an objectivity that would most likely have been lacking if, for example, a communication competed with the expert’s own work.

It seems like common sense to us now that a chummy network of pals recommending each other’s work for publication is no way to conduct science objectively. As the scientific enterprise boomed in the 20th century, and subspecialties blossomed, it became increasingly difficult for journal editors to make informed decisions about what to publish by themselves. The process of seeking input from knowledgeable peers began to gain currency, and by the time Ben Lewin launched the journal Cell in 1974, it was common practice to send papers for review prior to publication.

Modern peer review

In brief, modern peer review entails a series of communications between the handling editor for a submitted manuscript and a small number (typically 3-4) of experts in the field, who are asked to comment on the scientific integrity of the study and to put forward their opinion of the importance or “significance” of the advance reported. The editor then sends the reviewers’ comments back to the author with a decision on the manuscript: “accept”, “revise” or “reject”.

The move to peer review by a broader range of anonymous reviewers was a move toward greater equity, giving less well-known scientists a better chance of having their work published in prestigious publications. Unsurprisingly, though, the transition to regular peer review was not without some resistance, especially from those who benefited most from the old system: in a review of Melinda Baldwin’s recent book Making “Nature”, former deputy editor of Nature Peter Newmark wrote that “in 1974 … I was invited — or rather summoned — to meet Max Perutz at the Medical Research Council’s Laboratory of Molecular Biology in Cambridge. … Perutz came straight to the point. Why, he asked, had Nature started to peer review papers from the laboratory when previously they were published without peer review? Indeed, he continued, as all papers sent to Nature are checked by members of the board, peer review is unnecessary.”

So, peer review eventually took off. From about the 1970s and well into the early 1990s — that is, prior to the age of email — reviewers would be sent paper copies of the manuscript with high-quality glossy photographs (or later, CD-ROMs) of the data/figures. Reviewers were not asked whether they wanted to review; they were simply assigned the job (checking obituaries would sometimes be useful before deciding to send a paper to a certain expert in the field). Reviewers would send in their typewritten (occasionally even handwritten) comments by snail mail or fax, and eventually the editors would send back an annotated manuscript for the authors to revise. Editorial assistants would cut up the reviews (yes, with scissors!) and photocopy an assembled set of paragraphs after editors had bracketed the points that they wished to be passed on to the author. Interestingly, a return to this sort of ‘editing’ of reviewer reports and compiling them into one document is now in practice at eLife, though this now also involves re-engaging the reviewers themselves to agree on the overall recommendation.

Now, of course, online review – and indeed online publication – is disrupting the publishing enterprise all over again. Papers are longer and more interdisciplinary than ever. It is becoming impossible to review a paper with terabytes of data in any meaningful sense. Meanwhile, the growing sense that peer review takes too long, isn’t transparent, and divides labor and costs unfairly means that other publishing models are taking hold. Pre-print repositories such as arXiv and bioRxiv are allowing scientists to release their work to the public as soon as they wish, although readers must be wary that the claims have not necessarily been evaluated by impartial judges.

The future of peer review?

So it behooves us to ask: what value does peer review still have? Is it outdated, something we need to replace? Or can we improve it to fit the needs of the 21st century?

At its best, peer review provides a measure of quality control for the scientific literature, helping authors to make the best case for their exciting new discoveries. Readers, funders, industry, and the public can read the paper and trust the results, turning those insights into the building blocks for the next advance for society. And frankly, the other reason it’s important is that the most prestigious journals, the ones most likely to result in job offers, promotions and grants should you publish in them, still require it. So, from a pragmatic standpoint, writing reviews and interpreting them are important skills to hone.

It’s this issue of improving peer review that we aim to address with the accompanying CommKit article [Peer Review – Best Practices]. Right now, it’s still an important requirement of being a scientist, and many of the issues we can cite with peer review as currently practiced boil down to badly written reviews, badly-behaved peer reviewers, and editors who are unable to parse the results of this badly-done process. If we could re-set the system so that reviews are constructive, timely and objective, then the scientific community would be better off for it.

Nonetheless, many issues remain, so I would posit that while being good practitioners of peer review, we can also think about ways to disrupt the system. Much has been written on this topic, so I will end here with links to some other interesting viewpoints (of particular note: whether peer review can improve the reproducibility of research; increasing transparency of the process; the unfair business models of publishing; and, of course, the importance of training peer reviewers). But reminding ourselves that scientific publishing has had many guises over the last 400 years, and that peer review has only been part of the equation for the past 40-odd years, should give us hope that we can pursue new ways of evaluating the scientific literature that better address the needs of the scientific community and society at large.

But at the very least, let’s commit to being as conscientious and constructive as possible when agreeing to review a manuscript. You can start by checking out the CommKit!


With many thanks to Diana Chien, MIT CommLab, Mike Orella, MIT MechE CommLab, Akshata Sonni & Chris Gerry, Broad CommLab, members of the scipub working group at Broad, and to Geoffrey North, Editor of Current Biology, for useful discussions.