For about the last thirty years in the scientific writing community, September has annually heralded roughly a month of heightened awareness of peer review. Understanding peer review means gaining a grasp of how it is defined, what the "generic" or classical process is, how that classical process has changed since the Internet became widely available around 1990, and why research on peer review suggests few solutions to its problems. This is the first installment of three blogs on the subject, one for each of three topics: the peer review process, research on peer review, and ongoing issues.
Peer review is the classical method scientists use to "referee," or determine the quality of, both their research protocols and their research results. Ultimately, the purpose of peer review is to improve the quality of science, because published papers become the body of knowledge, the primary source in science. When I teach scientific writing, I define for students two types of peer review based on who the "peer" is: informal peer review and formal peer review. A peer is a person who shares disciplinary education, training, and expertise; more than one reviewer usually is involved.
Informal peer review comprises reading and commentary by nearby colleagues on the content of draft research proposals or journal articles in progress, according to an outline or agreed-upon protocol. Typically, the author(s) provides a set of questions with the manuscript and then solicits disciplinary colleagues to serve as the peer reviewers. Informal peer review thereby can be described as open (authors and peers know each other), consistent (peers are familiar with the research questions, methods, data analysis, and published literature), and discerning (the biases and reputations of the authors and peers usually are known or noted). It is based on trust among the colleagues involved and on mutual respect for each participant's time, expertise, and work. The purposes of informal peer review may be to improve the quality of the writing, validate the research method, confirm the analysis of the data, or support the conclusions drawn, or any combination thereof. These purposes may or may not be stated in writing; often one or more discussions, rather than written corrections, ensue. Informal peer review is critical feedback, a way for authors and readers to mentor one another in the disciplinary community, especially in the early stages of research or the preparation of a manuscript. Through informal peer review, authors can gain valuable insight into how their particular discipline has adapted the IMRAD (Introduction, Materials & Methods, Results, and Discussion) form. Similarly, students who engage in informal peer review while developing their manuscripts gain working knowledge of the disciplinary community's literature, manuals, and styles.
Formal peer review involves informed reading and written commentary on the content of a manuscript submitted to an outlet for publication. Manuscripts are expected to conform to the outlet's guidelines for content and style. Outlets, as well as grant-giving institutions, have targeted audiences with communities of discipline-specific participants. Reviewers write a persuasive argument to publish or not to publish, directed to the editor, not necessarily to the authors. Typically, a journal editor solicits pools of disciplinary specialists, from which qualified, willing, and able peers are selected through an editorial process to read and critique an assigned manuscript, keeping in mind the journal's timeline and guidelines, and ultimately to recommend whether to publish; the process can be presented visually in a flowchart (now typically called a "workflow"), as below. Formal peer review is called "single-blind" when the authors do not know the reviewers but the editors and reviewers know the authors; "double-blind" when neither the authors nor the reviewers know each other but the editors know both; and "triple-blind" when neither the authors, the reviewers, nor the editors know who did what. Because journals tend to be defined by discipline(s), editors hold primary responsibility for vetting peers for integrity, expertise, knowledge, reliability, and suitability. More recently, editors have asked authors to recommend reviewers, a step that has seriously challenged ethical peer review by introducing additional bias or conflict of interest. Ideally, formal peer review is based on advancing and preserving the body of knowledge in a discipline; that is, it evaluates whether the hypothesis, design, methodology, analysis, and conclusions of an experimental study should be placed into the body of knowledge through publication. Formal peer review should help the editor decide the worth, the contribution, of the article to science.
Agencies or institutions (and, recently, businesses) that fund science may also follow a formal peer review process very similar to that of a publisher. Instead of outlet guidelines, agencies produce criteria and a protocol for the review process. Again, these communities are discipline-specific, with reviewers chosen to be highly qualified and experienced, especially in experimental design and in the ability to judge realistically whether a proposal can be accomplished given the resources available to the scientific team: people, supplies, equipment, laboratories, and reputations. Scientific merit typically leads recommendations to fund, but the potential to meet the funding criteria, innovate, and produce results for publication, the last step in the research process, is also considered. Consequently, funded research typically is peer reviewed twice: in the initial proposal process and again at final journal publication.
The Birth of Modern Peer Review
Sense About Science (SAS) and Voice of Young Science (VoYS). 2012. Peer review: the nuts and bolts (PDF download). Available at https://senseaboutscience.org/activities/peer-review-the-nuts-and-bolts/. This guide is written by junior researchers for junior researchers.
For a review of a VoYS workshop on peer review, see Heidi Gardner's blog of September 25, 2015, on the Students 4 Best Evidence website at https://www.students4bestevidence.net/voice-young-science-peer-review-nuts-bolts-workshop-review/
Video: NIH Peer Review Revealed