
What Is Peer Review and Why Is It Important?

Peer review is one of the major cornerstones of the academic process and critical to maintaining rigorous quality standards for research papers. Whichever side of the peer review process you’re on, we want to help you understand the steps involved.

This post is part of a series that provides practical information and resources for authors and editors.

Peer review – the evaluation of academic research by other experts in the same field – has been used by the scientific community as a method of ensuring novelty and quality of research for more than 300 years. It is a testament to the power of peer review that a scientific hypothesis or statement presented to the world is largely ignored by the scholarly community unless it is first published in a peer-reviewed journal.

It is also safe to say that peer review is a critical element of the scholarly publication process and one of the major cornerstones of the academic process. It acts as a filter, ensuring that research is properly verified before being published. And it arguably improves the quality of the research, as the rigorous review by like-minded experts helps to refine or emphasise key points and correct inadvertent errors.

Ideally, this process encourages authors to meet the accepted standards of their discipline and in turn reduces the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views.

If you are a researcher, you will come across peer review many times in your career. But not every part of the process may be clear to you yet. So, let’s have a look together!

Types of Peer Review

Peer review comes in many different forms. With single-blind peer review, the names of the reviewers are hidden from the authors, while with double-blind peer review, both reviewers and authors remain anonymous. Then there is open peer review, a term which has more than one interpretation nowadays.

Open peer review can simply mean that reviewer and author identities are revealed to each other. It can also mean that a journal makes the reviewers’ reports and author replies of published papers publicly available (anonymized or not). The “open” in open peer review can even be a call for participation, where fellow researchers are invited to proactively comment on a freely accessible preprint article. The latter two options are not yet widely used, but the Open Science movement, which strives for more transparency in scientific publishing, has given them a strong push in recent years.

If you are unsure about what kind of peer review a specific journal conducts, check its instructions for authors and/or its editorial policy on the journal’s home page.

Why Should I Even Review?

To answer that question, many reviewers would probably reply that it simply is their “academic duty” – a natural part of academia, an important mechanism to monitor the quality of published research in their field. This is of course why the peer-review system was developed in the first place – by academia rather than the publishers – but there are also benefits.


Besides a general interest in the field, reviewing also helps researchers keep up to date with the latest developments. They get to know about new research before everyone else does. It might help with their own research and/or stimulate new ideas. On top of that, reviewing builds relationships with prestigious journals and journal editors.

Clearly, reviewing is also crucial for the development of a scientific career, especially in the early stages. Relatively new services like Publons and ORCID Reviewer Recognition can support reviewers in getting credit for their efforts and making their contributions more visible to the wider community.

The Fundamentals of Reviewing

Have you received an invitation to review? Before agreeing to do so, there are three pertinent questions you should ask yourself:

  • Does the article you are being asked to review match your expertise?
  • Do you have time to review the paper?
  • Are there any potential conflicts of interest (e.g., of a financial or personal nature)?

If you feel like you cannot handle the review for whatever reason, it is okay to decline. If you can think of a colleague who would be well suited for the topic, even better – suggest them to the journal’s editorial office.

But let’s assume that you have accepted the request. Here are some general things to keep in mind:

Please be aware that reviewer reports provide advice for editors to assist them in reaching a decision on a submitted paper. The final decision concerning a manuscript does not lie with you, but ultimately with the editor. It’s your expert guidance that is being sought.

Reviewing also needs to be conducted confidentially. The article you have been asked to review, including supplementary material, must never be disclosed to a third party. In the traditional single- or double-blind peer review process, your own anonymity will also be strictly preserved. Therefore, you should not communicate directly with the authors.

When writing a review, it is important to keep the journal’s guidelines in mind and to work along the building blocks of a manuscript (typically: abstract, introduction, methods, results, discussion, conclusion, references, tables, figures).

After initial receipt of the manuscript, you will be asked to supply your feedback within a specified period (usually 2-4 weeks). If at some point you notice that you are running out of time, get in touch with the editorial office as soon as you can and ask whether an extension is possible.

Some More Advice from a Journal Editor

  • Be critical and constructive. An editor will find it easier to overturn harsh, unconstructive comments than favourable ones.
  • Justify and specify all criticisms. Make specific references to the text of the paper (use line numbers!) or to published literature. Vague criticisms are unhelpful.
  • Don’t repeat information from the paper, for example, the title and authors’ names, as this information already appears elsewhere in the review form.
  • Check the aims and scope. These can be found on the journal’s home page and will help ensure that your comments are in accordance with journal policy.
  • Give a clear recommendation. Do not put “I will leave the decision to the editor” in your reply, unless you are genuinely unsure of your recommendation.
  • Number your comments. This makes it easy for authors to refer to them.
  • Be careful not to identify yourself. Check, for example, the file name of your report if you submit it as a Word file.

Sticking to these rules will make the author’s life and that of the editors much easier!

Explore new perspectives on peer review in this collection of blog posts published during Peer Review Week 2021.


[Title image by AndreyPopov/iStock/Getty Images Plus]

David Sleeman

David Sleeman worked as a Senior Journals Manager in the field of Physical Sciences at De Gruyter.


Peer Review in Scientific Publications: Benefits, Critiques, & A Survival Guide

Affiliations.

  • 1 Clinical Biochemistry, Department of Pediatric Laboratory Medicine, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada.
  • 2 Clinical Biochemistry, Department of Pediatric Laboratory Medicine, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada; Chair, Communications and Publications Division (CPD), International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), Milan, Italy.
  • PMID: 27683470
  • PMCID: PMC4975196

Peer review has been defined as a process of subjecting an author's scholarly work, research or ideas to the scrutiny of others who are experts in the same field. It functions to encourage authors to meet the accepted high standards of their discipline and to control the dissemination of research data to ensure that unwarranted claims, unacceptable interpretations or personal views are not published without prior expert review. Despite its widespread use by most journals, the peer review process has also been widely criticised due to the slowness of the process to publish new findings and due to perceived bias by the editors and/or reviewers. Within the scientific community, peer review has become an essential component of the academic writing process. It helps ensure that papers published in scientific journals answer meaningful research questions and draw accurate conclusions based on professionally executed experimentation. Submission of low-quality manuscripts has become increasingly prevalent, and peer review acts as a filter to prevent this work from reaching the scientific community. The major advantage of a peer review process is that peer-reviewed articles provide a trusted form of scientific communication. Since scientific knowledge is cumulative and builds on itself, this trust is particularly important. Despite the positive impacts of peer review, critics argue that the peer review process stifles innovation in experimentation and acts as a poor screen against plagiarism. Despite its downfalls, there has not yet been a foolproof system developed to take the place of peer review; however, researchers have been looking into electronic means of improving the peer review process. Unfortunately, the recent explosion in online-only/electronic journals has led to mass publication of a large number of scientific articles with little or no peer review. This poses significant risk to advances in scientific knowledge and its future potential. The current article summarizes the peer review process, highlights the pros and cons associated with different types of peer review, and describes new methods for improving peer review.

Keywords: journal; manuscript; open access; peer review; publication.


Why Use Peer Reviewed Articles in your Research?

What are peer reviewed articles?

Peer reviewed, or scholarly, articles are written by experts and are read and reviewed by other scholars in the field.  Their comments and suggestions are incorporated into the article before it is published in a scholarly journal. This process is known as peer review and helps check the accuracy and validity of the research as well as the article’s overall quality.

These articles are the main way scholars communicate with each other about their research findings.  They are like a conversation between scholars on a particular topic.  Sometimes scholars agree, and sometimes they don’t!  And just like a conversation, it is all about finding the pertinent information and moving the conversation forward.

How do articles get peer reviewed? What role does peer review play in scholarly research and publication?

Why use Peer Reviewed Articles?

Peer reviewed articles are considered the gold standard source type for academic research.  They are written by researchers, or experts, on the topic, so it takes some of the guesswork out of wondering if you should use them or not.

In peer reviewed articles, you will find:

  • findings from original research
  • detailed information about a narrow aspect of your topic
  • expert language
  • charts or graphs
  • a list of references, or sources, the author used

Your Research Journey at Portland State University Library Copyright © 2020 by Amy Stanforth is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Peer Review Matters: Research Quality and the Public Trust

Michael M. Todd, M.D., served as Handling Editor for this article.


Accepted for publication October 13, 2020.


Evan D. Kharasch, Michael J. Avram, J. David Clark, Andrew J. Davidson, Timothy T. Houle, Jerrold H. Levy, Martin J. London, Daniel I. Sessler, Laszlo Vutskits; Peer Review Matters: Research Quality and the Public Trust. Anesthesiology 2021; 134:1–6. doi: https://doi.org/10.1097/ALN.0000000000003608


In an era of evidence-based medicine, peer review is an engine and protector of that evidence. Such evidence, vetted by and surviving the peer review process, serves to inform clinical decision-making, providing practitioners with the information to make diagnostic and therapeutic decisions. Unfortunately, there is recent and growing pressure to prioritize the speed of research dissemination, often at the expense of careful peer review. It is timely to remind readers and the public of the value brought by peer review, its benefits to patients, how much the public trust in science and medicine rests upon peer review, and how these have become vulnerable.

Peer review has been the foundation of scholarly publishing and scientific communication since the 1665 publication of the Philosophical Transactions of the Royal Society. The benefits and advantages of peer review in scientific research, and particularly medical research, are manifold and manifest. 1   Journals, editors, and peer reviewers hold serious responsibility as stewards of valid information, with accountability to the scientific community and an obligation to maintain the public trust. Anesthesiology states its aspiration and its responsibility on the cover of every issue: Trusted Evidence. Quality peer review (more specifically, closed or single-blind peer review, in which the identity of reviewers is confidential) is a foundational tenet of Anesthesiology.

Peer review grounds the public trust in the scientific and medical research enterprise, as well as the substantial public investment in scientific research. Peer review affords patients some degree of comfort in placing their trust in practitioners, knowing that they should be informed by the best possible, vetted evidence.

Quality peer review enriches and safeguards the scientific content, transparency, comprehensibility, and scientific integrity of published articles. It can enhance published research importance, originality, authenticity, scientific validity, adherence to experimental rigor, and correctness of results and interpretations and can identify errors in research execution. Peer review can help authors improve reporting quality, presentation clarity, and transparency, thereby enhancing comprehension and potential use by clinicians and scientists. Careful scrutiny can identify whether research has appropriate ethical principles, regulatory approvals, compliance, and equitable inclusion of both sexes. Peer review should consider the appropriateness of authorship and can detect duplicate publication, fabrication, falsification, plagiarism, and other misconduct.

Peer review should serve as a tempering factor on overenthusiastic authors and overstated conclusions, unwarranted extrapolations, conflation of association with causality, unsupported clinical recommendations, and spin. Spin is a well-known, unfortunately common, and often insidious bias in the presentation and interpretation of results that seeks to convince readers that the beneficial effect of an experimental treatment exceeds what has actually been found or that minimizes untoward effects. 2–4

Manuscripts often change substantially between the initial submission and the revised and improved published version. Improvement during the peer review process is not apparent to readers, who only see the final, published article, but is well known to authors, reviewers, and editors. Peer review is a defining difference in an era of proliferating predatory journals and other forms of research dissemination. Anesthesiology reviewers and editors devote considerable effort in service to helping authors improve their scientific communications, whether published in this journal or ultimately elsewhere.

In the domain of clinical research, peer review does not change the scientific premise of an investigation, the hypothesis, or the study design, although it frequently improves their communication. Peer review does not change clinical research data, although it often corrects, enhances, or strengthens the statistical analysis of those data and can markedly improve their presentation and clarity. More importantly, peer review can assess, correct, and improve the interpretation, meaning, importance, and communication of research results—and importantly, confirm that conclusions emanate strictly from those results. Peer review may occasionally fundamentally revise or even reverse clinical research interpretations and recommendations. Each of these many functions enhances reader understanding and should ultimately improve patient care.

Peer review is not a guarantee of truth, and it can be imperfect. Medical history provides many examples of peer-reviewed research that was later found to be incorrect, typically through error or occasionally from misconduct. However, peer review certainly was and remains an essential initial check and quality control that has weeded out, or corrected before publication, innumerable reports of research of insufficient quality or veracity that otherwise would have been published and thereby become publicly accessible. Additionally, science should be “self-correcting,” and peer review remains one of the most important mechanisms by which medical science achieves the self-correction that drives progress.

Quality peer review does take time. So also do the initial preparation of manuscripts and the modifications made by authors in response to peer review. Anesthesiology endeavors to provide both quality and timely peer review. Our time to first decision averages only 16 days.

The increasing emphasis on fast research dissemination, often absent quality peer review, comes mostly but not exclusively because of the immediacy of the internet and broader media and societal trends. In an era in which the economic leaders are companies whose major product is the immediacy of information (Facebook, Twitter, Google, and Apple), it is unsurprising that immediacy is challenging quality as the value proposition in the research marketplace. Nevertheless, fast is not synonymous with good. We believe that sacrificing quality on the altar of speed is unwise, benefits no one (except perhaps authors), and may ultimately diminish trust in medical research and possibly even worsen clinical care.

Another recent societal problem is the growing spillover of political and media communication trends into scientific communication. Almost half of Americans believe that science researchers overstate the implications of their research, and three in four think “the biggest problem with news about scientific research findings is the way news reporters cover it.” 5   Scientific conclusions may be perverted through internet-based campaigns of disinformation and misinformation and dissemination of misleading and biased information. 6   This threatens the public trust in the scientific enterprise and scientific knowledge. 7   Social media has made science and health vulnerable to strategic manipulation. 7 , 8   It is also “leaving peer-reviewed communication behind as some scientists begin to worry less about their citation index (which takes years to develop) and more about their Twitter response (measurable in hours).” 8   Peer-reviewed journals cannot reverse these trends, but they can at least ensure that scientific conclusions when presented are correct and clearly stated.

In addition to the premium on dissemination speed versus peer review quality, a new variant of rapid clinical research dissemination has emerged that abrogates peer review entirely: preprints. Preprints are research reports that are posted by authors in a publicly accessible online repository in place of or before publication in a peer-reviewed scholarly journal. The preprint concept is decades old, rooted in physics and mathematics, in which authors traditionally sent their hand- or typewritten manuscript draft to a few colleagues for feedback before submitting it to a journal for publication. With the advent of the internet, this process was replaced by preprint servers and public posting. With the creation of a preprint server for biology and the life sciences (bioRxiv.org), the posting of unreviewed manuscripts by basic biomedical scientists has exploded in popularity and practice. Next came the creation of medRxiv.org, a publicly accessible preprint server for disseminating unpublished and unreviewed clinical research results in their “preliminary form,” 9 and, moreover, a call for research funders to require mandatory posting of their grantees’ research reports on preprint servers before peer-reviewed publication. 10 Lack of peer review is the hallmark of preprints.

The main arguments offered by proponents of preprints are the free and near-immediate access to research results, claimed acceleration of the progress of research by immediate dissemination without peer review, and the assumption that articles will be improved by feedback from a wider group of readers alongside formal review by a few experts. Specifically claimed advantages of preprints are that they bypass the peer review process that adversely delays the dissemination of research results and “lifesaving cures” and “the months-long turnaround time of the publishing process and share findings with the community more quickly.” 11 In addition, it is claimed that preprints address “researchers recently becoming vocally frustrated about the lengthy process of distributing research through the conventional pipelines, numerous laments decrying increasingly impractical demands of journals and reviewers, complicated dynamics at play from both authors and publishers that can affect time to press” and enable “sharing papers online before (or instead of) publication in peer-reviewed journals.” 11

Preprints for clinical research have been justifiably criticized. 2, 12–15 Most importantly, medical preprints lack safeguards afforded by peer review and increase the possibility of disseminating wrong or incorrectly interpreted results. Related concerns are that preprints are unnecessary for and potentially harmful to scientific progress and a significant threat with potential consequence to patient health and safety. Preprint server proponents “assume that most preprints would subsequently be peer reviewed,” 10 possibly before or after formal publication (if published), thus enabling correction or improvement (before or after publication). However, it is estimated that careful peer review of a manuscript takes 5 to 6 hours. 1, 16 It seems highly unlikely that busy scientists will surf the web in search of preprints on which to spend half a day providing concerted informative peer review.

Preprint enthusiasts claim that peer review after posting will provide scholarly input, facilitate preprint improvement, and enhance research quality. In fact, such peer review has been scant with biologic preprints, and it seems naïve to expect it with medical preprints. In reality, most preprints receive few comments, even fewer formal reviews, and many comments that are “counted” to support the notion that preprints do undergo peer review actually come through social media; a tweet is hardly a substantive review. The idea that comments on servers will replace quality peer review is not happening now and seems unlikely to transpire. Moreover, a survey found that the lack of peer review was an important reason why authors deliberately choose to post via preprint. 17   Additionally, postdissemination peer review takes longer than traditional prepublication peer review, and there remains concern by authors who do value peer review about the quality of the post-preprint peer review process and the quality of posted preprints. 17  

Preprint server proponents state “the work in question would be available to interested readers while these processes (peer review) take place, which is more or less what happens in physics today.” 10 The lives of patients are different than the lives of subatomic particles. Preprint posting deliberately “decouples the dissemination of manuscripts from the much slower process of evaluation and certification.” 10 However, it is exactly that coupling that validates clinical research, benefits patients, improves health, and engenders public trust.

The potential for free and unfettered distribution of raw, unvetted, and potentially incorrect information to be consumed by clinicians and patients cannot be called a medical advance. Use of such information by news outlets and online web services to promote “new” and “latest” research further misinforms the public and patients and is a disservice.

Relegating peer review to the realm of option and afterthought is not in the interest of research quality and integrity or of patients and public health. There is no apparent value in abrogating peer review of clinical research and all its many attendant benefits in ensuring the quality of clinical research available to practitioners and patients. Practitioners and patients have historically not seen the unreviewed manuscript submissions that eventually become revised peer-reviewed publications. Doing so now, given the sizable fraction of clinical research manuscripts that are rejected for publication and the substantial changes in most that are published, by providing the public with unreviewed preprints seems to carry considerable risk.

An additional problem is that the same research report can be posted on several preprint servers or websites or multiple versions may exist on the same preprint site. Various versions may be the same or different, and the final peer-reviewed published article (if it ever exists) may bear little semblance to the various posted versions, which remain freely available. Which version is correct? Availability of various differing reports of the same research risks competing or incorrect information and can only generate confusion. Scientific publishing decades ago banned publication of the same research in multiple journals owing to concerns about data integrity and inappropriate reuse. Restarting this now, via preprints, seems unwise—especially in medicine.

The public cannot and should not be expected to differentiate between posting and peer-reviewed publication. Unfortunately, and worse, even some practitioners do not understand the difference. Posting is often referred to erroneously as publication. Indeed, even the world’s most prestigious scientific journals refer to posting as publication. 18   Such conflation blurs the validity of information. That peer-reviewed publications and preprints both receive digital object identifiers further blurs their distinction and may give the latter more apparent credibility in the eyes of the lay public. The preprint community (servers and scientists) continues to claim simultaneously that preprints are and are not publications, depending on how such claims meet their proclivities. Although the bioRxiv server contains the disclaimer “readers should be aware that articles on bioRxiv have not been finalized by authors, might contain errors, and report information that has not yet been accepted or endorsed in any way by the scientific or medical community” on a web page, 19   it is not on the preprint itself for readers to see (perhaps this disclaimer, and the one below, should appear on the cover page of every preprint and as a footnote on every page). Fortunately, the medRxiv home page ( http://www.medrxiv.org ) states the following disclaimer: “Preprints are preliminary reports of work that have not been certified by peer review. They should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.” Then why bother?

The popularity of preprints in the basic science world has exploded in the last 5 years, with the number of documents posted to preprint servers increasing exponentially. 20 While acknowledging the noble reasons given by preprint servers and authors for the dissemination of research by posting, three other apparent reasons are less noble. The first is competition for research funding. Major research funders (e.g., the National Institutes of Health) do not allow citation of unpublished manuscripts in grant applications but do allow citation of preprints. 21, 22 The second is the preoccupation of authors with the speed of availability. There is a growing (and disappointing) trend of authors perceiving a need to claim priority (“we are the first to report…”), grounded perhaps on fear of being “scooped.” The third is the pursuit of academic promotion, which is based largely on the number of peer-reviewed publications listed on a curriculum vitae. We now see faculty listing preprints in the peer-reviewed research publications section of their curriculum vitae. All these drivers (priority, science advancement, reputational reward, and financial return) 7 are investigator-centric. They are neither quality-centric nor patient-centric.

Who benefits if clinical research quality is sacrificed at the altar of speed? Certainly, it is not patients, public health, or the public trust in science, medicine, and the research enterprise. Enthusiasm for preprints seems to be emanating mostly from investigators, presumably because of academic or other incentives, 23   including the desire for prominence and further funding. Is this why we do medical research? Should we be investigator- or patient-centric?

Little in the argumentation espoused by proponents of clinical preprints attends to their benefit to patients. Indeed, posted preprints without all the scrutiny and benefits of peer review may lack quality and validity and may report flawed data and conclusions, which may hurt patients. 17 , 23   As stated previously, “clinical studies of poor quality can harm patients who might start or stop therapy in response to faulty data, whereas little short-term harm would be expected from an unreviewed astronomy study.” 12  

The importance of peer review in clinical research and the downside of its absence in posted preprints is illuminated by the COVID-19 pandemic. As of this date (October 1, 2020), there are 9,222 unreviewed COVID-19 SARS–CoV-2 preprints posted: 7,257 on medRxiv and 1,965 on bioRxiv. 24   To date, 33 COVID-19 articles have been retracted (0.37%), and 5 others have been temporarily retracted or have expressions of concern. 25   Of the 33 retractions, 11 (33%) were posted on an Rxiv server. The overall retraction rate in the general peer-reviewed literature is 0.04%. 26  

Based upon one of the unreviewed COVID-19 medical preprints, 27   the Commissioner of the U.S. Food and Drug Administration (the government agency entrusted more than any other to protect public health) and the President of the United States announced that convalescent plasma from COVID-19 survivors was “safe and very effective” and had been “proven to reduce mortality by 35%.” 28   Although the Commissioner later, after scientific uproar over that misinformation, “corrected” his comment in a tweet (a back page retraction to a front page headline), 29   the preprint was used to justify a Food and Drug Administration decision to issue an emergency use authorization for convalescent plasma to treat severe COVID-19. Would these errors have been prevented by peer review? We will never know.

Even if priority in clinical (and basic) research is valued, compared with the unquestionable value of quality, clinical preprints are of questionable necessity for establishing precedence in contemporary times. Clinical trials registration, which makes the existence of all such research fully public, establishes who is doing what and when. Some investigators even publish their entire clinical protocol, further making known who is studying what, and when.

For hundreds of years, patent medicines (exotic concoctions of substances, often addicting and sometimes toxic) were claimed to prevent or cure a panoply of illnesses, without any evidence of effectiveness or safety or warning of potential harm. These medical elixirs, the magic potions of snake oil salesmen and charlatans, were heavily advertised and promoted to ailing, sometimes desperate, and thoroughly unsuspecting citizens—all without any oversight, regulation, quality control, or peer review. It was not until the 20th century that medical peer review and the requirement for evidence of effectiveness and safety reined in the “Wild West” and launched the modern era of medicine, yielding the scientific discovery, progress, and improvement in human health seen today. This era rests on the bedrock of peer review, the quality ideal, and the evidence that constitutes the foundation for evidence-based medicine.

Will clinical preprints become the patent medicines of the new millennium? Do they portend the unrestricted and unregulated spillage of anything claimed as research, by anyone, and absent the quality control afforded by peer review? Like the patent medicines of a bygone era, which were heavily promoted by the newly developed advertising industry, will “posted” clinical research become fodder for the medical advertising industry and media at large, pushing who knows what information and claims on practitioners and a public already deluged with endless promotions and claims with which they cannot keep up or verify? An unsuspecting public is incapable of differentiating between the “posting” of any research observation by anyone with access to a computer and proper scholarly “publication” of peer-reviewed results and conclusions. This is particularly true of vulnerable patients with severe and/or incurable diseases, who may grasp at anything. Moreover, continuous claims of “breakthroughs” and “proven treatments” based on preprints, followed by backpedaling after challenges and outcries, further reduces public confidence in the scientific endeavor as a whole. This can create the perception that clinical science is unreliable and might be a matter of turf wars and politics instead of reliable valid evidence.

Over the past century and throughout the world, legislation has been passed and government agencies have been created to protect the public and maintain their trust in the medicines they take. Few would advocate dismantling the protections against patent medicines. Why now consider dismantling the peer review process in clinical research?

In 2019, the editors of several journals expressed a well articulated principle that they will not accept clinical research manuscripts that had been previously posted to a preprint server. 30   Their rationale was that the benefit of preprint servers in clinical research did not outweigh the potential harm to patients and scientific integrity. Major specific concerns included: “1) Preprints may be perceived by some (and used by less scrupulous investigators) as evidence even though the studies have not gone through peer review and the public may not be able to discern an unreviewed preprint from a seminal article in a leading journal; 2) It seems unlikely that the kind of prepublication dialogue that has taken place in other academic disciplines (e.g. mathematics and physics) will take place in medicine or surgery because the incentives are very different; 3) Preprints may lead to multiple competing, and perhaps even conflicting, versions of the ‘same’ content being available online at the same time, which can cause (at least) confusion and (at most) grave harm; and 4) For the vast majority of medical diagnoses, a few months of review of a study’s findings do not make a difference; the pace of discovery and dissemination generally is adequate.” These editors’ concerns and approach merit consideration if not more widespread adoption.

The potential for practitioner and public confusion regarding the difference between unregulated preprints and peer-reviewed publication is substantial. Indeed, the posting of preprints is often incorrectly termed “publication.” Peer-reviewed publications versus posted “publications” will soon become a difference without a distinction. Moreover, authors cannot have it both ways. They cannot claim a preprint as a publication for purposes of a grant (and now in some universities potentially for purposes of a degree, appointment, and/or promotion), yet claim it is not a publication for the purposes of submission to a peer-reviewed journal that does not allow prior publication. More importantly, the peer review imperative in clinical research and the role it plays in research quality, the evidence base, and patient care, constitutes an obligation to patient safety that cannot and should not be abrogated.

Peer review, clinical research quality, and the public trust in clinical research all now face an unprecedented assault. Quality peer review is a foundational tenet of Anesthesiology and underlies the Trusted Evidence we publish. Quality, timely, and unpressured peer review will continue to be a hallmark of Anesthesiology , in service to readers, patients, and the public trust.

Acknowledgments

We thank Ryan Walther, Managing Editor, and Vicki Tedeschi, Director of Digital Communications, for their valuable insights.

Competing Interests

Dr. Clark has a consulting agreement with Teikoku Pharma USA (San Jose, California). Dr. Levy reports being on Advisory and Steering Committees for Instrumentation Laboratory (Bedford, Massachusetts), Merck & Co. (Kenilworth, New Jersey), and Octapharma (Lachen, Switzerland). Dr. London reports financial relationships with Wolters Kluwer UptoDate (Philadelphia, Pennsylvania) and Springer (journal honorarium; New York, New York). The remaining authors declare no competing interests.



Peer review isn’t perfect − I know because I teach others how to do it and I’ve seen firsthand how it comes up short


JT Torres, Director of the Center for Teaching and Learning, Quinnipiac University

Disclosure statement

JT Torres does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Quinnipiac University provides funding as a member of The Conversation US.


When I teach research methods, a major focus is peer review. As a process, peer review evaluates academic papers for their quality, integrity and impact on a field, largely shaping what scientists accept as “knowledge.” By instinct, any academic follows up a new idea with the question, “Was that peer reviewed?”

Although I believe in the importance of peer review – and I help do peer reviews for several academic journals – I know how vulnerable the process can be. Not only have academics questioned the reliability of peer review for decades, but the retraction of more than 10,000 research papers in 2023 set a new record.

I had my first encounter with the flaws in the peer review process in 2015, during my first year as a Ph.D. student in educational psychology at a large land-grant university in the Pacific Northwest.

My adviser published some of the most widely cited studies in educational research. He served on several editorial boards. Some of the most recognized journals in learning science solicited his review of new studies. One day, I knocked on his office door. He answered without getting up from his chair, a printed manuscript splayed open on his lap, and waved me in.

“Good timing,” he said. “Do you have peer review experience?”

I had served on the editorial staff for literary journals and reviewed poetry and fiction submissions, but I doubted much of that transferred to scientific peer review.

“Fantastic.” He smiled in relief. “This will be real-world learning.” He handed me the manuscript from his lap and told me to have my written review back to him in a week.

I was too embarrassed to ask how one actually does peer review, so I offered an impromptu plan based on my prior experience: “I’ll make editing comments in the margins and then write a summary about the overall quality?”

His smile faded, either because of disappointment or distraction. He began responding to an email.

“Make sure the methods are sound. The results make sense. Don’t worry about the editing.”

Ultimately, I fumbled my way through, saving my adviser time on one less review he had to conduct. Afterward, I did receive good feedback and eventually became a confident peer reviewer. But at the time, I certainly was not a “peer.” I was too new in my field to evaluate methods and results, and I had not yet been exposed to enough studies to identify a surprising observation or to recognize the quality I was supposed to control. Manipulated data or subpar methods could easily have gone undetected.

Effects of bias

Knowledge is not self-evident. A survey can be designed with a problematic amount of bias, even if unintentional.

Observing a phenomenon in one context, such as an intervention helping white middle-class children learn to read, may not necessarily yield insights for how to best teach reading to children in other demographics. Debates over “the science of reading” in general have lasted decades, with researchers arguing over constantly changing “recommendations,” such as whether to teach phonics or the use of context cues.

A correlation – a student who bullies other students and plays violent video games – may not be causation. We do not know if the student became a bully because of playing violent video games. Only experts within a field would be able to notice such differences, and even then, experts do not always agree on what they notice.


As individuals, we can very often be limited by our own experiences. Let’s say in my life I only see white swans. I might form the knowledge that only white swans exist. Maybe I write a manuscript about my lifetime of observations, concluding that all swans are white. I submit that manuscript to a journal, and a “peer,” someone who also has observed a lot of swans, says, “Wait a minute, I’ve seen black swans.” That peer would communicate back to me their observations so that I can refine my knowledge.

The peer plays a pivotal role evaluating observations, with the overall goal of advancing knowledge. For example, if the above scenario were reversed, and peer reviewers who all believed that all swans were white came across the first study observing a black swan, the study would receive a lot of attention as researchers scrambled to replicate that observation.

So why was a first-year graduate student getting to stand in for an expert? Why would my review count the same as a veteran’s review? One answer: The process relies almost entirely on unpaid labor.

Despite the fact that peers are professionals, peer review is not a profession.

As a result, the same overworked scholars often receive the bulk of the peer review requests. Besides the labor inequity, a small pool of experts can lead to a narrowed process of what is publishable or what counts as knowledge, directly threatening diversity of perspectives and scholars.

Without a large enough reviewer pool, the process can easily fall victim to politics, arising from a small community recognizing each other’s work and compromising conflicts of interest. Many of the issues with peer review can be addressed by professionalizing the field, either through official recognition or compensation.

Value despite challenges

Despite these challenges, I still tell my students that peer review offers the best method for evaluating studies and advancing knowledge. Consider the statistical phenomenon suggesting that groups of people are more likely to arrive at “right answers” than individuals.

In his book “The Wisdom of Crowds,” author James Surowiecki tells the story of a county fair in 1906, where fairgoers guessed the weight of an ox. Sir Francis Galton averaged the 787 guesses and arrived at 1,197 pounds. The ox weighed 1,198 pounds.

When it comes to science and the reproduction of ideas, the wisdom of the many can account for individual outliers. Fortunately, and ironically, this is how science discredited Galton’s take on eugenics, which has overshadowed his contributions to science.

As a process, peer review theoretically works. The question is whether the peer will get the support needed to effectively conduct the review.


Peer Reviewed Literature

What Is Peer Review?


This Guide was created by Carolyn Swidrak (retired).

Research findings are communicated in many ways.  One of the most important ways is through publication in scholarly, peer-reviewed journals.

Research published in scholarly journals is held to a high standard.  It must make a credible and significant contribution to the discipline.  To ensure a very high level of quality, articles that are submitted to scholarly journals undergo a process called peer-review.

Once an article has been submitted for publication, it is reviewed by other independent, academic experts (at least two) in the same field as the authors.  These are the peers.  The peers evaluate the research and decide if it is good enough and important enough to publish.  Usually there is a back-and-forth exchange between the reviewers and the authors, including requests for revisions, before an article is published. 

Peer review is a rigorous process but the intensity varies by journal.  Some journals are very prestigious and receive many submissions for publication.  They publish only the very best, most highly regarded research. 

Terminology

The terms scholarly, academic, peer-reviewed and refereed are sometimes used interchangeably, although there are slight differences.

Scholarly and academic may refer to peer-reviewed articles, but not all scholarly and academic journals are peer-reviewed (although most are).  For example, the Harvard Business Review is an academic journal but it is editorially reviewed, not peer-reviewed.

Peer-reviewed and refereed are identical terms.

From  Peer Review in 3 Minutes  [Video], by the North Carolina State University Library, 2014, YouTube (https://youtu.be/rOCQZ7QnoN0).

What Types of Articles Are Peer-Reviewed?

Peer reviewed articles can include:

  • Original research (empirical studies)
  • Review articles
  • Systematic reviews
  • Meta-analyses

What Information Is Not Peer-Reviewed?

There is much excellent, credible information in existence that is NOT peer-reviewed.  Peer review is simply ONE MEASURE of quality.

Much of this information is referred to as "gray literature."

Government Agencies

Government websites such as the Centers for Disease Control and Prevention (CDC) publish high level, trustworthy information.  However, most of it is not peer-reviewed.  (Some of their publications are peer-reviewed, however; the journal Emerging Infectious Diseases, published by the CDC, is one example.)

Conference Proceedings

Papers from conference proceedings are not usually peer-reviewed.  They may go on to become published articles in a peer-reviewed journal. 

Dissertations

Dissertations are written by doctoral candidates, and while they are academic they are not peer-reviewed.

What About Google Scholar?

Many students like Google Scholar because it is easy to use.  While the results from Google Scholar are generally academic, they are not necessarily peer-reviewed.  Typically, you will find:

  • Peer reviewed journal articles (although they are not identified as peer-reviewed)
  • Unpublished scholarly articles (not peer-reviewed)
  • Masters theses, doctoral dissertations and other degree publications (not peer-reviewed)
  • Book citations and links to some books (not necessarily peer-reviewed)

Frequently asked questions

Why is peer review important?

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to this stringent process they go through before publication.

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group. As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased.

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research. It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations, you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity. In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity. You need to have face validity, content validity, and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity. Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity.

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.

Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method. Unlike probability sampling (which involves some form of random selection), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method, where there is not an equal chance for every member of the population to be included in the sample.

This means that you cannot use inferential statistics and make generalizations—often the goal of quantitative research. As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research.

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire study, including the collection of new data .
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves including whoever is easiest to reach (e.g., stopping people on the street), which means that not everyone has an equal chance of being selected: inclusion depends on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.
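To make the quota procedure concrete, here is a minimal sketch in Python; the age-group quotas and the shape of each recruit record are hypothetical examples, not taken from any study mentioned here.

```python
# Minimal quota sampling sketch (hypothetical quotas and records).
from collections import Counter

quotas = {"18-34": 40, "35-54": 35, "55+": 25}  # estimated subgroup proportions as target counts
filled = Counter()
sample = []

def try_recruit(person):
    """Keep a convenience-sampled person only while their subgroup's quota is unfilled."""
    group = person["age_group"]
    if filled[group] < quotas.get(group, 0):
        sample.append(person)
        filled[group] += 1

# Recruits arrive one by one (e.g., people approached on the street)
# until every quota is met.
for person in [{"age_group": "18-34"}, {"age_group": "55+"}, {"age_group": "18-34"}]:
    try_recruit(person)
    if sum(filled.values()) == sum(quotas.values()):
        break
```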

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, and no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
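As an illustration, here is a minimal sketch in Python of both checks, using simulated data because no real dataset is at hand; all variable names are hypothetical.

```python
# Correlation- and regression-based validity checks on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200
established = rng.normal(size=n)                      # an established test of the construct
new_measure = 0.8 * established + rng.normal(scale=0.5, size=n)
unrelated = rng.normal(size=n)                        # a test of a distinct construct
outcome = 0.6 * new_measure + rng.normal(scale=0.7, size=n)

r_conv, _ = stats.pearsonr(new_measure, established)  # convergent: expect a strong positive r
r_disc, _ = stats.pearsonr(new_measure, unrelated)    # discriminant: expect r near zero
slope, intercept, r_val, p_val, se = stats.linregress(new_measure, outcome)

print(f"convergent r = {r_conv:.2f}, discriminant r = {r_disc:.2f}")
print(f"predictive slope = {slope:.2f} (p = {p_val:.3g})")
```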

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity , because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity ; the other three are face validity , content validity , and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. Structured interviews are often quantitative in nature. They are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself; either way, you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing carefully constructed, high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • Next, the editor decides whether to reject the manuscript and send it back to the author, or to send it onward to the selected peer reviewer(s).
  • Then the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Exploratory research, by contrast, is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
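As a rough illustration, here is a minimal cleaning pass in Python with pandas; the column names, the validity range, and the toy data are hypothetical.

```python
# Minimal data cleaning sketch: duplicates, missing values, and outliers.
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 2, 3, 4],
    "weight_kg": [70.5, 68.0, 68.0, None, 950.0],  # one missing value, one impossible value
})

df = df.drop_duplicates(subset="id")       # remove duplicate records
df = df.dropna(subset=["weight_kg"])       # handle missing values (here: drop them)
df = df[df["weight_kg"].between(30, 250)]  # screen out values outside a plausible range
print(df)
```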

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.
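For example, here is a minimal two-stage sketch in Python; the cities, household labels, and stage sizes are hypothetical.

```python
# Two-stage sampling: randomly select clusters (cities), then units (households).
import random

random.seed(1)
population = {
    "City A": [f"A-household-{i}" for i in range(1000)],
    "City B": [f"B-household-{i}" for i in range(800)],
    "City C": [f"C-household-{i}" for i in range(1200)],
}

cities = random.sample(list(population), k=2)               # stage 1: sample clusters
sample = [hh for city in cities
          for hh in random.sample(population[city], k=50)]  # stage 2: sample units within clusters
print(len(sample), "households sampled from", cities)
```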

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .
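A quick numerical check makes the distinction clear: rescaling one variable changes the slope of the fitted line but leaves the correlation coefficient untouched. The simulated data below are only an illustration.

```python
# Same correlation, different slopes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2 * x + rng.normal(scale=0.5, size=100)

r1, _ = stats.pearsonr(x, y)
r2, _ = stats.pearsonr(x, 10 * y)            # multiplying y by 10 leaves r unchanged...
slope1 = stats.linregress(x, y).slope
slope2 = stats.linregress(x, 10 * y).slope   # ...but multiplies the slope by 10

print(f"r: {r1:.3f} vs {r2:.3f}")
print(f"slope: {slope1:.2f} vs {slope2:.2f}")
```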

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data are from a random or representative sample
  • You expect a linear relationship between the two variables

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources , and that you use the right kind of analysis to answer your questions. This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize the bias from order effects.
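One simple way to randomize question order is to shuffle the questions separately for each respondent, as in this minimal Python sketch (the questions themselves are placeholders):

```python
# Per-respondent randomization of question order to reduce order effects.
import random

questions = ["Q1 ...", "Q2 ...", "Q3 ...", "Q4 ..."]

def questionnaire_for(respondent_id):
    order = questions.copy()
    rng = random.Random(respondent_id)  # seeded per respondent, so the order is reproducible
    rng.shuffle(order)
    return order

print(questionnaire_for(7))
```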

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
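For instance, crossing the levels of two hypothetical independent variables yields the full set of conditions, as in this minimal sketch:

```python
# Generating the conditions of a 3x2 factorial design.
from itertools import product

dose = ["placebo", "low", "high"]         # first independent variable (3 levels)
timing = ["morning", "evening"]           # second independent variable (2 levels)

conditions = list(product(dose, timing))  # every combination of levels
print(len(conditions), "conditions:", conditions)  # 3 x 2 = 6
```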

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
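A minimal sketch of the lottery method in Python, assuming a hypothetical sample of 20 participants split evenly into two groups:

```python
# Random assignment by shuffling participant numbers and splitting the list.
import random

participants = list(range(1, 21))   # unique numbers for every member of the sample
random.shuffle(participants)

control = participants[:10]
experimental = participants[10:]
print("control group:", sorted(control))
print("experimental group:", sorted(experimental))
```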

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
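Here is a minimal sketch of that idea using an OLS regression in statsmodels, with simulated data and hypothetical variable names (outcome, treatment, age):

```python
# Statistically controlling for a variable by including it in a regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
age = rng.normal(40, 10, n)                           # control variable
treatment = rng.integers(0, 2, n)                     # independent variable
outcome = 2 * treatment + 0.1 * age + rng.normal(size=n)

df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "age": age})
model = smf.ols("outcome ~ treatment + age", data=df).fit()
print(model.params)  # the treatment coefficient is estimated holding age constant
```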

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable .
  • When it’s taken into account, the statistical relationship between the independent and dependent variables weakens or disappears, because the mediator accounts for part (or all) of that relationship.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing the population size by your target sample size.
  • Choose every kth member of the population as your sample (see the sketch below).
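Here is a minimal Python sketch of those three steps, using a hypothetical population list of 1,000 members and a target sample of 40:

```python
# Systematic sampling: interval k, random start, every k-th member.
import random

population = [f"person-{i}" for i in range(1000)]
sample_size = 40

k = len(population) // sample_size           # step 2: the sampling interval
start = random.randrange(k)                  # random starting point within the first interval
sample = population[start::k][:sample_size]  # step 3: every k-th member
print(len(sample), sample[:3])
```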

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
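As a minimal illustration in Python with pandas, here is a proportionate stratified sample drawn from a hypothetical population stratified by educational attainment:

```python
# Stratified sampling: random sample within each stratum, proportions preserved.
import pandas as pd

population = pd.DataFrame({
    "id": range(1, 1001),
    "education": ["high school"] * 500 + ["bachelor"] * 300 + ["graduate"] * 200,
})

# Draw 10% at random from every stratum.
sample = population.groupby("education").sample(frac=0.10, random_state=1)
print(sample["education"].value_counts())  # 50 / 30 / 20, mirroring the population
```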

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.
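In code, simple random sampling from a complete sampling frame is a one-liner; the frame below is hypothetical:

```python
# Simple random sampling: every member has an equal chance of selection.
import random

sampling_frame = [f"member-{i}" for i in range(1, 5001)]  # list of every population member
sample = random.sample(sampling_frame, k=100)
print(sample[:5])
```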

The American Community Survey  is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have a clear rank order but the distances between response options cannot be assumed to be equal.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
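For illustration, here is a minimal sketch of scoring such a scale in Python; the items, the responses, and the choice of which items are reverse-coded are hypothetical:

```python
# Scoring a 5-point Likert scale: reverse-code negatively worded items, then sum.
responses = {"item1": 4, "item2": 2, "item3": 5, "item4": 1}
reverse_coded = {"item2", "item4"}  # negatively worded items

def scale_score(responses, reverse_coded, scale_max=5):
    total = 0
    for item, value in responses.items():
        # On a 1..scale_max scale, reverse-coding maps a response x to (scale_max + 1 - x).
        total += (scale_max + 1 - value) if item in reverse_coded else value
    return total

print(scale_score(responses, reverse_coded))  # the combined scale score
```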

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, the experimenter effect, the Hawthorne effect , the testing effect, aptitude-treatment interaction, and the situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.
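
To see why this matters, consider a small simulation (the variables and coefficients are entirely hypothetical): age drives both coffee consumption and blood pressure, so the two appear strongly related even though coffee has no direct effect in the simulated data. Requires Python 3.10+ for statistics.correlation.

    import random
    import statistics

    random.seed(0)

    # Hypothetical confounding: age (the confounder) drives both coffee
    # consumption (supposed cause) and blood pressure (supposed effect);
    # coffee itself has no direct effect in this simulation.
    n = 5_000
    age = [random.uniform(20, 70) for _ in range(n)]
    coffee = [0.05 * a + random.gauss(0, 0.5) for a in age]
    blood_pressure = [90 + 0.6 * a + random.gauss(0, 5) for a in age]

    # Naive analysis: coffee and blood pressure look strongly related...
    print(statistics.correlation(coffee, blood_pressure))  # high, around 0.7

    # ...but within a narrow age band (holding the confounder roughly
    # constant), the apparent relationship largely disappears.
    band = [i for i in range(n) if 40 <= age[i] < 45]
    print(statistics.correlation([coffee[i] for i in band],
                                 [blood_pressure[i] for i in band]))  # near 0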

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .
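
As a toy illustration of how variable types narrow the choice of test, the lookup below maps common combinations of independent and dependent variable types to familiar test families; it is a sketch, not a complete decision procedure.

    # A toy illustration (not a complete decision procedure): the types of
    # your independent and dependent variables narrow down the family of
    # statistical tests you can sensibly apply.
    TEST_GUIDE = {
        ("categorical", "quantitative"): "t test / ANOVA (compare group means)",
        ("categorical", "categorical"): "chi-square test of independence",
        ("quantitative", "quantitative"): "correlation / linear regression",
    }

    def suggest_test(independent: str, dependent: str) -> str:
        return TEST_GUIDE.get((independent, dependent),
                              "consult a statistical test decision tree")

    print(suggest_test("categorical", "quantitative"))
    # -> t test / ANOVA (compare group means)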

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .
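
A minimal simulation of the crop example might look like this; the dose levels, effect size, and noise are invented for illustration.

    import random
    import statistics

    random.seed(7)

    # Hypothetical controlled experiment: manipulate the independent variable
    # (nutrient dose, kg/ha) and measure the dependent variable (crop biomass).
    doses = [0, 50, 100]     # treatment levels we choose
    plots_per_dose = 30

    def grow(dose: float) -> float:
        """Simulated biomass (t/ha) for one plot: a true effect plus noise."""
        return 5.0 + 0.02 * dose + random.gauss(0, 0.8)

    results = {dose: [grow(dose) for _ in range(plots_per_dose)]
               for dose in doses}

    for dose, biomass in results.items():
        print(f"dose {dose:>3} kg/ha -> mean biomass "
              f"{statistics.mean(biomass):.2f} t/ha")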

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
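
To make the reliability/validity distinction concrete, the sketch below simulates an instrument with a constant bias: repeated runs agree closely with each other (high reliability), yet both runs systematically miss the true values (poor validity). All numbers are hypothetical; requires Python 3.10+ for statistics.correlation.

    import random
    import statistics

    random.seed(3)

    # Hypothetical true values of the construct we want to measure.
    truth = [random.gauss(100, 15) for _ in range(200)]

    def measure(true_value: float, bias: float, noise: float) -> float:
        return true_value + bias + random.gauss(0, noise)

    # A reliable but invalid instrument: consistent readings, constant bias.
    run1 = [measure(t, bias=10, noise=1) for t in truth]
    run2 = [measure(t, bias=10, noise=1) for t in truth]

    # Reliability: do repeated measurements agree with each other?
    print(statistics.correlation(run1, run2))   # very high -> reliable

    # Validity: do measurements reflect the true values?
    mean_error = statistics.mean(m - t for m, t in zip(run1, truth))
    print(mean_error)                           # ~10 -> biased, not valid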

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
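
A minimal sketch of that idea, using an invented sample of 100 survey responses: the sample proportion estimates the population proportion, and a rough confidence interval expresses the uncertainty of that inference.

    import random

    random.seed(11)

    # Hypothetical survey: True = satisfied, False = not satisfied, from a
    # sample of 100 students out of a much larger university population.
    sample = [random.random() < 0.6 for _ in range(100)]  # stand-in for real data

    p_hat = sum(sample) / len(sample)                # sample proportion
    se = (p_hat * (1 - p_hat) / len(sample)) ** 0.5  # standard error

    # A rough 95% confidence interval for the population proportion.
    low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
    print(f"estimated satisfaction: {p_hat:.2f} (95% CI {low:.2f}-{high:.2f})")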

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


What to know about peer review

Peer review is a quality control measure for medical research. It is a process in which professionals review each other’s work to make sure that it is accurate, relevant, and significant.

Scientific researchers aim to improve medical knowledge and find better ways to treat disease. By publishing their study findings in medical journals, they enable other scientists to share their developments, test the results, and take the investigation further.

Peer review is a central part of the publication process for medical journals. The medical community considers it to be the best way of ensuring that published research is trustworthy and that any medical treatments that it advocates are safe and effective for people.

In this article, we look at the reasons for peer review and how scientists carry it out, as well as the flaws of the process.

Reasons for peer review

Peer review helps prevent the publication of flawed medical research papers.

Flawed research includes:

  • made-up findings and hoax results that do not have a proven scientific basis.
  • dangerous conclusions, recommendations, and findings that could harm people.
  • plagiarized work, meaning that an author has taken ideas or results from other researchers.

Peer review also has other functions. For example, it can guide decisions about grants for medical research funding.

For medical journals, peer review means asking experts from the same field as the authors to help editors decide whether to publish or reject a manuscript by providing a critique of the work.

There is no industry standard to dictate the details of a peer review process, but most major medical journals follow guidance from the International Committee of Medical Journal Editors.

The code offers basic rules, such as, “Reviewers’ comments should be constructive, honest, and polite.”

The Committee on Publication Ethics (COPE) is another association that offers ethical guidelines for medical peer reviewers. COPE also has a large membership among journals.

These associations do not set out rules for individual journals to follow, and they regularly remind reviewers to consult journal editors.

The code summarizes the role of a peer reviewer as follows:

“The editor is looking to them for subject knowledge, good judgment, and an honest and fair assessment of the strengths and weaknesses of the work and the manuscript.”

The peer review process is usually “blind,” which means that the reviewers do not receive any information about the identity of the authors. In most cases, the authors also do not know who carries out the peer review.

Making the review anonymous can help reduce bias. The reviewer will evaluate the paper, not the author.

For the sake of transparency, some journals, including the BMJ, have an open system, but they discourage direct contact between reviewers and authors.

Peer review helps editors decide whether to reject a paper outright or to ask for various levels of revision before publication. Most medical journals ask authors for at least minor changes.

Quality, relevance, and importance

The exact tasks of a peer reviewer vary widely, depending on the journal in question.

All peer reviewers help editors decide whether or not to publish a paper, but each journal may have different criteria.

A peer review generally addresses three common areas:

  • Quality: How well did the researchers conduct their study, and how reliable are its conclusions? These points test the credibility and accuracy of the science under evaluation.
  • Relevance: Is the paper of interest to readers of this journal and appropriate to this field of work?
  • Importance: What clinical impact could the research have? Do the findings add a new element to existing knowledge or practice?

The editor will need to decide whether a paper is relevant, whether they have space for it, and if it might be more suitable for a different journal.

If the editor decides that it is relevant, they may seek peer reviewers’ opinions on the finer points of scientific interest.

The journal editors make the final decision when it comes to publishing a study. Peer-review processes exist to inform the editor’s decision, but the editor is not under any obligation to accept the recommendations of peer reviewers.

Different methods

Different journals have different aims, and it is possible to see individual titles as “brands.”

The editorial position and best practices of the journal influence its criteria for publishing a paper.

The BMJ , for example, focus on relevant findings that are important to current disease management. They say, “nearly all of the issues we research have relevance for journal editors, authors, peer reviewers and publishers working across biomedical science.”

The Lancet state that they prioritize “reports of original research that are likely to change clinical practice or thinking about a disease.” However, they also place some emphasis on papers that are easy to understand for the “general reader” outside the medical specialty of the author.

The editors of medical journals may publish detailed information about the particular form of review that they use. This information usually appears in guidelines for authors. These policies are another way of setting standards for research quality.

What do reviewers look for?

JAMA , for example, outline the qualities that their medical editors evaluate before sending papers to peer reviewers.

This “initial pass” checks for the following points:

  • timely and original material
  • clear writing
  • appropriate study methods
  • reasonable conclusions that the data support

The information must be important, and the topic needs to be of general medical interest.

How do journals respond?

Journals can respond to submissions in a few different ways.

The editors at the New England Journal of Medicine , for instance, either reject the paper outright or choose one of three responses, using the peer review process to guide their decision.

These responses are:

  • Major revision: The editor expresses interest in the manuscript, but the authors need to make a revision because the report is “not acceptable” for publication in its current form.
  • Minor revision: “Some revisions” are necessary before the editor can accept the submission for publication.
  • Willing rejection: The authors need to “conduct further research or collect additional data” to make the manuscript suitable for publication.

Other publications might take different actions after completing a peer review.

Although peer review can help a publication retain integrity and publish content that advances the field of science, it is by no means a perfect system.

The number of journals worldwide is increasing, which means that finding an equivalent number of experienced reviewers is difficult. Peer reviewers also rarely receive financial compensation even though the process can be time-consuming and stressful, which might reduce impartiality.

Personal bias may also filter into the process, reducing its accuracy. For example, some conservative doctors, who prefer traditional methods, might reject a more innovative report, even if it is scientifically sound.

Reviewers might also form negative or positive preconceptions depending on their age, gender, nationality, and prestige.

Despite these flaws, journals use peer review to make sure that material is accurate. The editor can always reject reviews that they feel show a form of bias.

Is peer review the most reliable method of checking a research report?

Peer review is not perfect, but it does provide the editor with the opinion of multiple experts in the field or area of focus of the review. As a result, it helps ensure that the topic of study is relevant, current, and useful to the reader.

Generally, reviewers are researchers or experts in their field, and they are able to gauge the accuracy and any potential bias of a research study.

Sources

  • Bohannon, J. (2013). Who’s afraid of peer review? http://www.sciencemag.org/content/342/6154/60.full
  • Carter, B. (2017). Peer review: A good but flawed system? https://journals.sagepub.com/doi/pdf/10.1177/1367493517727320
  • Hames, I. (2013). COPE ethical guidelines for peer reviewers. https://publicationethics.org/files/u7140/Peer%20review%20guidelines.pdf
  • Instructions for authors. (2019). http://jama.jamanetwork.com/public/instructionsForAuthors.aspx#EditorialReviewandPublication
  • Kumar, R. (2013). The Science hoax: Poor journalology reflects poor training in peer review. http://www.bmj.com/content/347/bmj.f7465.full
  • Marincola, E. (2013). Science communication: Power of community. http://www.sciencemag.org/content/342/6163/1168.2.full
  • Publishing model. (n.d.). https://www.bmj.com/about-bmj/publishing-model
  • Publication process. (n.d.). http://www.nejm.org/page/media-center/publication-process
  • Responsibilities in the submission and peer-review process. (n.d.). http://www.icmje.org/recommendations/browse/roles-and-responsibilities/responsibilities-in-the-submission-and-peer-peview-process.html
  • The Lancet: Information for authors. (n.d.). http://www.thelancet.com/lancet-information-for-authors/article-types-manuscript-requirements


What is peer review?

From a publisher’s perspective, peer review functions as a filter for content, directing better quality articles to better quality journals and so creating journal brands.

Running articles through the process of peer review adds value to them. For this reason publishers need to make sure that peer review is robust.

Editor Feedback

"Pointing out the specifics about flaws in the paper’s structure is paramount. Are methods valid, is data clearly presented, and are conclusions supported by data?” (Editor feedback)

“If an editor can read your comments and understand clearly the basis for your recommendation, then you have written a helpful review.” (Editor feedback)

Principles of Peer Review

Peer Review at Its Best

What peer review does best is improve the quality of published papers by motivating authors to submit good quality work – and helping to improve that work through the peer review process. 

In fact, 90% of researchers feel that peer review improves the quality of their published paper (University of Tennessee and CIBER Research Ltd, 2013).

What the Critics Say

The peer review system is not without criticism. Studies show that even after peer review, some articles still contain inaccuracies, and that most rejected papers go on to be published somewhere else.

However, these criticisms should be understood within the context of peer review as a human activity. The occasional errors of peer review are not reasons for abandoning the process altogether – the mistakes would be worse without it.

Improving Effectiveness

Some of the ways in which Wiley is seeking to improve the efficiency of the process include:

  • Reducing the amount of repeat reviewing by innovating around transferable peer review
  • Providing training and best practice guidance to peer reviewers
  • Improving recognition of the contribution made by reviewers

Visit our Peer Review Process and Types of Peer Review pages for additional detailed information on peer review.

Transparency in Peer Review

Wiley is committed to increasing transparency in peer review, increasing accountability for the peer review process and giving recognition to the work of peer reviewers and editors. We are also actively exploring other peer review models to give researchers the options that suit them and their communities.

The limitations to our understanding of peer review

Jonathan P. Tennant and Tony Ross-Hellauer

Research Integrity and Peer Review, volume 5, article number 6 (2020). Open access; published 30 April 2020.


Peer review is embedded in the core of our knowledge generation systems, perceived as a method for establishing quality or scholarly legitimacy for research, while also often conferring academic prestige and standing on individuals. Despite its critical importance, it curiously remains poorly understood in a number of dimensions. In order to address this, we have analysed peer review to assess where the major gaps in our theoretical and empirical understanding of it lie. We identify core themes including editorial responsibility, the subjectivity and bias of reviewers, the function and quality of peer review, and the social and epistemic implications of peer review. The high-priority gaps centre on increased accountability and justification in decision-making processes for editors and on developing a deeper, empirical understanding of the social impact of peer review. Addressing this will require, at a bare minimum, the design of a consensus on a minimal set of standards for what constitutes peer review, and the development of a shared data infrastructure to support this. Such a field requires sustained funding and commitment from publishers and research funders, who both have a commitment to uphold the integrity of the published scholarly record. We use this to present a guide for the future of peer review, and for the development of a new research discipline based on the study of peer review.


Introduction

Peer review is a ubiquitous element of scholarly research quality assurance and assessment. It forms a critical part of a research and development enterprise that annually invests $2 trillion US dollars (USD) globally [ 1 ] and produces more than 3 million peer-reviewed research articles [ 2 ]. As an institutional norm governing scientific legitimacy, it plays a central role in defining the hierarchical structure of higher education and academia [ 3 ]. Publication of peer-reviewed journal articles now plays a pivotal role in research careers, conferring academic prestige and scholarly legitimacy upon research and individuals [ 4 ]. In spite of this crucial role, peer review remains poorly understood in its function and efficacy, yet is almost universally highly regarded [ 5 , 6 , 7 , 8 , 9 , 10 , 11 ].

As a core component of our immense scholarship system, peer review is routinely and widely criticised [ 12 , 13 , 14 ]. Much ink has been spilled on highly cited and widely circulated editorials either criticising or championing peer review [ 15 , 16 , 17 , 18 , 19 , 20 , 21 ]. A number of small- to medium-scale population-level studies have investigated various aspects of peer review’s functionality (see [ 12 , 22 , 23 ] for summaries); yet the reality is that there remain major gaps in our theoretical and empirical understanding of it. Research on peer review is not particularly well-developed, especially as part of the broader issue of research integrity; often produces conflicting, overlapping or inconclusive results depending on scale and scope; and seems to suffer from similar biases to much of the rest of the scholarly literature [ 8 ].

As such, there is a real danger that advocates of reform in peer review do not always appreciate the limits of our general understanding of the ideology and practice of peer review. Ill-informed generalisations abound: for example, the oft-heard ‘peer review is broken’ rhetoric [ 24 , 25 ], set against those who herald it as a ‘gold standard’. Peer review is also often taken as a hallmark of ‘quality’, despite the acknowledgement that it is an incredibly diverse and multi-modal process. The tension between these viewpoints creates a strange, dissonant rationale: that peer review is uniform and ‘the best that we have’, yet also flawed, often without full appreciation of the complexity and history of the process [ 26 , 27 , 28 , 29 ]. Consequently, debates around peer review have become quite polarised; it either remains virtually untouchable, often dogmatically so, as a deeply embedded structure within scholarly communication, or is treated as fatally corrupted and to be abandoned in toto. On the one hand, criticisms levied at peer review can be seen as challenging scientific legitimacy and authority, and therefore create resistance to developing a more nuanced and detailed understanding of it, both in practice and in theory. On the other hand, calls for radical reform risk throwing the baby out with the bathwater, treating a systematic understanding of peer review as irrelevant.

This makes inter- and intra-discipline and systematic comparisons about peer review particularly problematic, especially at a time when substantial reform is happening across the wider scholarly communication landscape. The diversity of stakeholders engaging with peer review is increasing with the ongoing changes around ‘Open Scholarship’: policymakers, think-tanks, research funders and technologists are increasingly concerned about the state of the art in research, its communication and its role in wider society, for example regarding the United Nations Sustainable Development Goals. In this context, developing a collective empirical and theoretical understanding of the function and limitations of peer review is of paramount importance. Funding specifically for such research is almost entirely absent, with exceptions such as the European Commission-funded PEERE initiative [ 30 , 31 ]. This is especially striking when compared with the rapidly accumulating attention paid to research reproducibility [ 32 , 33 , 34 , 35 , 36 ], now with calls specifically for research on reproducibility (e.g. via the Association for Psychological Science or the Dutch Research Council). There is now an imperative for the quantitative analysis of peer review as a critical and interdisciplinary field of study [ 9 , 31 , 37 , 38 , 39 ].

This article aims to better explore and demarcate the gaps in our understanding of peer review to help guide future exploration of this critical part of our knowledge infrastructure. Our primary emphasis is to provide recommendations for future research based around the need for a rigorous and coordinated programme focused on a new multi-disciplinary field of Peer Review Studies. We provide a roadmap that highlights the difficulty and priority levels for each of these recommendations. This study complements ongoing and recent work in this area around strengthening the principles and practices for peer review across stakeholders [ 40 ].

To identify gaps in our knowledge, we defined a number of core themes around peer review and peer review research. We then gathered relevant literature, based primarily on recent meta-reviews and syntheses, to establish the things that we do know about peer review. We then ‘inverted’ this knowledge and iteratively worked through each core theme to identify what we do not know, at varying levels, in a semi-systematic way. Part of this involved discussions with many colleagues, in both formal and informal settings, which greatly helped to shape our understanding of this project, highlight relevant literature and identify the many gaps we had personally overlooked. We acknowledge that this might not have been sufficient to identify all potential gaps, which are potentially vast, but it should provide a suitable method for identifying major themes of interest for the main stakeholder groups.

Within these themes, we have attempted to make clear those things about peer review which are in principle (and may likely remain) obscure, as well as those things which are in principle knowable but currently obscure in practice due to a lack of data or prior attention. The consequence of this structural interrogation is that we can begin to identify strategic research priorities and recommendations for the future of peer review research at a meta-level [ 40 ]. The assessments of priority and difficulty level are largely subjective and based on our understanding of issues surrounding data availability and their potential influence on the field of peer review. These research topics can also be used to determine what the optimal models of peer review might be between different journals, demographics and disciplines, and to interrogate what ‘quality’ means under different circumstances. Data sources here can include those obtained through journals/publishers sharing their data, empirical field studies, studying historical archives, interviews or surveys with authors, editors and reviewers, or randomised controlled trials [ 22 , 38 , 41 , 42 , 43 , 44 ].

Results and discussion

In this section, we will discuss the limits to our knowledge of peer review in a general, interdisciplinary fashion. We focus on a number of core themes. First, we discuss the role of editors; issues surrounding their accountability, biases and conflicts of interest; and the impact this can have on their decision-making processes. Second, we discuss the roles of peer reviewers themselves, including the impacts of blinding, as well as notions of expertise in what constitutes a ‘peer’. Third, we discuss the intended purpose and function of peer review and whether it actually upholds these things as a quality control mechanism. Fourth, we consider the social and epistemic consequences of peer review. Finally, we discuss some of the ongoing innovations around [open] peer review tools and services and the impacts that these might have.

Roles of editors in peer review

Editors have a non-uniform and heterogeneous set of roles across journals, typically focused in some way around decision-making processes. Here, when we refer to ‘an editor’, we mean someone in such a position of authority at a journal, including editors-in-chief, managing editors, associate editors and all similar roles. Typically, by focusing on a binary outcome for articles (i.e. reject or accept), editorial peer review has become more of a judicial role than a critical examination [ 45 ], as the focus becomes more about the decision than about the process leading to that decision. Justifications or criteria for editorial rejections (whether ‘desk’ rejections or rejections following peer review), and for decisions overall, are rarely given and remain poorly known, despite being perhaps one of the most frustrating elements of the scholarly publishing process. It is rarely explicitly known whether journals send all submissions out for peer review or are selective in some way, for example based on the scope of the journal and the perceived fit of articles. There are almost no studies regarding the nature of editorial comments and how these might differ from, or complement, respective reviewer comments. An analysis of these issues across a wide range of journals and disciplines would provide insight into one of the most important components of scholarly research.

We currently have only patchy insight into factors such as the number of times a paper might have been rejected before final acceptance, and further critical insight is needed into the general study of acceptance rates [ 46 , 47 , 48 ]. This is especially so as authors will very often search for another journal or venue in which to have their paper published when rejected by a single journal, which has important implications for journal-based evaluation systems. Limited available evidence suggests that a relatively small pool of researchers does the majority of the reviewing work [ 49 , 50 , 51 ]. This raises questions about how often editors elect to use ‘good’ or critical reviewers without exhausting or overworking them, and the potential consequences this might have on professional or personal relationships between the different parties and their respective reputations. Software does now exist to help automate these procedures (e.g. ScholarOne’s Reviewer Locator), but its role and usage, and how it might affect who is invited to review and how often, remain largely unknown.

Editors wield supreme, executive power in the scholarly publishing decision-making process, rather than it being derived from a mandate from the masses. Because of this, scholarly publishing is inherently meritocratic (ideologically and perhaps in practice), rather than being democratic. Despite this, how editors attained their positions is rarely known, as are the motivations behind why some editors might start their own journal, write their own editorials or solicit submissions from other researchers. This is further complicated when conflicts might arise between the commercial interests or influence of a publisher (e.g. selling journals) and editorial concepts around academic freedom and intellectual honesty and integrity. There are around 33,100 active scholarly peer-reviewed English-language journals, each with their own editorial and publishing standards [ 2 ], emphasising the potential scale of this problem.

Editorial decisions are largely subjective and based on individuals and their relative competencies and motivations; this includes, for example, how they see their journal fitting within the present and future research and publishing landscape, as well as the perceived impact a paper might have both on their journal and on the research field. These biases are extremely difficult to conceptualise and measure, and almost certainly always lacking in impartiality. Such editorial biases also relate to issues of epistemic diversity within the editorial process itself, which can lead to knowledge homogenisation, a perpetuation of the ‘Matthew effect’ in scholarly research [ 52 , 53 ] and inequities in the diffusion of scientific ideas [ 54 ]. These issues are further exacerbated by the fact that editors often fail to disclose their conflicts of interest, which can be viewed as compromising their objectivity [ 55 , 56 ]. It is similarly unclear how seriously editors treat reviewer reports, and what dialogue takes place between editors, reviewers and authors [ 57 ]: for example, how an editor might signal to authors which reviewer comments are more important to address and which can be overlooked, and how authors might then deal with these signals. Just like questionable research practices or misconduct such as fraud, these factors will often remain invisible to peer review and the research community [ 58 ].

Journals and publishers can assist with these issues in a number of ways: for example, by simply providing the name of the handling editor and any other editorial staff involved in a manuscript, including any other professional roles they hold, any previous interactions they might have had with both reviewers and authors, and the depth of evaluation they applied to a manuscript. However, such information could inadvertently lead to superficial judgements of research based more on the status of editors. Journals can also share data on their peer review workflows, including referee recommendations where possible [ 59 ]. The relationship of such recommendations to editorial decisions has so far been studied only at a relatively small scale for single journals [ 60 , 61 ] and requires further investigation [ 62 ]. Disclosure of this information would not only provide great insight into editorial decisions and their legitimacy, but would also be useful in improving review and editorial management systems, including through training and support [ 6 ]. It could also help to clarify what conditions are required to meet the quality criteria at different journals, as well as whether authors are made fully aware of review reports and how these intersect with those criteria.

Role of reviewers in peer review

It is known that, to various degrees, factors such as author nationality, prestige of institutional affiliation, reviewer nationality, gender, research discipline, confirmation bias and publication bias all affect reviewer impartiality in various ways [ 63 ], with potential negative downstream consequences for the composition of the scholarly record, as well as for the authors themselves. However, this understanding of peer review bias is typically based on, and therefore limited to, available (i.e. published) data—usually at a small, journal-based scale—and is not fully understood at a systems level [ 37 , 64 ]. These biases can range from subtle differences to factors that majorly influence the partiality of individuals, each one being a shortcut to decision-making that potentially compromises our ability to think rationally. Additional personal factors, such as life experiences, thinking style, workload pressures, psychography, emotional state and cognitive capacity, can all potentially influence reviewers, and almost certainly do. Furthermore, there remain a number of additional complex and hidden social dimensions of bias that can potentially impact review integrity. For example, relationships (professional or otherwise) between authors and reviewers remain largely unknown—whether they are rivals or competitors, colleagues, collaborators or even friends/partners—each of which can introduce bias in a different way into peer review [ 9 , 65 , 66 ]. Finally, the relationship between journal policies relating to these factors, the practical application of those policies, and the consequences of both still remains poorly understood.

The potential range of biases calls into question what defines a ‘peer’ and our understanding of ‘expertise’. Expertise and the status of a peer are both incredibly multi-dimensional concepts, varying across research disciplines, communities, demographics, career stages, research histories and through time. Yet the factors that prescribe both concepts often remain highly concealed, and both can ultimately affect reviewer and editorial decisions, for example, how reviewers might select which elements of an article to be more critical of, and subjective notions of ‘quality’ or relevance. It is unclear whether reviewers ‘get better’ with time and experience, and whether the ‘quality’ of their reviewing varies depending on the type of journal they are reviewing for, or even the form of research (e.g. empirical versus theoretical).

Often, there is a lack of distinction between the referee as a judge, a juror and an independent assessor. This raises a number of pertinent questions about the role of reviewer recommendations, the function of which varies greatly between publishers, journals and disciplines [ 5 ]. These expectations for reviewers remain almost universally unknown. If access to the methods, software and data for replication is provided, it is often unclear whether reviewers are requested or expected to perform these tests individually or whether the editorial staff are to do so. The fact that the assessment of manuscripts requires a holistic view, attending to a variety of factors including stylistic aspects or the novelty of findings, makes the task and depth of reviewing extremely challenging. It is also exceptionally difficult or impossible to review data once they have been collected, and therefore there is an inherent element of trust that methods and protocols have been executed correctly and in good faith. Exceptions do exist, largely from the software community, with both the Journal of Open Research Software and the Journal of Open Source Software clearly requiring code review as part of their processes. While there is a general lack of rewards/incentives that could motivate reviewers to embark on rigorous testing or replications, some journals do now offer incentives such as credits or discounts on future publications for performing reviews. However, how widespread or attractive these are for researchers, and the potential impact they might have, remains poorly known. Editors and journals have strong incentives to increase their internal controls, an effort they often informally outsource to reviewers who may be uninformed about these expectations.

Only recently, in the field of biomedicine, has there been any research conducted into the role and competencies of editors and peer reviewers [ 6 , 67 , 68 ]. Here, reviewers were expected to perform an inconsistent variety of tasks, including providing recommendations, addressing ethical concerns, assessing the content of the manuscript and making general comments about submitted manuscripts. While some information can be gained by having journals share data on their peer review workflows, the decisions made by editors and the respective recommendations from reviewers, this will only paint an incomplete picture of the functional role of reviewers and how this variation in the division of labour and responsibility influences ultimate decision-making processes. While this division can serve to share editorial risk in decision-making [ 69 ], it often blurs responsibility, with negative implications for the legitimacy of the decision as perceived by authors [ 56 ].

The only thing close to a system-wide standard that we are aware of in this regard is the ‘Ethical Guidelines for peer reviewers’ from the Committee on Publication Ethics (COPE). At present, we have almost no understanding of whether authors and reviewers comply with such policies, irrespective of whether they actually agree with them. For example, we do not know how many reviewers sign their reports even during a blinded process, what the potential consequences of this might be (e.g. for reviewer honesty and integrity), or even the extent to which such anonymity is compromised [ 70 ]. There is an obligation here for journals to provide absolute clarity regarding the roles and expectations of reviewers and how their reviews will be used, and to provide data on policy compliance through time.

One of the most critical ongoing debates in ‘open peer review’ concerns whether blinding should be preferred because it offers justifiable protection, or avoided because it can encourage irresponsible behaviour during peer review [ 63 , 70 , 71 ]. For example, it is commonly claimed that revealing reviewer identities could be detrimental or off-putting to early career researchers or other higher-risk or under-represented communities within research, for fear of offending senior researchers and suffering reprisals. Such reprisals could be either public or more subtle (e.g. future rejection of grant proposals or sabotage of collaborations). It has also recently been argued that a consequence of such blinding is the concealment of the social structures that perpetuate such biases or inequities, rather than actually dealing with the root causes [ 72 ], and that this reflects more of a problem with the ability of individuals within academia to abuse their status to the detriment of others [ 64 ]. However, the extent to which such fears are based on real and widespread events, rather than on conjecture or ‘anecdata’, remains largely unknown; a recent survey in psychology found that such fears are greatly exaggerated relative to reality [ 73 ], but this might not necessarily extrapolate to other research fields. Additionally, there is a long history of open identification at some publishers (e.g. PeerJ, BioMed Central) that could be leveraged to help assess the basis for these fears. There is also some evidence to suggest that blinding is often unsuccessful, for example in nursing journals [ 74 ]. Irrespective, any system moving towards open identities must remain mindful of these concerns and make sure such risks can be avoided. It remains to be seen whether even stricter rules and guidelines for manuscript handling, with ‘triple-blinded’ and automated systems, can provide a better guard against both conscious and unconscious bias [ 75 ].

There are also critical elements of peer review that can be exposed by providing transparency into the identity of reviewers [ 16 , 76 ]. Presently available evidence on this often remains inconclusive or local in scale, or is even in conflict as to what the optimal model for reducing or alleviating bias might be [ 43 , 70 , 77 , 78 , 79 , 80 , 81 ]. Simply exposing a name does not automatically mean that all identity-related biases are eliminated, but it serves three major purposes:

First, if reviewer identities are known in advance, we might typically expect reviewers to be more critical and objective, rather than subjective, during the review process itself, as transparency in this case imposes at least partial accountability. It can then be examined whether this leads to higher-quality reviews, lengthier reports, longer submission times or changed reviewer recommendations, and what impact this might have on research quality overall; factors that have been mostly overlooked in previous investigations of this topic. Journals can use these data to assess the potential impact on the cost and time management of peer review.

Second, it means that some of the relationships and motivations of a reviewer can be inspected, as well as any other factors that might be influencing their decision (e.g. status, affiliation, gender). These can then be used to assess the uptake of and attitudes towards open identities, and whether there are systematic biases in the process towards certain demographics. More pragmatically for journals, these can then be compared to reviewer decline rates to streamline their invitation processes.

Third, it means that if some sort of bias or misconduct does occur during the process, then it is easier to address if the identity of the reviewer is known, for example, by a third-party organisation such as COPE.

Functionality and quality of peer review

Peer review is now almost ubiquitous among scholarly journals and is considered an automatically required, integrated part of the publication process, whether it is functionally necessary or not. There is a lack of consensus about what peer review is, what it is for and what differentiates a ‘good’ review from a ‘bad’ review, or how even to begin to define review ‘quality’ [ 82 ]. This lack of clarity can lead to confusion in discussions, policies and practices. Research ‘quality’ is something that inherently evolves through time; for example, the impact of a particular discovery might not be recognised until many years after its original publication. Furthermore, there is an important distinction between ‘value’ and ‘quality’ for peer review and research: the former is a more subjective trait, related to the perceived usage and impact of an output, whereas the latter is more about the process itself as an intrinsic mark of rigour, validation or certification [ 83 ].

There are all sorts of reasons why this lack of clarity has transpired, primarily owing to the closed nature of the process. One major part of this uncertainty pertains to the fact that, during the review process, we typically have no idea what changes were actually made between successive versions. Comparison between preprints shared on arXiv and bioRxiv and their final published versions, for example, has shown that overall peer review seems to contribute very few changes and that the quality of reporting is similar [ 69 , 84 ]. Assessment of the actual ‘value add’ of peer review remains difficult at scale, despite version control systems being technologically easy to implement [ 23 , 85 , 86 ], for example at the Journal of Open Source Software .
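
As a rough illustration of how such version comparisons can be automated, the sketch below uses Python's standard difflib module to quantify and display the differences between two invented manuscript fragments; real studies of this kind [ 69 , 84 ] use far more sophisticated text-mining pipelines.

    import difflib

    # Hypothetical fragments of a preprint and its final published version.
    preprint = """Our results suggest a strong effect of treatment X.
    We conclude that X should be adopted widely.""".splitlines()

    published = """Our results suggest a moderate effect of treatment X.
    We conclude that X warrants further study before wide adoption.""".splitlines()

    # Ratio of matching content: a crude proxy for how much review changed.
    similarity = difflib.SequenceMatcher(None, "\n".join(preprint),
                                         "\n".join(published)).ratio()
    print(f"similarity: {similarity:.2f}")

    # A line-level diff shows where the changes occurred.
    for line in difflib.unified_diff(preprint, published, lineterm=""):
        print(line)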

This problem is ingrained in the inherently diverse nature of the scholarly research enterprise, and thus peer review quality can relate to a multitude of different factors, e.g. rigorous methodological interrogation, identification of statistical errors and flaws, speed or turn-around of review, or strengthening of argumentation style or narrative [ 87 ]. Such elements that might contribute towards quality are difficult to assess in any formative way due to the inherent secrecy. We are often unable to discern whether peer reviews are more about form or matter, whether they have scrutinised enough to detect errors, whether or not they have actually filtered out ‘bad’ or flawed research, whether the data, software and materials were appropriately inspected, or whether replication/reproducibility attempts were made. This problem is reflected by the discussion above regarding the expected roles of reviewers. If research reports were made openly accessible, they could be systematically inspected to see what peer review entailed at different levels, and provide empirical evidence for its function. This could then also be used to create standardised peer review ‘check-lists’ to help guide reviewers through the process. Research and development of tools for measuring the quality of peer review are only in their relative infancy [ 82 ], and even then focused mostly on disciplines such as biomedicine [ 88 ].

It is entirely possible that some publishers have already gathered, processed and analysed peer review data internally to measure and improve their own systems. This represents a potentially large file drawer problem, as such information is of only limited use if kept for private purposes, or if made public only when it enhances the image or prestige of a journal. There are a number of elements of the peer review process for which empirical data could be gathered, at varying degrees of difficulty, to better understand its functionality (a sketch of what one record in such a dataset might look like follows the list), including:

  • Duration of the different phases of the process (note that this is not equivalent to actual time spent) [ 89 , 90 ]
  • Number of referee reports per article
  • Length of referee reports
  • Number of rounds of peer review per article
  • Whether code, data and materials were made available during the review process
  • Whether any available code, data or materials were inspected/analysed during the process
  • The proportion of reviewers who decline offers to review and, if possible, why they do
  • Relative acceptance rates following peer review
  • Who decides whether identities should be made open (i.e. the journal, authors, reviewers and/or editors), and when these decisions are made in the process
  • Who decides whether the reports should be made open, when these decisions are made during the process, and what should be included in them (e.g. editorial comments)
  • Proportion of articles that get ‘desk rejections’ compared to rejection after peer review
  • Ultimate fate of submitted manuscripts
  • Whether the journal an article was ultimately published in was the journal that performed the review (important now with cascading review systems)
  • Whether editors assign particular reviewers in order to generate a specific desired outcome
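
To make the idea of a shared data infrastructure concrete, here is a minimal sketch of what a single record in such a dataset might look like; every field name is illustrative, since no such standard currently exists.

    from dataclasses import dataclass, field
    from typing import Optional

    # A hypothetical record for one manuscript's journey through peer review;
    # the fields mirror some of the data elements listed above.
    @dataclass
    class ReviewProcessRecord:
        manuscript_id: str
        journal: str
        desk_rejected: bool
        rounds_of_review: int
        referee_reports: int
        report_lengths_words: list[int] = field(default_factory=list)
        days_in_review: Optional[int] = None
        data_available_to_reviewers: bool = False
        identities_open: bool = False
        final_decision: str = "unknown"  # e.g. "accept", "reject", "withdrawn"

    record = ReviewProcessRecord(
        manuscript_id="MS-0001", journal="Journal A",
        desk_rejected=False, rounds_of_review=2, referee_reports=3,
        report_lengths_words=[850, 400, 1200], days_in_review=94,
    )
    print(record.final_decision)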

These represent just some of the potential data sources that could be used to provide evidence for the key question of what peer review actually does, and to compare these factors through time, across and between disciplines, and systematically. For example, it would be interesting to look at how peer review varies at a number of levels:

  • Between journals of different ‘prestige’
  • Between journals and publishers from across different disciplines
  • Whether any differences exist between learned society journals and those owned by commercial publishers
  • Whether peer review varies geographically
  • Whether there are some individuals or laboratories who perform to an exceptional standard during peer review
  • How all of these factors might have evolved through time

Peer review and reproducibility

There are two core elements to examine here. First, if peer review is taken to be a mark of research quality, this raises the question of whether peer review itself should be reproducible; an issue that remains controversial. There is little concrete evidence that it is, and research into inter-reviewer reliability (just one aspect of reproducibility) shows variable results [ 58 , 91 ]. Second, peer review is currently limited in its ability to physically reproduce the experiments reported, despite reproducibility being a core tenet of scholarship. Thus, the default is often to trust that experiments were performed correctly, that data were gathered and analysed appropriately, and that the results reflect this. This issue is tied to the discussions above regarding the expectations of reviewers and the function of peer review. Indeed, it remains critically unknown whether specialised reviewers (e.g. in methods or statistics) are used and actually apply their skills during the review process to test the rigour of the performed research. There is potential here for automated services to play a role in improving reproducibility, for example in checking statistical analyses for accuracy. However, increasing adoption of automated services during peer review is likely to raise even more questions about the role and function of human reviewers.

This is perhaps one of the main reasons why fraudulent behaviour, or questionable research practices, still enter the scholarly record at high proportions, even though peer review occurs [ 15 , 92 ]. The Peer Reviewers’ Openness Initiative was a bold step towards recognising this [ 69 , 91 ], in terms of increasing the transparency and rigour of the review process. However, it has not been widely adopted as part of any standardised review process and remains relatively poorly known and implemented. This is deeply problematic, as it means that reproducibility is something often considered post hoc to the publication process, rather than a formal requirement for it and as something tested by the review process. This has a number of consequences such as the ongoing and widespread ‘reproducibility crises’ [ 32 ]. Much of this could probably have been avoided if researchers were more cautious in conducting research and interpreting results, if incentives were aligned more with performing high-quality research than publishing in ‘high impact journals’ [ 84 , 93 , 94 ] and if peer review was more effective at ensuring reproducibility.

Social and epistemic impacts of peer review

In terms of the influence of peer review subsequent to the formalised process itself, the actual impact it has on scientific discourse remains virtually unknown. Peer review is a bi-directional process: authors, editors, and reviewers all stand to gain from it as a learning experience and as a source of new ideas. Not only is this learning potential highly variable across disciplines, it is also incredibly difficult to measure empirically. Little attention has been paid to the relationship between peer review as a mark of quality assurance and other, post-publication forms of research evaluation. Recent research has documented the extent to which evaluation is based on criteria such as the journal impact factor [ 93 ], something decoupled from peer review. Indeed, the relationship between pre-publication evaluation and post-publication assessment has, as far as we are aware, received virtually no attention at the individual, journal, publisher, discipline, institute or national level. It is entirely possible that a deeper empirical understanding of peer review as a primary form of research evaluation could help to reduce the burden and impact of secondary evaluation systems used for career advancement.

One potential solution to this has been an increasing push to publish review reports. However, as with open identification, such a process creates a number of potential issues and further questions. For example, does the knowledge that review reports will be made public deter reviewers from accepting requests to review? And does this knowledge change the behaviour of reviewers and the tone and quality of their reports? The effect could go both ways: some researchers, knowing that their reports will be published, will strive to make them as critical, constructive, and detailed as possible, irrespective of whether their names are attached; others might feel that this appears too combative and thus be more lenient in their reviews. There are therefore outstanding questions about how opening up reports affects the quality, substance, length and submission time of review reports, as well as any associated costs. This is further confounded by the fact that the public record of review reports will be inherently skewed towards articles that are ultimately published, excluding reviews of articles that are rejected or remain unpublished.

Regarding many of the social issues we have described, care needs to be taken to distinguish between biases and traits that are intrinsic to peer review itself and those that are passively entrained within peer review by larger socio-cultural factors in research. For example, if a research community is locally centralised and homogeneous, this will be reflected in lower epistemic diversity during peer review, whereas the opposite may be true for more heterogeneous and decentralised research communities. It is imperative to understand not only which diverse opinions are being excluded from peer review, but also the consequences of that epistemic exclusion. The totality of bias in human-driven peer review can likely never be fully eradicated, and it is unlikely that we will ever witness a purely objective process. However, by assessing and contextualising these biases in as much depth as possible, we can at least acknowledge and understand their influence, and begin systematically to mitigate any deleterious effects they might have on peer review.

Furthermore, there is relatively little understanding of the impact of peer review on innovation. It has previously been claimed that peer review, as commonly employed, leads to conservatism through the suppression of innovation or greater acknowledgement of limitations [ 45 , 95 ], as well as to ideological bias, but it is difficult to gauge the reality of this. If peer review leads to epistemic homogeneity through its conservatism, this can have negative consequences for the replicability of research findings [ 96 ]. As such, the dynamic trade-off between innovation and quality control remains virtually unknown; the former relies on creativity and originality, the latter on consensus, accuracy and precision. Where is the optimal point between rapid dissemination and slow, critical assessment? At some point along this spectrum, does peer review become redundant or functionally obsolete in its present forms? Available evidence shows that peer review often fails to recognise even Nobel-quality research, sometimes rejecting it outright and thus resisting the process of scientific discovery [ 97 , 98 ]. Providing insight into these questions is critical, as it affects our understanding of the whole ideology of peer review in advancing scholarship, as well as its ability to detect or assign value to ‘impactful’ research. This is complicated further by the fact that peer review is often seen as a way to generate trust in results and used as a method to distribute academic capital and standing among research communities [ 4 , 99 ], while we retain only a very limited understanding of whether it has achieved its objectives as a filtering method [ 83 ]. Irrespective of what the process entailed at the article level, peer review still assigns an imprimatur, a ‘stamp of approval’ or endorsement, over which knowledge enters the scholarly record and can thus be built upon.

Beyond traditional peer review

As well as all of the above, which centre on ‘traditional’ journal-coupled editorial peer review, there are now a number of novel services that allow for different forms of peer review. These are often platforms that decouple peer review from journals in one way or another, making it more participatory or offering ‘post-publication’ review, either of preprints or of final published versions of record [ 23 , 85 ]. Previous research has shown that on some open commenting systems, user engagement with research articles tends to be relatively low [ 89 , 100 ]. Thus, there is the existential question of how to overcome low uptake of open participation (whether on preprints or final-version articles). A critical element here is whether an open participatory process requires editorial control, whether elements of it can be automated, and to what extent ‘quality control’ over referee selection affects the process; for example, whether it makes conflicts of interest more difficult to detect. There is no doubt that editors will continue to play a prominent role in arbitration, quality control, and encouraging engagement while fostering a community environment [ 76 ]. However, whether this can be done successfully within an open participatory framework, either with or without journals, remains to be seen. One potentially disruptive element here is that of micro-publications, for which engagement is less time-consuming and participation a simpler, more streamlined task, potentially increasing reviewer uptake. However, this assumption relies on editors maintaining a role similar to their traditional function, and one remaining question is what impact removing editorial mediation would have on open participation.

Several innovative systems for interactive peer review have emerged in recent decades. These include the Copernicus system of journals, EMBO, eLife, and the Frontiers series. Here, peer review remains a largely editorially controlled process, but the exchange between reviewers and authors is treated more as a digital discussion, usually continuing until some form of consensus is reached to guide an editorial decision. At present, it remains largely unknown whether this process is superior to the traditional unilateral series of exchanges, in the sense of leading to generally higher review quality or more frequent error detection. Logistically, it also remains largely unknown whether this leads to a faster and more efficient review process overall, with potential consequences for the overall cost of managing and conducting peer review.

The World Wide Web was created principally for sharing research results and articles prior to peer review (i.e. preprints), either in parallel to or circumnavigating the slower and more costly journal-coupled review and communication processes [ 90 , 101 , 102 ]. However, this does not mean that preprints are the solution to all issues around peer review and scholarly publishing, especially as they are still regarded differently by different communities; something that undoubtedly requires further study [ 99 ]. With the recent explosion of preprints in the Life Sciences [ 103 ], a number of services have emerged that ‘overlay’ peer review in one form or another on top of the developing preprint infrastructure [ 104 ], for example, biOverlay in the Life Sciences. However, the general uptake of such services appears to be fairly low [ 105 ]; most recently, this led Academic Karma, a leading platform in this area, to shut down (April 2019). In February 2018, the Prelights service was launched to help highlight biological preprints, and Peer Community In offers a service for reviewing and recommending preprints, both independent of journals. PREreview is another recently launched service that facilitates the collaborative review of preprints [ 106 ]. The impact and potential sustainability of these innovative ‘deconstruction’ services, among others, is presently unknown. The fate of articles that pass through such a process also remains obscure: do they end up being published in journals too, or do authors feel that the review and communication process is sufficient to render this unnecessary?

As well as services offering commenting functions on top of preprints, a number also exist for commenting on top of final, published versions of peer-reviewed articles. This includes services such as ScienceOpen and PubPub, as well as those that mimic the Stack Overflow style of commenting, including PhysicsOverflow, an open platform for real-time discussion within the physics community combined with an open peer review system, and MathOverflow, both often considered akin to an ‘arXiv 2.0’. A system that sits in both this category and that of openly pre-reviewed manuscripts is that developed by F1000, a growing service backed by big players including the Gates Foundation and Wellcome Trust [ 107 ]. It works in virtually the same way as a traditional journal, except that submitted articles are published online and then subject to continuous, successive and versioned rounds of editorially managed open peer review. These services are all designed on the premise that review and publication should be more of a continuous process, rather than the quasi-final, discretised versions of manuscripts typically published today. There remains a large gap in our understanding of what motivates people to engage, or not, with such platforms, and of whether they change the quality of peer review.

Researcher attitudes towards [open] peer review

Within all of the ongoing innovations around peer review, shockingly little rigorous research has been conducted on researcher attitudes towards these changes. A recent survey ( n = 3,062) provided a basis for understanding researcher perceptions towards changes around open peer review (OPR) [ 22 ]. Many of these problems must be framed against how researchers also view traditional forms of peer review, as well as against concurrent developments around preprints in different fields. With OPR now moving more into the mainstream in a highly variable manner, there remain a number of outstanding issues that require further investigation:

Are the findings of levels of experience with and attitudes towards OPR reported in the survey results above consistent across studies?

Which specific OPR systems (run via journals or third-party services) do users (within differing disciplines) most prefer?

What measures might further incentivise uptake of OPR?

How fixed are attitudes to the various facets of OPR and how might they be changed?

How might shifting attitudes towards OPR impact willingness to engage with the process?

What are attitudes to OPR for research outputs other than journal articles (e.g. data, software, conference submissions, project proposals, etc.)?

How have attitudes changed over time? As OPR gains familiarity amongst researchers and is further adopted in scholarly publishing, do attitudes towards specific elements like open identities change? In what ways?

To what extent are attitudes and practices regarding OPR consistent? What factors influence any discrepancies?

Is an openly participatory process more attractive to reviewers, and is it more effective than traditional peer review? And if so, how many participants does it take to be as or more effective?

Does openness change the demographic participation in peer review, for authors, editors, and reviewers?

This review of the limits to our understanding of peer review has aimed to make clear that there are still dangerously large gaps in our knowledge of this essential component of scholarly communication. In Table 1, we present a tabulated roadmap summarising the peer review topics that should be researched.

Based on this roadmap, we see several high-priority ways in which to make immediate progress.

Establish Peer Review Studies as a discrete, multi-disciplinary and global research field, combining elements of the social sciences, history, philosophy, scientometrics, network analysis, psychology, library studies, and journalology.

Reach a global consensus on, and define a minimum standard for, what constitutes peer review; possibly as a ‘boundary object’ in order to accommodate field-specific variations [ 108 ].

Conduct a full systematic review of our entire global knowledge pool on peer review, so that the gaps in our knowledge can be more rigorously identified and quantitatively demarcated.

These three key items should then be used as the basis for systematising the new research programmes revealed by our analysis, combining new collaborations between researchers and publishers. Supporting this will require more funding, from both research funding bodies and publishers, both of whom need to recognise their respective duties in stewarding and optimising the quality of published research. A joint infrastructure for data sharing is clearly required as a foundation, based around a minimal set of criteria for standards and reporting (a sketch of what a minimal shared record might look like follows). The ultimate focus of this research field should be the efficacy of, and value added by, peer review in all its different forms. Designing a core outcome set would help to optimise and streamline the process, making it more efficient and effective for all relevant stakeholders.
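
As an illustration of what such a minimal reporting record might contain, the sketch below defines one hypothetical review event. Every field name here is our assumption for illustration; no such standard currently exists.

```python
# A minimal sketch of a shared peer-review data record; all field names
# are illustrative assumptions, not an agreed standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class ReviewEvent:
    manuscript_id: str                 # pseudonymised identifier
    journal_issn: str
    discipline: str
    review_model: str                  # e.g. "single-blind", "open"
    rounds: int
    days_submission_to_decision: int
    decision: str                      # e.g. "accept", "major revision"
    report_public: bool

event = ReviewEvent("ms-0001", "2058-8615", "scientometrics",
                    "open", 2, 94, "accept", True)
print(json.dumps(asdict(event), indent=2))
```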

All of the research items in this roadmap can be explored in a variety of ways and at a number of different levels, for example, across journals, publishers, disciplines, demographics and through time. Views on the relative importance of these issues may vary; however, in our opinion, based on the weights we would assign to their relative difficulty and importance, it would make sense to focus particularly on the following issues:

Declaration of editorial conflicts of interest and documentation of any editorial misconduct

Expectations for, and motivations and competencies of, reviewers

Researcher attitudes towards the various elements of [open] peer review

Taking a broad view, it is pertinent to tie our roadmap into wider questions surrounding reform in higher education and research, including ongoing changes in research funding and assessment. At present, peer review is systematically under-valued in the places where most of it takes place: academic institutions. Peer review needs to be taken seriously as an activity by hiring, review, promotion and tenure committees, with careful consideration given to any potential competitive power dynamics, particularly those acting against earlier-career researchers or other higher-risk demographics. Valuing it at this level provides a strong incentive to learn how to do peer review properly, while appreciating the deep complexity and diversity that surround the process. This includes establishing the baseline knowledge and skills that form core competencies for those engaged in the review process, so that they can fulfil their duties more appropriately [ 6 ]. This opening of the ‘black box’ of peer review will be critical for the future of an optimised peer review system, and for avoiding malpractice in the process.

There are several related elements to this discussion that we elected not to cover in order to maintain focus. One of these is the issue of cost. Scholarly publishers often state that one of the most critical things they do is manage the peer review process, which is almost invariably performed as a voluntary service by researchers. Some estimates of the human time and cost involved do exist: one estimate from 2008 put the value of voluntary peer review services at around £1.9 billion per year [ 109 ], and another suggests that around 15 million hours are wasted each year through redundancy in the reject-resubmit cycle [ 110 ]. Together, these show clear potential for improved efficiency in many aspects of peer review, which requires further investigation. Further information on the total financial burden of peer review might enable a cost-benefit analysis that could benefit all stakeholders engaged in the future of peer review. Such an analysis could weigh the benefits of quality control via peer review against the time and cost involved, as well as the impact of peer review in preventing certain forms of knowledge from entering scholarly discourse, and how this reflects epistemic diversity across the wider research enterprise.
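
The arithmetic behind such estimates is simple, as the sketch below shows; all of the input numbers are illustrative assumptions of ours, not the figures used in the cited studies [ 109 , 110 ].

```python
# Back-of-envelope arithmetic for the value of voluntary peer review.
# All three inputs are illustrative assumptions only.
reviews_per_year = 5_000_000    # assumed global number of reviews per year
hours_per_review = 5            # assumed average reviewer effort per review
hourly_rate_gbp = 40            # assumed notional value of researcher time

total_hours = reviews_per_year * hours_per_review
total_value_gbp = total_hours * hourly_rate_gbp

print(f"{total_hours / 1e6:.0f} million hours, "
      f"~£{total_value_gbp / 1e9:.1f} billion per year")
```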

Conclusions

This article addressed unknowns within our current understanding of journal-coupled peer review. It represents a critical overview that is distinct from previous work, which has largely focused on what we can say based on the limited available evidence. Peer review is a diverse and versatile process, and it is entirely possible that we have missed a number of important elements. We also recognise that there are simply unknown unknowns (i.e. things we do not know that we do not know). Furthermore, the fact that peer review is not a mechanism isolated from its context, but an essential part of a complex, evolving ecosystem in which different entities interact across scholarly research and communication, makes this challenge even more difficult. As such, there is scope for extending what we have done to other forms of peer review, including for grants and clinical trials [ 111 , 112 ].

We hope to have presented researchers with both a call to action and a roadmap for future research, to progress their own research agendas as well as our communal knowledge of peer review by shining some light into the peer review box. Our effort aimed to stimulate a more rational, less ideological approach and to create the conditions for collaborative attitudes between all stakeholders in the scholarly communication system [ 76 , 113 ]. To support this, we believe we critically need a sustained and strategic programme of research dedicated to the study of peer review. This requires direct funding from both publishers and research funding bodies, and the creation of a shared, open data infrastructure [ 114 ]. Such a programme could coalesce around, for example, the International Peer Review Congress [ 115 ].

This will help to ensure that state-of-the-art research employs similar vocabulary and standards to enable comparability between results within a cohesive and strategic framework. Substantial steps forward in this regard have recently been made by Allen et al. [ 40 ]. Such progress can also help us to understand which problems or deficiencies are specific to peer review itself, and so can be at least in principle improved through incremental or radical reforms, and which problems are nested within, or symptomatic of, a wider organisational or institutional context, and so requiring other initiatives to address (e.g. academic hypercompetition and incentive systems).

Our final wish is that all actors within the scholarly communication ecosystem remain cognizant of the limitations of peer review, where we have evidence and where we do not, and use this to make improvements and innovations in peer review based upon a solid and rigorous scientific foundation. Without such a strategic focus on understanding peer review, in a serious and co-ordinated manner, scholarly legitimacy might decline in the future, and the authoritative status of scientific research in society might be at risk.

References

Industrial Research Institute. 2017 R&D trends forecast: results from the Industrial Research Institute’s annual survey. Res Technol Manag. 2017;60:18–25.


R. Johnson, A. Watkinson, M. Mabe, The STM Report: an overview of scientific and scholarly publishing. International Association of Scientific, Technical and Medical Publishers (2018).

Rennie D. Editorial peer review: its development and rationale. Peer Rev Health Sci . 2003;2:1–13.

C. Neylon, Arenas of productive conflict: Universities, peer review, conflict and knowledge (2018) (available at https://hcommons.org/deposits/item/hc:22483/ ).

J. P. Tennant, B. Penders, T. Ross-Hellauer, A. Marušić, F. Squazzoni, A. W. Mackay, C. R. Madan, D. M. Shaw, S. Alam, B. Mehmani, Boon, bias or bane? The potential influence of reviewer recommendations on editorial decision-making (2019).

Moher D, Galipeau J, Alam S, Barbour V, Bartolomeos K, Baskin P, Bell-Syer S, Cobey KD, Chan L, Clark J, Deeks J, Flanagin A, Garner P, Glenny A-M, Groves T, Gurusamy K, Habibzadeh F, Jewell-Thomas S, Kelsall D, Lapeña JF, MacLehose H, Marusic A, McKenzie JE, Shah J, Shamseer L, Straus S, Tugwell P, Wager E, Winker M, Zhaori G. Core competencies for scientific editors of biomedical journals: consensus statement. BMC Med . 2017;15:167.

Overbeke J, Wager E. The state of evidence: what we know and what we don’t know about journal peer review. JAMA. 2011;272:79–174.

Malički M, von Elm E, Marušić A. Study design, publication outcome, and funding of research presented at International Congresses on Peer Review and Biomedical Publication. JAMA . 2014;311:1065–7.

Dondio P, Casnici N, Grimaldo F, Gilbert N, Squazzoni F. The “invisible hand” of peer review: the implications of author-referee networks on peer review in a scholarly journal. J Inform . 2019;13:708–16.

Grimaldo F, Marušić A, Squazzoni F. Fragments of peer review: a quantitative analysis of the literature (1969-2015). PLOS ONE . 2018;13:e0193148.

Batagelj V, Ferligoj A, Squazzoni F. The emergence of a field: a network analysis of research on peer review. Scientometrics . 2017;113:503–32.

Ross-Hellauer T. What is open peer review? A systematic review. F1000Research . 2017;6:588.

Allen H, Boxer E, Cury A, Gaston T, Graf C, Hogan B, Loh S, Wakley H, Willis M. What does better peer review look like? Definitions, essential areas, and recommendations for better practice. Open Sci Framework . 2018. https://doi.org/10.17605/OSF.IO/4MFK2 .

S. Parks, S. Gunashekar, Tracking Global Trends in Open Peer Review (2017; https://www.rand.org/blog/2017/10/tracking-global-trends-in-open-peer-review.html ).

Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med . 2006;99:178–82.

Groves T. Is open peer review the fairest system? Yes. BMJ . 2010;341:c6424.

Khan K. Is open peer review the fairest system? No. BMJ . 2010;341:c6425.

Smith R. Peer review: reform or revolution? Time to open up the black box of peer review. BMJ . 1997;315:759–60.

Relman AS. Peer review in scientific journals--what good is it? West J Med . 1990;153:520–2.

Kassirer JP, Campion EW. Peer review: crude and understudied, but indispensable. JAMA . 1994;272:96–7.

Wessely S. What do we know about peer review? Psychol Med . 1996;26:883–6.

Ross-Hellauer T, Deppe A, Schmidt B. Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers. PLOS ONE . 2017;12:e0189311.

Tennant JP, Dugan JM, Graziotin D, Jacques DC, Waldner F, Mietchen D, Elkhatib Y, Collister LB, Pikas CK, Crick T, Masuzzo P, Caravaggi A, Berg DR, Niemeyer KE, Ross-Hellauer T, Mannheimer S, Rigling L, Katz DS, Tzovaras BG, Pacheco-Mendoza J, Fatima N, Poblet M, Isaakidis M, Irawan DE, Renaut S, Madan CR, Matthias L, Kjær JN, O’Donnell DP, Neylon C, Kearns S, Selvaraju M, Colomb J. A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research . 2017;6:1151.

Kaplan D. How to fix peer review: separating its two functions—improving manuscripts and judging their scientific merit—would help. J Child Fam Stud . 2005;14:321–3.

Hunter J. Post-publication peer review: opening up scientific conversation. Front. Comput. Neurosci . 2012;6. https://doi.org/10.3389/fncom.2012.00063 .

Csiszar A. Peer review: troubled from the start. Nat News . 2016;532:306.

Baldwin M. Credibility, peer review, and Nature, 1945–1990. Notes Rec. 2015;69:337–52.

Moxham N, Fyfe A. The Royal Society And the prehistory of peer review, 1665–1965. Historical J . 2017:1–27.

A. Fyfe, K. Coate, S. Curry, S. Lawson, N. Moxham, C. M. Røstvik, Untangling academic publishing: a history of the relationship between commercial interests, academic prestige and the circulation of research (2017).

R. Wijesinha-Bettoni, K. Shankar, A. Marusic, F. Grimaldo, M. Seeber, B. Edmonds, C. Franzoni, F. Squazzoni, Reviewing the review process: new frontiers of peer review. Editorial Board , 82 (2016).

Squazzoni F, Brezis E, Marušić A. Scientometrics of peer review. Scientometrics . 2017;113:501–2.

Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, du Sert NP, Simonsohn U, Wagenmakers E-J, Ware JJ, Ioannidis JPA. A manifesto for reproducible science. Nat Human Behav . 2017;1:0021.

Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349:aac4716.

Crick T, Hall B, Ishtiaq S. Reproducibility in research: systems, infrastructure, culture. J Open Res Software . 2017;5:32.

ter Riet G, Storosum BWC, Zwinderman AH. What is reproducibility? F1000Res. 2019;8:36.

L. A. Barba, Terminologies for reproducible research. arXiv:1802.03311 [cs ] (2018) (available at http://arxiv.org/abs/1802.03311 ).

Bravo G, Grimaldo F, López-Iñesta E, Mehmani B, Squazzoni F. The effect of publishing peer review reports on referee behavior in five scholarly journals. Nat Commun . 2019;10:322.

Squazzoni F, Grimaldo F, Marušić A. Publishing: Journals could share peer-review data. Nature . 2017. https://doi.org/10.1038/546352a .

S. Pranić, B. Mehmani, S. Marušić, M. Malički, A. Marušić, in New Frontiers of Peer Review (PEERE), European Cooperation in Science and Technology (COST) (2017).

Allen H, Cury A, Gaston T, Graf C, Wakley H, Willis M. What does better peer review look like? Underlying principles and recommendations for better practice. Learned Publishing . 2019;32:163–75.

J. C. Bailar III, K. Patterson, Journal peer review: the need for a research agenda (Mass Medical Soc, 1985).

Lee CJ, Moher D. Promote scientific integrity via journal peer review data. Science . 2017;357:256–7.

van Rooyen S, Delamothe T, Evans SJW. Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial. BMJ . 2010;341:c5729.

Polka JK, Kiley R, Konforti B, Stern B, Vale RD. Publish peer reviews. Nature . 2018;560:545.

Hope AA, Munro CL. Criticism and judgment: a critical look at scientific peer review. Am J Crit Care . 2019;28:242–5.

B.-C. Björk, Acceptance rates of scholarly peer-reviewed journals: a literature survey. El Profesional de la Información. 28 (2019), doi:10/gf6zzk.

Sugimoto CR, Larivière V, Ni C, Cronin B. Journal acceptance rates: a cross-disciplinary analysis of variability and relationships with journal measures. J Inform . 2013;7:897–906.

Khosravi MR. Reliability of scholarly journal acceptance rates. Library Hi Tech News . 2018. https://doi.org/10.1108/LHTN-07-2018-0044 .

Fox CW, Albert AYK, Vines TH. Recruitment of reviewers is becoming harder at some journals: a test of the influence of reviewer fatigue at six journals in ecology and evolution. Res Integrity Peer Rev. 2017;2:3.

Gropp RE, Glisson S, Gallo S, Thompson L. Peer review: a system under stress. BioScience . 2017;67:407–10.

Kovanis M, Trinquart L, Ravaud P, Porcher R. Evaluating alternative systems of peer review: a large-scale agent-based modelling approach to scientific publication. Scientometrics . 2017;113:651–71.

Heesen R, Romeijn J-W. Epistemic diversity and editor decisions: a statistical Matthew effect. Philosophers’ Imprint . 2019. http://philsci-archive.pitt.edu/16262/ .

Hofmeister R, Krapf M. How do editors select papers, and how good are they at doing it? B.E. J Econ Analysis Policy . 2011;11. https://doi.org/10.2202/1935-1682.3022 .

Morgan AC, Economou DJ, Way SF, Clauset A. Prestige drives epistemic inequality in the diffusion of scientific ideas. EPJ Data Sci. 2018;7:1–16.

Dal-Ré R, Caplan AL, Marusic A. Editors’ and authors’ individual conflicts of interest disclosure and journal transparency. A cross-sectional study of high-impact medical specialty journals. BMJ Open . 2019;9:e029796.

Teixeira da Silva JA, Dobránszki J, Bhar RH, Mehlman CT. Editors should declare conflicts of interest. Bioethical Inquiry . 2019;16:279–98.

Huisman J, Smits J. Duration and quality of the peer review process: the author’s perspective. Scientometrics . 2017;113:633–50.

A. Marusic, The role of the peer review process. Fraud and Misconduct in Biomedical Research, 128 (2019).

N. van Sambeek, D. Lakens, Reviewers’ decision to sign reviews is related to their recommendation (preprint, PsyArXiv, 2019), doi: https://doi.org/10.31234/osf.io/4va6p .

Bornmann L, Mutz R, Daniel H-D. A reliability-generalization study of journal peer reviews: a multilevel meta-analysis of inter-rater reliability and its determinants. PLOS ONE . 2010;5:e14331.

Campos-Arceiz A, Primack RB, Koh LP. Reviewer recommendations and editors’ decisions for a conservation journal: is it just a crapshoot? And do Chinese authors get a fair shot? Biol Conservation . 2015;186:22–7.

Tennant JP, Penders B, Ross-Hellauer T, Marušić A, Squazzoni F, Mackay AW, Madan CR, Shaw DM, Alam S, Mehmani B, Graziotin D, Nicholas D. Boon, bias or bane? The potential influence of reviewer recommendations on editorial decision-making. Eur Sci Editing . 2019;45. https://doi.org/10.20316/ESE.2019.45.18013 .

Lee CJ, Sugimoto CR, Zhang G, Cronin B. Bias in peer review. J Assoc Inform Sci Technol . 2013;64:2–17.

Tennant JP. The dark side of peer review. Editorial Office News. 2017;10:2.

Sandström U, Hällsten M. Persistent nepotism in peer-review. Scientometrics . 2008;74:175–89.

Teplitskiy M, Acuna D, Elamrani-Raoult A, Körding K, Evans J. The sociology of scientific validity: how professional networks shape judgement in peer review. Res Policy . 2018;47:1825–41.

Glonti K, Cauchi D, Cobo E, Boutron I, Moher D, Hren D. A scoping review protocol on the roles and tasks of peer reviewers in the manuscript review process in biomedical journals. BMJ Open . 2017;7:e017468.

Glonti K, Cauchi D, Cobo E, Boutron I, Moher D, Hren D. A scoping review on the roles and tasks of peer reviewers in the manuscript review process in biomedical journals. BMC Med . 2019;17:118.

M. Dahrendorf, T. Hoffmann, M. Mittenbühler, S.-M. Wiechert, A. Sarafoglou, D. Matzke, E.-J. Wagenmakers, “Because it is the right thing to do”: taking stock of the Peer Reviewers’ Openness Initiative (preprint, PsyArXiv, 2019), doi: https://doi.org/10.31234/osf.io/h39jt .

Tomkins A, Zhang M, Heavlin WD. Reviewer bias in single- versus double-blind peer review. Proc Natl Acad Sci . 2017;114:12708–13.

H. Bastian, The Fractured Logic of Blinded Peer Review in Journals (2017; http://blogs.plos.org/absolutely-maybe/2017/10/31/the-fractured-logic-of-blinded-peer-review-in-journals/ ).

Lundine J, Bourgeault IL, Glonti K, Hutchinson E, Balabanova D. “I don’t see gender”: conceptualizing a gendered system of academic publishing. Soc Sci Med . 2019;235:112388.

Lynam DR, Hyatt CS, Hopwood CJ, Wright AGC, Miller JD. Should psychologists sign their reviews? Some thoughts and some data. J Abnormal Psychol . 2019;128:541–6.

Baggs JG, Broome ME, Dougherty MC, Freda MC, Kearney MH. Blinding in peer review: the preferences of reviewers for nursing journals. J Advanced Nurs . 2008;64:131–8.

J. Tóth, Blind myself: simple steps for editors and software providers to take against affiliation bias. Sci Eng Ethics (2019), doi:10/gf6zzj.

Tennant JP. The state of the art in peer review. FEMS Microbiol Lett . 2018;365. https://doi.org/10.1093/femsle/fny204 .

van Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ. 1999;318:23–7.

Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D. Does masking author identity improve peer review quality? A randomized controlled trial. PEER Investigators. JAMA . 1998;280:240–2.

McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review. A randomized trial. JAMA . 1990;263:1371–6.

Okike K, Hug KT, Kocher MS, Leopold SS. Single-blind vs double-blind peer review in the setting of author prestige. JAMA . 2016;316:1315–6.

Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA . 1998;280:237–40.

Bianchi F, Grimaldo F, Squazzoni F. The F3-index. Valuing reviewers for scholarly journals. J Informetrics . 2019;13:78–86.

Cowley SJ. How peer-review constrains cognition: on the frontline in the knowledge sector. Front. Psychol . 2015;6. https://doi.org/10.3389/fpsyg.2015.01706 .

J. P. Alperin, C. M. Nieves, L. Schimanski, G. E. Fischman, M. T. Niles, E. C. McKiernan, How significant are the public dimensions of faculty work in review, promotion, and tenure documents? (2018) (available at https://hcommons.org/deposits/item/hc:21015/ ).

Priem J, Hemminger BM. Decoupling the scholarly journal. Front Comput Neurosci . 2012;6. https://doi.org/10.3389/fncom.2012.00019 .

Ghosh SS, Klein A, Avants B, Millman KJ. Learning from open source software projects to improve scientific review. Front Comput Neurosci . 2012;6:18.

Horbach SPJM, Halffman W. The ability of different peer review procedures to flag problematic publications. Scientometrics . 2019;118:339–73.

Superchi C, González JA, Solà I, Cobo E, Hren D, Boutron I. Tools used to assess the quality of peer review reports: a methodological systematic review. BMC Med Res Methodology . 2019;19:48.

E. Adie, Commenting on scientific articles (PLoS edition) (2009), (available at http://blogs.nature.com/nascent/2009/02/commenting_on_scientific_artic.html ).

Ginsparg P. ArXiv at 20. Nature . 2011;476:145–7.

Morey RD, Chambers CD, Etchells PJ, Harris CR, Hoekstra R, Lakens D, Lewandowsky S, Morey CC, Newman DP, Schönbrodt FD, Vanpaemel W, Wagenmakers E-J, Zwaan RA. The Peer Reviewers’ Openness Initiative: incentivizing open research practices through peer review. Royal Soc Open Sci . 2016;3:150547.

Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLOS ONE. 2009;4:e5738.

E. C. McKiernan, L. A. Schimanski, C. M. Nieves, L. Matthias, M. T. Niles, J. P. Alperin, Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations (e27638v2, PeerJ Inc., 2019), doi: https://doi.org/10.7287/peerj.preprints.27638v2 .

Schimanski LA, Alperin JP. The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future. F1000Res . 2018;7:1605.

Keserlioglu K, Kilicoglu H, ter Riet G. Impact of peer review on discussion of study limitations and strength of claims in randomized trial reports: a before and after study. Res Integrity Peer Rev . 2019;4:19.

Danchev V, Rzhetsky A, Evans JA. Centralized scientific communities are less likely to generate replicable results. eLife . 2019;8:e43094.

Kumar M. A review of the review process: manuscript peer-review in biomedical research. Biol Med . 2009;1:16.

Campanario JM. Rejecting and resisting Nobel class discoveries: accounts by Nobel Laureates. Scientometrics . 2009;81:549–65.

Neylon C, Pattinson D, Bilder G, Lin J. On the origin of nonequivalent states: How we can talk about preprints. F1000Res . 2017;6:608.

E. Adie, Who comments on scientific papers – and why? (2008), (available at http://blogs.nature.com/nascent/2008/07/who_leaves_comments_on_scienti_1.html ).

Ginsparg P. Preprint Déjà Vu. EMBO J . 2016:e201695531.

A. Gentil-Beccot, S. Mele, T. Brooks, Citing and reading behaviours in high-energy physics. How a community stopped worrying about journals and learned to love repositories. arXiv:0906.5418 [cs ] (2009) (available at http://arxiv.org/abs/0906.5418 ).

Carneiro CFD, Queiroz VGS, Moulin TC, Carvalho CAM, Haas CB, Rayêe D, Henshall DE, De-Souza EA, Espinelli F, Boos FZ, Guercio GD, Costa IR, Hajdu KL, Modrák M, Tan PB, Burgess SJ, Guerra SFS, Bortoluzzi VT, Amaral OB. Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature. bioRxiv . 2019:581892.

Tennant JP, Bauin S, James S, Kant J. The evolving preprint landscape: Introductory report for the Knowledge Exchange working group on preprints. BITSS . 2018. https://doi.org/10.17605/OSF.IO/796TU .

Marra M. Astrophysicists and physicists as creators of ArXiv-based commenting resources for their research communities. An initial survey. Inform Services Use . 2017;37:371–87.

S. Hindle, D. Saderi, PREreview — a new resource for the collaborative review of preprints (2017; https://elifesciences.org/labs/57d6b284/prereview-a-new-resource-for-the-collaborative-review-of-preprints ).

T. Ross-Hellauer, B. Schmidt, B. Kramer, Are funder Open Access platforms a good idea? (PeerJ Inc., 2018), doi: https://doi.org/10.7287/peerj.preprints.26954v1 .

Moore SA. A genealogy of open access: negotiations between openness and access to research. Revue française des sciences de l’information et de la communication . 2017. https://doi.org/10.4000/rfsic.3220 .

Research Information Network, Activities, costs and funding flows in the scholarly communications system in the UK: report commissioned by the Research Information Network (RIN) (2008).

Stemmle L, Collier K. RUBRIQ: tools, services, and software to improve peer review. Learned Publishing . 2013;26:265–8.

V. Demicheli, C. Di Pietrantonj, Peer review for improving the quality of grant applications. Cochrane Database Syst Rev , MR000003 (2007).

T. Jefferson, M. Rudin, S. Brodney Folse, F. Davidoff, Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev , MR000016 (2007).

Rennie D. Let’s make peer review scientific. Nat News . 2016;535:31.

Squazzoni F, Ahrweiler P, Barros T, et al. Unlock ways to share data on peer review. Nature . 2020;578:512–4. https://doi.org/10.1038/d41586-020-00500-y .


Ioannidis JPA, Berkwits M, Flanagin A, Godlee F, Bloom T. Ninth international congress on peer review and scientific publication: call for research. BMJ . 2019;366. https://doi.org/10.1136/bmj.l5475 .


Acknowledgements

For critical feedback as this manuscript was in production, we would like to extend our sincerest gratitude and appreciation to Flaminio Squazzoni, Ana Marušić, David Moher, Naseem Dillman-Hasso, Esther Plomp and Ian Mulvany. Their critical feedback and guidance helped to greatly improve this work. Any errors or oversights are purely the responsibility of the authors. For helpful comments via Twitter, we would like to thank Paola Masuzzo, Bernhard Mittermaier, Chris Hartgerink, Ashley Farley, Marcel Knöchelmann, Marein de Jong, Gustav Nilsonne, Adam Day, Paul Ganderton, Wren Montgomery, Alex Csiszar, Michael Schlüssel, Daniela Saderi, Dan Singleton, Mark Youngman, Nancy Gough, Misha Teplitskiy, Kieron Flanagan, Jeroen Bosman and Irene Hames, who all provided useful responses to this tweet. For feedback on the preprint version of this article, we wish to thank Martyn Rittman, Pleen Jeune and Samir Hachani. For formal peer reviews on this article, we wish to thank the two anonymous reviewers. Mario Malicki provided excellent and comprehensive editorial oversight, as well as helpful comments on the preprint version. Any remaining errors are purely the responsibility of the authors.

The authors received no specific funding for this work.

Author information

Jonathan P. Tennant is deceased.

Authors and Affiliations

Institute for Globally Distributed Open Research and Education, Gianyar, Bali, Indonesia

Jonathan P. Tennant

Graz University of Technology & Know Center GmbH, Graz, Austria

Tony Ross-Hellauer


Contributions

JPT conceived of the idea for this project, and both authors contributed equally to drafting and editing the manuscript. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Tony Ross-Hellauer.

Ethics declarations

Ethics approval and consent to participate; consent for publication; competing interests

TRH is the Editor-in-Chief of the OA journal Publications, published by MDPI. JPT is the Executive Editor of the journal Geoscience Communication published by Copernicus. Both of these are volunteer positions.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.



Cite this article.

Tennant, J.P., Ross-Hellauer, T. The limitations to our understanding of peer review. Res Integr Peer Rev 5 , 6 (2020). https://doi.org/10.1186/s41073-020-00092-1


Received: 14 September 2019

Accepted: 18 March 2020

Published: 30 April 2020

DOI: https://doi.org/10.1186/s41073-020-00092-1


Keywords

  • Peer review studies
  • Quality control
  • Quality assurance
  • Scholarly communication
  • Open peer review
  • Scholarly publishing
  • Reproducibility
  • Research impact

Research Integrity and Peer Review

ISSN: 2058-8615



Peer review and the publication process

Parveen Azam Ali

1 The School of Nursing and Midwifery, The University of Sheffield, Barber House Annexe, 3a Clarkehouse Road, Sheffield, S10 2LA, UK

Roger Watson

2 Faculty of Health and Social Care, University of Hull, Cottingham Road, Hull, HU6 7RX, UK

Aim: To provide an overview of the peer review process, its various types, the selection of peer reviewers, and the purpose and significance of peer review with regard to assessing and managing the quality of publications in academic journals.

Design: Discussion paper.

Methods: This paper draws on information gained from the literature on the peer review process and on the authors' knowledge and experience as peer reviewers and editors in the field of health care, including nursing.

Results: There are various types of peer review: single-blind, double-blind, open and post-publication review. The role of reviewers in assessing manuscripts and their contribution to the scientific and academic community remain important.

Introduction

Publication in academic journals plays an important role in the development and progress of any profession, including nursing (Dipboye 2006 ). On the one hand, it provides professionals such as nurses with an opportunity to share their examples of best practice and research results with colleagues in the discipline. On the other hand, academic and scientific publications serve as a source of knowledge and evidence for students, novice practitioners and emerging researchers (Henly & Dougherty 2009 ) and contribute to their professional development. To serve these purposes effectively, appropriate scrutiny of manuscripts submitted to academic journals, to determine their worth, quality, methodological rigour, utility and publishability before appearing in the electronic and print media, is warranted. Such quality assurance mechanisms are essential to ensure publication of reliable and high quality research and scholarly evidence (Shattell et al . 2010 ).

The publication process begins with the submission of a manuscript to a journal by an author. As shown in Figure 1, which outlines the editorial processes at Wiley, a manuscript goes through several stages before actual publication (Jefferson et al . 2007). The process outlined in Figure 1 may be more elaborate than at some journals, and the various tasks may be distributed differently across the editorial team, but the figure includes all of the possible steps that can take place in the publication process. The first stage is an editorial review that aims to assess the quality and merits of a manuscript. The editor (often the editor‐in‐chief) of the journal concerned reviews the manuscript to determine its relevance to the journal and its suitability to undergo peer review. Further checks take place at the editorial desk by an editorial assistant, including checks for similarity to other sources using a similarity detection package such as iThenticate ® . If the manuscript is too similar to other sources, it may be rejected, or it may be unsubmitted and returned to the author for amendment. Additional checks take place for readability and for the extent to which the manuscript conforms to the standards of the journal, for example, word length and the use of international reporting standards. In Figure 1, this is done by a managing editor and, again, the manuscript may be rejected or returned to the author for amendment. Once satisfied, the managing editor assigns an editor and identifies and assigns 2‐3 reviewers with appropriate knowledge, skills, methodological expertise and experience to assess the manuscript and provide feedback on its quality, rigour and publishability. The peer reviewers' feedback helps the editor to decide whether the manuscript should be rejected, accepted or revised before it can be accepted for publication. Whatever the case, the decision is communicated to the author. When a revision is required, the reviewers suggest changes or ask the authors for more details before the manuscript can be accepted for publication. Once the manuscript is accepted, it moves to the third stage, production, which ensures a readable and comprehensible article, free of spelling mistakes and presented in the uniform style of the journal (Jefferson et al . 2007). The author is also expected to check and approve the final proof before the final stage, an administrative process that ensures the allocation of an appropriate tracking number, called a Digital Object Identifier (DOI), to the article and the regular production of the journal (Jefferson et al . 2007). The peer review process is important to understand, not only for potential authors, but also for those involved in the process, as it is often an individual, solitary exercise.


Figure 1. The editorial process, including peer review. EiC, editor-in-chief; EA, editorial assistant (SPi is a company providing editorial assistants); ME, managing editor.
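For readers who find a schematic helpful, the sketch below (ours, not part of the original article or any journal's actual system) models the workflow just described as a simple state machine: all stage names and transitions are our own abstractions of Figure 1, included purely as an illustration of how a manuscript can move between editorial checks, peer review, revision and production.

    # Minimal, illustrative sketch of the manuscript workflow described above.
    # Stage names and permitted transitions are our own reading of Figure 1.
    from enum import Enum, auto

    class Stage(Enum):
        SUBMITTED = auto()
        EDITORIAL_REVIEW = auto()   # EiC checks relevance and suitability
        DESK_CHECKS = auto()        # EA similarity check; ME readability/standards
        PEER_REVIEW = auto()        # two to three reviewers assess the manuscript
        REVISION = auto()           # author revises in response to reviews
        ACCEPTED = auto()
        PRODUCTION = auto()         # copy-editing, journal style, author proof
        PUBLISHED = auto()          # DOI allocated, article released
        REJECTED = auto()

    # Permitted moves between stages, mirroring Figure 1
    # (a desk check can also return the manuscript for amendment).
    TRANSITIONS = {
        Stage.SUBMITTED: {Stage.EDITORIAL_REVIEW},
        Stage.EDITORIAL_REVIEW: {Stage.DESK_CHECKS, Stage.REJECTED},
        Stage.DESK_CHECKS: {Stage.PEER_REVIEW, Stage.REJECTED, Stage.SUBMITTED},
        Stage.PEER_REVIEW: {Stage.ACCEPTED, Stage.REVISION, Stage.REJECTED},
        Stage.REVISION: {Stage.PEER_REVIEW, Stage.ACCEPTED},
        Stage.ACCEPTED: {Stage.PRODUCTION},
        Stage.PRODUCTION: {Stage.PUBLISHED},
    }

    def advance(current: Stage, nxt: Stage) -> Stage:
        """Move a manuscript to the next stage, refusing impossible jumps."""
        if nxt not in TRANSITIONS.get(current, set()):
            raise ValueError(f"Cannot move from {current.name} to {nxt.name}")
        return nxt

    # Example: a manuscript that needs one round of revision before acceptance.
    stage = Stage.SUBMITTED
    for nxt in (Stage.EDITORIAL_REVIEW, Stage.DESK_CHECKS, Stage.PEER_REVIEW,
                Stage.REVISION, Stage.PEER_REVIEW, Stage.ACCEPTED,
                Stage.PRODUCTION, Stage.PUBLISHED):
        stage = advance(stage, nxt)
    print(stage.name)  # PUBLISHED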

Until recently, little guidance was available to peer reviewers, although publishers and journals have now started developing resources for novice and potential peer reviewers (Pierson 2011). The relatively limited information available about the peer review process hampers authors' and reviewers' ability, and their willingness, to be involved in the process. Awareness of the peer review process may help authors understand the process and its expectations better, and may therefore alleviate their anxiety and facilitate the preparation of good quality manuscripts. Experienced authors will be well aware that not every manuscript is accepted and that some journals have very low acceptance rates. For example, the Journal of Advanced Nursing (one of the present authors is an editor) receives approximately 1,400 manuscripts annually and publishes fewer than 20% of them. The Journal of the American Medical Association (JAMA) receives over 5,000 manuscripts annually and publishes fewer than 5% of them (personal communication from Howard Bauchner, Editor-in-Chief, JAMA). Such knowledge may also help authors and readers to become involved in the peer review process. This article aims to provide an overview of the peer review process for authors, novice peer reviewers and those who may be interested in becoming peer reviewers. Various types of peer review, the selection of peer reviewers, the role of peer review, and issues associated with peer review are explored.

Background to peer review

Peer review lies at the core of science and academic life (Kearney & Freda 2005, Henly & Dougherty 2009). It is an established component of the publication process, professional practice and the academic reward system (Lee et al. 2013). The process involves the checking or evaluation of scholarly work by a group of experts in the same discipline. It is used by academic institutions, funding bodies and publishers to identify the strengths, weaknesses and publishability of a proposed piece of work (Pierson 2011), and it is an essential element of the publication process that purports to ensure quality and excellence in papers published in scientific, educational and professional journals (Henly & Dougherty 2009). The history of editorial review extends over 200 years (Kronick 1990, Rennie 2003); however, the practice of peer review in its current form developed only in the 19th century (Fyfe 2015), and it has been the norm only since 1967. It is now considered a gold standard process that not only helps journals to judge manuscripts but also acts as a criterion by which journals themselves are judged (Bordage & Caelleigh 2001). Before the introduction of peer review, the majority of editors of academic and scientific journals were generalists. After World War II, medical and technological advances made it impossible for generalist editors to judge papers requiring specialist knowledge, and it became necessary to seek the assistance of expert content specialists in the process of reviewing (Christenbery 2011). Since then, peer review has become an integral part of the publication process.

Utility of peer review

There are many beneficiaries of the peer review process, including authors, editors and publishers, peer reviewers, disciplines and society. The process provides authors with an opportunity to improve the quality and clarity of their manuscripts, and publishing in a peer reviewed journal is considered prestigious. Comments provided by reviewers guide and help the journal's editor and editorial staff to identify acceptable or substandard manuscripts (Christenbery 2011). Editors rely on the peer review system to inform the choices they make among the many manuscripts competing for the few places available for publication (Broome et al. 2010, Lipworth et al. 2011).

The peer review process is also useful for peer reviewers themselves, as it helps them develop knowledge and expertise in their specific field. Acting as a peer reviewer may also be recognized as an example of ‘contribution to the profession’ in individual performance reviews (Pierson 2011). ‘The peer review process can also affect society at large when a social policy implication is suggested or inferred from the published manuscript’ (Hojat et al. 2003, p. 76). In addition, the publication of well written, methodologically sound and well informed research and scholarly papers helps professions such as nursing to develop.

Types of peer review

There are, essentially, two types of peer review: closed and open. The former is more common, but the latter is becoming more popular, and authors and reviewers encounter both types. Closed review has two variants, as will be explained, and post-publication peer review (PPPR) is now appearing in some journals. Each method has its own advantages and disadvantages, as specified in Table 1.

Table 1. Characteristics of various peer review methods.

Closed peer review

Closed peer review is a system in which the identity of at least one of the parties in the review process – usually the reviewers – is not disclosed. Closed review works in two ways: single blind and double blind. In single blind review, the author is not aware of the reviewers' identities, but the reviewers are aware of the author's identity, affiliations and credentials. It is the approach used in the majority of academic and scientific journals, especially biomedical journals (Kearney & Freda 2005). The method is criticized for several flaws, such as the possibility of reviewer bias, as the reviewer is not blinded to the details of the authors. The method could also be considered unfair on the grounds that the manuscript is the intellectual property of the author (Davidoff & DeAngelis 2001) and, therefore, should be reviewed openly and not secretly (Smith 1999). Some believe that single blind review gives reviewers an opportunity to be harsh to authors, as they feel assured that the authors will not be able to identify them. In addition, reviewers working in the same field may delay their feedback in order to delay publication if they themselves are planning to publish on the same topic. Despite this criticism, single blind peer review remains a commonly used method.

Double blind review is also commonly used by many professional biomedical journals (Kearney & Freda 2005, Baggs et al. 2008), and nearly all (95%) nursing journals follow this approach (Kearney & Freda 2005). In this approach, the authors and reviewers are not aware of each other's identities and institutional affiliations. Proponents of double blind review maintain that this approach eliminates the chance of bias in the manuscript review process, whereas opponents believe that such blinding does not improve the quality of the review (van Rooyen et al. 1998, Shea et al. 2001). Evidence suggests that, despite double blinding, reviewers may still be able to recognize authors through other markers such as writing style, subject matter and self-citation. As with single blind review, there is a chance that the reviewers may be unnecessarily critical when giving feedback to the authors.

Open peer review

In contrast to closed review, open peer review is a system in which authors and reviewers are known to each other throughout the process. Some major journals, such as the British Medical Journal (BMJ), encourage this approach. In an open review, the reviewers' names may be published alongside the authors', sometimes with the option of publishing the reviewers' reports as well. Proponents believe that this is a better approach, as nothing is done in secret and the authors' intellectual property rights are respected (Davidoff & DeAngelis 2001). The approach may also act as a regulatory mechanism for reviewers, who ‘will produce better work and avoid offhand, careless or rude comments when their identity is known’ (Ware 2008, p. 6). Reviewers are also recognized for their contribution, as their names are published in the journal. Opponents, however, maintain that open review may lead to less honest, less critical and less rigorous reviews from reviewers who fear reprisal, and that knowing the authors' identity, reputation and institutional affiliation may affect the review process and contribute to a biased decision. We also consider it possible that some reviewers may be overly critical with the intention of appearing very rigorous to their peers. Open reviewing recently received some criticism following an incident involving the open access online journal PLOS ONE (Bernstein 2015). The case involved sexist remarks from a reviewer towards an author, advising her to work with male colleagues who were, ostensibly, more successful. This was possible because the reviewer could identify the author, and her gender, under the open review system. The reviewer and the editor who allowed the comments to be passed on to the author are no longer associated with the journal.

Other forms of peer review

Hunter (2012, p. 1) states that ‘Peer review is broken’ and goes on to explain that, from the author's perspective: ‘Peer review is slow; it delays publication. It's almost always secret; authors do not know who is reviewing their work – perhaps an ally but, equally, perhaps a competitor’. More recently, however, advances in electronic publishing technology (Ware 2008) have enabled the development of another form of review called ‘post-publication peer review’ (PPPR), in which the review takes place once the article has already been published. Initially, PPPR was generally acceptable only as a supplement to the peer review process and not as a sole process (Ware 2008), but it is becoming more mainstream; for example, the blog The Future of Scientific Publishing (https://futureofscipub.wordpress.com/open-post-publication-peer-review/; accessed 8 December 2015) advocates more post-publication reviewing as a form of scrutiny of papers already in the public domain and, moreover, advocates an open system of review. Some have seen this as a response to the ‘urgent need to reform the way in which authors, editors, and publishers conduct the first line of quality control, the peer review’ (Teixeira da Silva & Dobránszki 2015, p. 1).

PPPR can take two forms: ‘primary PPPR’ and ‘secondary PPPR’. In primary PPPR, an unreviewed article is published after initial editorial checks and can then be reviewed by formally invited reviewers, as practised by F1000Research and the Copernicus journals (Amsen 2014), which describe their process as ‘publish then filter’ (Hunter 2012). In secondary PPPR, the article is published after initial editorial checks but is available for review by voluntary reviewers. In both cases, the article is altered by the authors on the basis of the PPPR comments and, essentially, evolves towards a published peer reviewed article. Thus, PPPR – of whatever form – complements traditional peer review and ‘allows for the continuous improvement and strengthening of the quality of science publishing’ (Teixeira da Silva & Dobránszki 2015, p. 1), and it now has some prominent supporters, including Richard Smith (2015), the former Editor of the BMJ.

In terms of accelerating the peer review process, Kriegeskorte (2012) indicates that the PPPR system essentially merges the ‘review and reception’, or publication, of articles. He envisages the literature being accessed through web portals that take readers directly to articles based on subject matter rather than through journals or journal webpages – admittedly something that is already evident – thus building the process of review, and reputation, around individual articles rather than journals. Kriegeskorte (2012) sees this as an alternative both to potentially good articles being rejected on submission and to the rapid, and possibly undeserved, reputation that some articles gain. In Kriegeskorte's words (p. 7), ‘important papers will accumulate a solid set of evaluations and bubble up in the process – some of them rapidly, others after years’. Naturally, some ‘quality control’ of reviewers is exercised, as some publishers require peer reviewers to meet certain criteria. For instance, ScienceOpen requires a reviewer to have at least five published articles in their ORCID profile, whereas at The Winnower any registered user can review a published article and leave a comment (Amsen 2014). Alternatively, commenting on published articles via blogs or other third party sites is always possible.

An informal system of PPPR has always existed, and it has been facilitated by recent major advances in electronic publishing and by the near universality of online journal publication. The rise of online social media and networking is, in turn, facilitating a steady stream of comment on publications. Authors increasingly ‘get their retaliation in first’ by releasing results and manuscripts piecemeal through social media platforms such as blogs and microblogs – most specifically, Twitter – whereby an exchange of views can take place even in advance of a refereed article. In addition, some journals publish open access: some exclusively, and some offering the facility to publish individual articles open access for a fee called an article processing charge (APC). Even where content is not freely available, academics have easy access to most scientific publications through their university libraries via gateways such as ATHENS. With many publishers making articles available online early, before they are serialized, and with the immediate posting of articles by some online open access publishers such as BioMed Central, academics therefore have access to a steady stream of articles in their field. Where the scientific literature is less freely available, for example, in some developing countries and to those working outside academia, publishers do take steps to increase ease of access through specific deals and, of course, it is always open to any academic to request an offprint (hard copy or electronic) directly from the authors.

Finally, and very recently, has come the advent of the website PubPeer, which exists explicitly to provide anonymous post-publication review of published, refereed articles. As explained by Watson (2016), PubPeer is in its infancy but growing, and it has received some negative press, having been described as promoting ‘vigilante science’.

Selection of peer reviewers

Reviewers are usually people who have published on the same topic (Brazeau et al. 2008), and selecting them is an important task normally carried out by the editor of the journal. Editors identify and invite suitable, experienced people with an interest in the subject matter or relevant field, often using the keywords that authors have supplied. Many journals use a bank of established and regular reviewers, while some use the keywords to identify individuals via search engines and databases, for example, ResearcherID. Some journals ask the authors to name reviewers, and one study (Kowalczuk et al. 2015) suggests that, while this has little effect on the quality of reviews, it does lead to more recommendations to accept manuscripts. However, the practice of authors suggesting reviewers has led to some scandals involving fabricated peer reviews (Barbash 2015, Moylan 2015), and some journals no longer use it. In some journals, authors can also indicate individuals whom they would not wish to review their manuscripts. Editors may also invite authors to become subsequent reviewers, sometimes by asking them to provide their curriculum vitae (Evans et al. 1993) or on the basis of particular qualifications (e.g. a PhD) and a publication track record in peer reviewed journals. The method of selecting the reviewer does not necessarily affect the quality of the review, as individuals differ and, therefore, their interpretations, views and methods of review will, in any case, vary. However, contrary to what might be expected, it has been demonstrated that emerging academics are usually better reviewers, as they provide comprehensive and thorough feedback (Evans et al. 1993, van Rooyen et al. 1998). Evidence has also identified no improvement in the quality of review with academic seniority, and no difference by gender (Gilbert et al. 1994, Fox et al. 2016).

Role of peer reviewers

Reviewers contribute to the development of the knowledge base of any profession, such as nursing, by giving their valuable time to review manuscripts (Dipboye 2006, Pierson 2011). Reviewers are volunteers and rarely receive any monetary compensation for their role (Relman & Angell 1989). The role of a reviewer is very important, yet it is a challenging and complex professional activity. Being a good reviewer requires theoretical, methodological and practical knowledge and the ability to apply that knowledge when evaluating a manuscript and writing constructive feedback to help the author improve its quality (Lovejoy et al. 2011). In addition, reviewers' feedback helps the editor to make a decision about the manuscript (Broome et al. 2010). Acting as a peer reviewer is useful for individual academics, as it helps them to develop their subject knowledge, analytical abilities and the skills required to provide constructive feedback. The activity is usually recorded on their curriculum vitae and can thus be recognized in performance appraisal and progression. There are various reasons why reviewers choose to review manuscripts: a desire to play their part as members of the academic community, to improve their reputation and career progression (Ware 2008) and to increase their knowledge and understanding of their subject. Other factors that may encourage academics to act as peer reviewers include the inducement of a free or reduced journal subscription, acknowledgement in the journal and payment in kind (Ware 2008). Reviewers are expected to adhere to certain principles of review, as advocated by the Committee on Publication Ethics (2013) and by academic journals. These are summarized in Table 2.

Table 2. Principles of peer review recommended by the Committee on Publication Ethics (2013).

Issues with peer review

As already indicated, the peer review process is criticized by many academics, who believe ‘…it is ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant’ (Smith 2015). Some believe that various flaws and problems in the peer review process may affect the quality of reviews and, thereby, the quality of publications. These flaws include: slowness of the publication process; negative impact on authors; poor preparation and training of reviewers; variable review requirements; ineffectiveness of peer review; and biases in peer review. We believe these issues are relevant to all forms of peer review, although some may be more relevant to certain forms than others.

Peer review slows the publication process

There is a perception that peer review slows the process of publication: ‘…the original purpose of peer review was to ration access to resources for scholarly exposure. Nowadays, however, exposure is not a scarce resource, since publications can be made available electronically, essentially free of cost. The question, therefore, is one of quality control and we do not know how much refereeing the scholarly market actually wants’ (The British Academy 2007, p. 11). However, peer review is a quality control mechanism which, despite adding to the length of the process, enhances the quality of the publication. In addition, most journals these days not only specify a date by which a review is due, but also send reminders (a week before the review is due and on the due date) to prompt reviewers to complete and submit their review. This approach is useful, as it helps reviewers to complete their reviews on time.

Negative impact on authors

Undergoing peer review can be a negative experience for some authors, owing to the insensitive and irresponsible behaviour of some reviewers, who may not read the manuscript, may provide irrelevant comments or feedback, or may use the opportunity to promote their own work or to make negative and malicious comments (Smith 2015). The development and communication of appropriate practice guidelines and principles of peer review may help to overcome such issues. In addition, journal editors can play a very important role and may be able to intervene in such situations by discussing the issues with the reviewer. This issue may have more impact in the context of post-publication peer review: publicly available harsh, unnecessary, negative and insensitive comments can be damaging to authors and may affect their confidence and ability to write in future.

Poor reviewer preparation

Formal training and preparation may help reviewers develop appropriate review skills, but such training is not widely available. The process itself is not easy to learn (Provenzale & Stanley 2006), and educational programmes do not prepare postgraduate students for the role of peer reviewer (Eastwood 2000). This, in turn, affects the confidence and ability of reviewers, who may only learn the art of reviewing through trial and error. New reviewers usually have no training in, or awareness of, how to review a paper, and may have no mentorship or experience of reviewing someone else's work. This issue can be addressed by ensuring that postgraduate students and doctoral and post-doctoral academics are provided with appropriate training and guidance to develop their reviewing and feedback skills (The British Academy 2007, House of Commons Science and Technology Committee 2011). One strategy is to invite postgraduate students and emerging academics to review manuscripts as a third reviewer. Appropriate mentorship and guidance can also be provided through a buddy system in which novice reviewers are ‘buddied’ with experienced reviewers. In either case, this needs to be done with the permission of the journal and must be declared; some journals ask for this as a specific declaration when reviews are submitted. Such arrangements may help novice reviewers to develop reviewing skills and knowledge. Presently, very few journals give reviewers access to other reviewers' comments. Nevertheless, giving reviewers access to other reviewers' comments about the same manuscript can be a useful way of helping them improve their knowledge and skills (House of Commons Science and Technology Committee 2011). As manuscripts are now reviewed electronically, providing access to other reviewers' comments and feedback is straightforward.

Variable review requirements

There is wide variation in the review requirements and expectations of different journals. Recently, various publishers and journals have started to develop guidelines to help reviewers understand what is expected of them. Some journals are very prescriptive and expect strict compliance from their reviewers, while others are less specific about their expectations. Although it is important to provide some guidance on reviewing and to communicate expectations in order to ensure consistency, too much prescription may limit the reviewer's ability to assess critically, and give feedback on, a manuscript's strengths and areas for improvement. Again, providing appropriate guidance and mentorship opportunities, and sharing fellow reviewers' reports, can help reviewers identify their own style of review and develop the confidence and ability to provide constructive feedback.

Ineffectiveness of peer review

Research examining the effectiveness of peer review is still limited (Patel 2014). The lack of research supporting or negating its effectiveness contributes to ambiguity and fuels the criticism of peer review (Jefferson et al. 2002, Ware 2008, Patel 2014). Some researchers consider peer review an unreliable method of quality assurance and error detection (Godlee et al. 1998, Patel 2014), believing that review by two reviewers is insufficient to identify issues with a manuscript. These authors maintain that, to make the peer review process reliable and comparable, an editor would require a minimum of six reviewers, whereas in practice it is often difficult to identify even two or three reviewers for a paper (Rothwell & Martyn 2000, Ware 2008). It should also be recognized that peer review is not a scientific process; it is a process based on people and the judgements they make. People differ in their expertise, opinions and experience and, therefore, their opinions of or feedback on the same manuscript can differ. In addition, reviewers do not make the decisions about which manuscripts to accept or reject; they only provide their views on a manuscript, which aid the editors in making a decision.

Peer review and bias

The peer review process cannot be free from bias; bias can only be minimized. Generally, single blind review is criticized for its risk of bias, but the effectiveness of the blinding process itself is questionable (Kearney & Freda 2005, Baggs et al. 2008, Ware 2008). Another flaw of the peer review system is the biased decisions of peer reviewers. Evidence suggests that reviewers tend to accept papers that provide confirmatory results and reject those that do not confirm established theories (Mahoney 1977). Similarly, peer reviewers tend to accept studies that offer positive results and reject those that report negative results. This is referred to as the ‘file drawer problem’ (Rosenthal 1979, p. 638), because research with negative results, being unaccepted, remains in the file drawer of the researcher and is not disseminated to the wider community. Some researchers have even suggested that peer review works against innovative studies (Armstrong 1996, Hojat et al. 2003, Lee et al. 2013), a point reinforced recently by the former Editor of the BMJ (Smith 2015). Reviews can also be influenced by the characteristics of authors (gender, political or religious affiliation, institutional affiliation, nationality, country of origin) (Smith 2015, Fox et al. 2016) and by whether the reviewers were identified by the editor or proposed by the author (Kowalczuk et al. 2015). These issues can be minimized by ensuring that reviewers are aware of, and adhere to, the ethical principles of review.

Despite these various issues, the usefulness of the peer review process cannot be overlooked. The process of peer review, mainly in publishing but also in other aspects of academic life, is regularly discussed (Fyfe 2015, Smith 2015). The process recently came under the scrutiny of the British government (House of Commons Science and Technology Committee 2011) and other bodies (Watson 2012) after accusations of biased publishing in the field of climate science. The scrutiny was in-depth and prolonged, but the conclusion was that the peer review system, in its various manifestations, was far from perfect, but that it was the best we had and should continue.

It is essential to remember that peer reviewing is a voluntary activity: reviewers are not paid for their work and often complete reviews in their own time. While contributing to the reviewing process is a professional and moral obligation of any author whose work has undergone peer review (Priem & Rasheed 2006), it is important to make the activity as rewarding and developmental as possible. Recognizing reviewers for their work, by publishing their names in the journal or providing them with awards and recognition certificates, can be a useful strategy, and various publishers and journals have recently started using these strategies to recognize reviewers' contributions. Such strategies may increase the motivation of reviewers and, in turn, enhance the quality of their reviews.

Peer review is one of various mechanisms used to ensure the quality of publications in academic journals. It helps authors, journal editors and the reviewers themselves, and it is unlikely to be eliminated from the publication process. All forms of peer review have their own strengths and weaknesses. To make the process more effective and useful, it is important to develop peer review skills, especially among postgraduate students. There should be published guidelines and help for novice peer reviewers, and mentoring new reviewers and sharing the feedback of different reviewers can also help them develop. More research is needed to determine the effectiveness of the peer review process.

Conflict of interest

Author contributions

All authors have agreed on the final version and meet at least one of the following criteria [recommended by the ICMJE (http://www.icmje.org/recommendations/)]:

  • substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data;
  • drafting the article or revising it critically for important intellectual content.
References

Amsen E. (2014) What is post-publication peer review? Available at http://blog.f1000research.com/2014/07/08/what-is-post-publication-peer-review/ (accessed 15 December 2015).
Armstrong S.J. (1996) We need to rethink the editorial role of peer reviewers. The Chronicle of Higher Education 43(9), B3.
Baggs J.G., Broome M.E., Dougherty M.C., Freda M.C. & Kearney M.H. (2008) Blinding in peer review: the preferences of reviewers for nursing journals. Journal of Advanced Nursing 64(2), 131–138.
Barbash F. (2015) Major publisher retracts 43 scientific papers amid wider fake peer-review scandal. Washington Post. Available at http://www.washingtonpost.com/news/morning-mix/wp/2015/03/27/fabricated-peer-reviews-prompt-scientific-journal-to-retract-43-papers-systematic-scheme-may-affect-other-journals/ (accessed 30 January 2016).
Bernstein R. (2015) PLOS ONE ousts reviewer, editor after sexist peer-review storm. ScienceInsider, 1 May. Available at http://news.sciencemag.org/scientific-community/2015/04/sexist-peer-review-elicits-furious-twitter-response (accessed 7 December 2015).
Bordage G. & Caelleigh A.S. (2001) A tool for reviewers: “Review Criteria for Research Manuscripts”. Academic Medicine 76(9), 904–908.
Brazeau G.A., DiPiro J.T., Fincham J.E., Boucher B.A. & Tracy T.S. (2008) Your role and responsibilities in the manuscript peer review process. American Journal of Pharmaceutical Education 72(3), 69.
Broome M., Dougherty M.C., Freda M.C., Kearney M.H. & Baggs J.G. (2010) Ethical concerns of nursing reviewers: an international survey. Nursing Ethics 17(6), 741–748.
Christenbery T.L. (2011) Manuscript peer review: a guide for advanced practice nurses. Journal of the American Academy of Nurse Practitioners 23(1), 15–22.
Committee on Publication Ethics (2013) COPE Ethical Guidelines for Peer Reviewers. COPE. Available at http://publicationethics.org/files/Peer%20review%20guidelines_0.pdf (accessed 20 July 2015).
Davidoff F. & DeAngelis C.D. (2001) Sponsorship, authorship, and accountability. Journal of the American Medical Association 286, 1232–1233.
Dipboye R.L. (2006) Peer reviews in the production of knowledge: why I stopped worrying and learned to appreciate the flaws in the review process. In Winning Reviews: A Guide for Evaluating Scholarly Writing (Baruch Y., Sullivan S. & Schepmyer H., eds), Palgrave Macmillan, New York, pp. 3–26.
Eastwood S. (2000) Ethical issues in biomedical publication. In Ethical Issues in Biomedical Publication (Jones A.H. & McLellan F., eds), Johns Hopkins University Press, Baltimore, pp. 250–275.
Evans A.T., McNutt R.A., Fletcher S.W. & Fletcher R.H. (1993) The characteristics of peer reviewers who produce good-quality reviews. Journal of General Internal Medicine 8(8), 422–428.
Fox C.W., Burns C.S. & Meyer J.A. (2016) Editor and reviewer gender influence the peer review process but not peer review outcomes at an ecology journal. Functional Ecology 30(1), 140–153.
Fyfe A. (2015) Peer review: not as old as you might think. Times Higher Education. Available at https://www.timeshighereducation.co.uk/features/peer-review-not-old-you-might-think (accessed 12 March 2016).
Gilbert J.R., Williams E.S. & Lundberg G.D. (1994) Is there gender bias in JAMA's peer review process? Journal of the American Medical Association 272, 139–142.
Godlee F., Gale C.R. & Martyn C.N. (1998) Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. Journal of the American Medical Association 280, 237–240.
Henly S.J. & Dougherty M.C. (2009) Quality of manuscript reviews in nursing research. Nursing Outlook 57, 18–26.
Hojat M., Gonnella J.S. & Caelleigh A.S. (2003) Impartial judgment by the ‘gatekeepers’ of science: fallibility and accountability in the peer review process. Advances in Health Sciences Education 8, 75–96.
House of Commons Science and Technology Committee (2011) Peer Review in Scientific Publications. The Stationery Office Ltd, London.
Hunter J. (2012) Post-publication review: opening up scientific conversation. Frontiers in Computational Neuroscience 6, 63.
Jefferson T., Alderson P., Wager E. & Davidoff F. (2002) Effects of editorial peer review: a systematic review. Journal of the American Medical Association 287, 2784–2786.
Jefferson T., Rudin M., Brodney Folse S. & Davidoff F. (2007) Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database of Systematic Reviews 2, MR000016. doi: 10.1002/14651858.MR000016.pub3.
Kearney M.H. & Freda M.C. (2005) Nurse editors' views on the peer review process. Research in Nursing & Health 28, 444–452.
Kowalczuk M.K., Dudbridge F., Nanda S., Harriman S.L., Patel J. & Moylan E.C. (2015) Retrospective analysis of the quality of reports by author-suggested and non-author-suggested reviewers in journals operating on open or single-blind peer review models. BMJ Open 5, e008707. doi: 10.1136/bmjopen-2015-008707.
Kriegeskorte N. (2012) Open evaluation: a vision for entirely transparent post-publication peer review and rating for science. Frontiers in Computational Neuroscience 6, 79.
Kronick D.A. (1990) Peer review in 18th-century scientific journalism. Journal of the American Medical Association 263, 1321–1322.
Lee C.J., Sugimoto C.R., Zhang G. & Cronin B. (2013) Bias in peer review. Journal of the American Society for Information Science and Technology 64(1), 2–17.
Lipworth W.L., Kerridge I.H., Carter S.M. & Little M. (2011) Journal peer review in context: a qualitative study of the social and subjective dimensions of manuscript review in biomedical publishing. Social Science & Medicine 72, 1056–1063.
Lovejoy T.I., Revenson T.A. & France C.R. (2011) Reviewing manuscripts for peer-review journals: a primer for novice and seasoned reviewers. Annals of Behavioral Medicine 42(1), 1–13.
Mahoney M.J. (1977) Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research 1(2), 161–175.
Moylan E. (2015) Inappropriate manipulation of peer review. BioMed Central blog. Available at http://blogs.biomedcentral.com/bmcblog/2015/03/26/manipulation-peer-review/ (accessed 15 March 2016).
Patel J. (2014) Why training and specialization is needed for peer review: a case study of peer review for randomized controlled trials. BMC Medicine 12(1), 1–7.
Pierson C.A. (2011) Reviewing Journal Manuscripts: An Easy to Follow Guide for any Nurse Reviewing Journal Manuscripts for Publication. Wiley-Blackwell. Available at http://naepub.com/wp-content/uploads/2015/08/24329-Nursing-ReviewingMSS12ppselfcover_8.5x11_for_web.pdf (accessed 15 March 2016).
Priem R.L. & Rasheed A.A. (2006) Reviewing as a vital professional service. In Winning Reviews: A Guide for Evaluating Scholarly Writing (Baruch Y., Sullivan S. & Schepmyer H., eds), Palgrave Macmillan, Basingstoke, pp. 27–40.
Provenzale J.M. & Stanley R.J. (2006) A systematic guide to reviewing a manuscript. Journal of Nuclear Medicine Technology 34(2), 92–99.
Relman A.S. & Angell M. (1989) How good is peer review? New England Journal of Medicine 321(12), 827–829.
Rennie D. (2003) Editorial peer review: its development and rationale. In Peer Review in Health Sciences, 2nd edn (Godlee F. & Jefferson T., eds), BMJ Books, BMJ Publishing Group, pp. 1–13.
van Rooyen S., Godlee F., Evans S., Smith R. & Black N. (1998) Effect of blinding and unmasking on the quality of peer review: a randomized trial. Journal of the American Medical Association 280, 234–237.
Rosenthal R. (1979) The file drawer problem and tolerance for null results. Psychological Bulletin 86(3), 638.
Rothwell P.M. & Martyn C.N. (2000) Reproducibility of peer review in clinical neuroscience: is agreement between reviewers any greater than would be expected by chance alone? Brain 123, 1964–1969.
Shattell M., Chinn P., Thomas S. & Cowling W.R. (2010) Authors' and editors' perspectives on peer review quality in three scholarly nursing journals. Journal of Nursing Scholarship 42, 58–65.
Shea J.A., Caelleigh A.S., Pangaro L. & Steinecke A. (2001) Review process. Academic Medicine 76(9), 911–914.
Smith R. (1999) Opening up BMJ peer review. British Medical Journal 318, 4–5.
Smith R. (2015) Ineffective at any dose? Why peer review simply doesn't work. Times Higher Education. Available at https://www.timeshighereducation.co.uk/content/the-peer-review-drugs-dont-work (accessed 15 March 2016).
Teixeira da Silva J.A. & Dobránszki J. (2015) Problems with traditional science publishing and finding a wider niche for post-publication peer review. Accountability in Research: Policies and Quality Assurance 22, 22–40.
The British Academy (2007) Peer Review: The Challenges for the Humanities and Social Sciences. A British Academy Report. The British Academy. Available at http://www.britac.ac.uk/policy/peer-review.cfm (accessed 20 June 2015).
Ware M. (2008) Peer Review: Benefits, Perceptions and Alternatives. Publishing Research Consortium, London.
Watson R. (2012) Peer review under the spotlight in the UK. Journal of Advanced Nursing 68(4), 718–720.
Watson R. (2016) PubPeer: never heard of it? You have now. Nurse Author & Editor 26(1), 2.
