A Step-by-Step Guide to Writing a Scientific Review Article

Manisha Bahl, A Step-by-Step Guide to Writing a Scientific Review Article, Journal of Breast Imaging, Volume 5, Issue 4, July/August 2023, Pages 480–485, https://doi.org/10.1093/jbi/wbad028

Scientific review articles are comprehensive, focused reviews of the scientific literature written by subject matter experts. The task of writing a scientific review article can seem overwhelming; however, it can be managed by using an organized approach and devoting sufficient time to the process. The process involves selecting a topic about which the authors are knowledgeable and enthusiastic, conducting a literature search and critical analysis of the literature, and writing the article, which is composed of an abstract, introduction, body, and conclusion, with accompanying tables and figures. This article, which focuses on the narrative or traditional literature review, is intended to serve as a guide with practical steps for new writers. Tips for success are also discussed, including selecting a focused topic, maintaining objectivity and balance while writing, avoiding tedious data presentation in a laundry list format, moving from descriptions of the literature to critical analysis, avoiding simplistic conclusions, and budgeting time for the overall process.

Career Column, 08 October 2018

How to write a thorough peer review

Mathew Stiller-Reeve

Mathew Stiller-Reeve is a climate researcher at NORCE/Bjerknes Centre for Climate Research in Bergen, Norway, the leader of SciSnack.com, and a thematic editor at Geoscience Communication.

Scientists do not receive enough peer-review training. To improve this situation, a small group of editors and I developed a peer-review workflow to guide reviewers in delivering useful and thorough analyses that can really help authors to improve their papers.

doi: https://doi.org/10.1038/d41586-018-06991-0

This is an article from the Nature Careers Community, a place for Nature readers to share their professional experiences and advice. Guest posts are encouraged. You can get in touch with the editor at [email protected].

Peer Review Template

Think about structuring your reviewer report like an upside-down pyramid. The most important information goes at the top, followed by supporting details.

Sample outline

In your own words, summarize the main research question, claims, and conclusions of the study. Provide context for how this research fits within the existing literature.

Discuss the manuscript’s strengths and weaknesses and your overall recommendation.

Major Issues

  • Other points (optional): If applicable, add confidential comments for the editors. Raise any concerns about the manuscript that they may need to consider further, such as concerns about ethics. Do not use this section for your overall critique. Also mention whether you might be available to look at a revised version.

The genesis of this paper is the proposal that genomes containing a low percentage of guanine and cytosine (GC) nucleotide pairs lead to proteomes more prone to aggregation than those encoded by GC-rich genomes. As a consequence, these organisms are also more dependent on the protein folding machinery. If true, this interesting hypothesis could establish a direct link between the tendency to aggregate and the genomic code.

In their paper, the authors have tested the hypothesis on the genomes of eubacteria using a genome-wide approach based on multiple machine learning models. Eubacteria are an interesting set of organisms with appreciably high variation in nucleotide composition, with GC content ranging from 20% to 70%. The authors classified different eubacterial proteomes in terms of their aggregation propensity and chaperone dependence. For this purpose, new classifiers had to be developed, based on carefully curated data. They took into account twenty-four different features, among which are sequence patterns, the pseudo amino acid composition of phenylalanine, aspartic acid, and glutamic acid, the distribution of positively charged amino acids, the FoldIndex score, and hydrophobicity. These classifiers seem to be altogether more accurate and robust than previous such parameters.

The authors found that, contrary to what was expected from the working hypothesis, which would predict a decrease in protein aggregation with an increase in GC richness, the aggregation propensity of proteomes increases with GC content; thus the stability of the proteome against aggregation increases as GC content decreases. The work also established a direct correlation between GC-poor proteomes and a lower dependence on GroEL. The authors conclude by proposing that a decrease in eubacterial GC content may have been selected in organisms facing proteostasis problems. A way to test the overall results would be in vitro evolution experiments aimed at testing whether adaptation to low GC content provides a folding advantage.

The main strengths of this paper are that it addresses an interesting and timely question, finds a novel solution based on a carefully selected set of rules, and provides a clear answer. As such, this article represents an excellent and elegant genome-wide bioinformatics study that will almost certainly influence our thinking about protein aggregation and evolution. The main weakness is that the text is not always easy to read and at times establishes unclear logical links between concepts.

Another possible criticism could be that, as with any in silico study, it makes strong assumptions about the sequence features that lead to aggregation and relies heavily on the quality of the classifiers used. Even though the classifiers developed here seem to be more robust than previous such parameters, they remain overall indicators that support only statistical conclusions. It could of course be argued that this is good enough to reach meaningful conclusions in this specific case.

The paper by Chevalier et al. analyzed whether the late sodium current (INaL) can be assessed using an automated patch-clamp device. To this end, the INaL effects of ranolazine (a well-known INaL inhibitor) and veratridine (an INaL activator) were described. The authors tested the CytoPatch automated patch-clamp equipment and performed whole-cell recordings in HEK293 cells stably transfected with human Nav1.5. Furthermore, they also tested the electrophysiological properties of human induced pluripotent stem cell-derived cardiomyocytes (hiPS) provided by Cellular Dynamics International. The title and abstract are appropriate for the content of the text. Furthermore, the article is well constructed, the experiments were well conducted, and the analysis was well performed.

INaL is a small current component generated by a fraction of Nav1.5 channels that, instead of entering the inactivated state, rapidly reopen in a burst mode. INaL critically determines action potential duration (APD), in such a way that both acquired (myocardial ischemia and heart failure, among others) and inherited (long QT type 3) diseases that augment the INaL magnitude also increase the susceptibility to cardiac arrhythmias. Therefore, INaL has been recognized as an important target for the development of drugs with either anti-ischemic or antiarrhythmic effects. Unfortunately, accurate measurement of INaL is a time-consuming technical challenge because of its extremely small density. The automated patch-clamp device tested by Chevalier et al. resolves this problem and allows fast and reliable INaL measurements.

The results presented here merit some comments and raise some unresolved questions. First, in some experiments (as is the case in experiments B and D in Figure 2) the current recordings obtained before ranolazine perfusion seem to be quite unstable. Indeed, the amplitude progressively increased to a maximum value that was considered the control value (highlighted with arrows). Can this problem be overcome? Is this a consequence of slow intracellular dialysis? Is it a consequence of a time-dependent shift of the voltage dependence of activation/inactivation? Second, as shown in Figure 2, the intensity of drug effects seems to be quite variable. In fact, experiments A, B, C, and D in Figure 2 and panel 2D demonstrate that veratridine augmentation ranged from 0% to 400%. Even allowing for normal biological variability, we wonder whether this broad range of effect intensities can be justified by changes in the perfusion system. Has the automated dispensing system been tested? If not, we suggest testing the effects of several K+ concentrations on inward rectifier currents generated by Kir2.1 channels (IKir2.1).

The authors demonstrated that the recording quality was so high that the automated device allows differentiation between noise and current, even when measuring currents of less than 5 pA in amplitude. In order to make more precise mechanistic assumptions, the authors performed an elegant estimation of current variance (σ²) and macroscopic current (I) following the procedure described more than 30 years ago by Van Driessche and Lindemann 1. By means of this method, Chevalier et al. concluded that ranolazine acts by reducing the open channel probability, while veratridine increases the number of channels in the burst mode. We respectfully would like to stress that these considerations must be put in context from a pharmacological point of view. We do not doubt that ranolazine acts as an open channel blocker; what seems clear, however, is that its onset block kinetics must be “ultra” slow, otherwise ranolazine would decrease peak INaL even at low stimulation frequencies. This comment points towards the fact that, for a precise mechanistic study of drugs that modify ionic currents, it is mandatory to analyze drug effects with much more complicated pulse protocols. The questions thus are: does this automated equipment allow analysis of the frequency-, time-, and voltage-dependent effects of drugs? Can versatile and complicated pulse protocols be applied? Does it allow good voltage control even when the generated currents are large and fast? If not, then, given its extraordinary discrimination between current and noise, this automated patch-clamp equipment will only be helpful for rapid screening of INaL-modifying drugs. Obviously, it will also be well suited to testing hERG-blocking drug effects, as demanded by the regulatory authorities.

Finally, as cardiac electrophysiologists, we would like to stress that our dream of testing drug effects on human ventricular myocytes seems to be coming true. Indeed, human atrial myocytes are technically, ethically, and logistically difficult to obtain, and human ventricular myocytes are almost impossible to obtain except from hearts explanted from patients at the end stage of cardiac disease. Here the authors demonstrated that ventricular myocytes derived from hiPS generate beautiful action potentials that can be recorded with this automated equipment. The traces shown suggest that there was no alternation in the action potential duration. Is this a consistent finding? How long do these stable recordings last? The only comment is that the resting membrane potential seems to be somewhat variable. Can this be resolved? Is it an unexpected veratridine effect? Standardization of maturation methods for ventricular myocytes derived from hiPS will be a big achievement for cardiac cellular electrophysiology, which for years has been obliged to rely on imprecise extrapolation from data obtained in a combination of several species, none of which is representative of human electrophysiology. The big deal will be the maturation of human atrial myocytes derived from hiPS that fulfil the known characteristics of human atrial cells.

We suggest removing the initial sentence of section 3. We surmise that the results obtained from the experiments described in this section cannot serve to explain the role of INaL in arrhythmogenesis.

1. Van Driessche W, Lindemann B: Concentration dependence of currents through single sodium-selective pores in frog skin. Nature. 1979; 282(5738): 519-520.

The authors have clarified several of the questions I raised in my previous review. Unfortunately, most of the major problems have not been addressed by this revision. As I stated in my previous review, I deem it unlikely that all those issues can be solved merely by a few added paragraphs. Instead there are still some fundamental concerns with the experimental design and, most critically, with the analysis. This means the strong conclusions put forward by this manuscript are not warranted and I cannot approve the manuscript in this form.

  • The greatest concern is that when I followed the description of the methods in the previous version it was possible to decode, with almost perfect accuracy, any arbitrary stimulus labels I chose. See https://doi.org/10.6084/m9.figshare.1167456 for examples of this reanalysis. Regardless of whether we pretend that the actual stimulus appeared at a later time or was continuously alternating between signal and silence, the decoding is always close to perfect. This is an indication that the decoding has nothing to do with the actual stimulus heard by the Sender but is opportunistically exploiting some other features in the data. The control analysis the authors performed, reversing the stimulus labels, cannot address this problem because it suffers from the exact same problem. Essentially, what the classifier is presumably using is the time that has passed since the recording started.
  • The reason for this is presumably that the authors used non-independent data for training and testing. Assuming I understand correctly (see point 3), randomly sampling one half of the data samples from an EEG trace does not yield independent data. Repeating the analysis five times – the control analysis the authors performed – is not an adequate way to address this concern. Randomly selected samples from a time series containing slow changes (such as the slow-wave activity that presumably dominates these recordings under these circumstances) will inevitably contain strong temporal correlations. See TemporalCorrelations.jpg in https://doi.org/10.6084/m9.figshare.1185723 for 2D density histograms and a correlation matrix demonstrating this.
  • While the revised methods section provides more detail now, it is still unclear exactly what data were used. Conventional classification analyses report which data features (usually the columns of the data matrix) and which observations (usually the rows) were used. Anything could be a feature, but typically these are the different EEG channels, fMRI voxels, etc. Observations are usually time points. Here I assume the authors transformed the raw samples into a different space using principal component analysis. It is not stated whether the dimensionality was reduced using the eigenvalues. Either way, I assume the data samples (collected at 128 Hz) were then used as observations and the EEG channels transformed by PCA were used as features. The stimulus labels were assigned as ON or OFF for each sample. A set of 50% of samples (and labels) was then selected at random for training, and the rest was used for testing. Is this correct?
  • A powerful non-linear classifier can capitalise on such correlations to discriminate arbitrary labels. In my own analyses I used both an SVM with an RBF kernel and a k-nearest-neighbour classifier, both of which produce excellent decoding of arbitrary stimulus labels (see point 1). Interestingly, linear classifiers or less powerful SVM kernels fare much worse – a clear indication that the classifier learns the complex non-linear pattern of temporal correlations that can describe the stimulus label. This is further corroborated by the fact that when using stimulus labels chosen completely at random (i.e., with high temporal frequency), decoding does not work.
  • The authors have mostly clarified how the correlation analysis was performed. It is still left unclear, however, how the correlations for individual pairs were averaged. Was Fisher’s z-transformation used, or were the data pooled across pairs? More importantly, it is not entirely surprising that under the experimental conditions there will be some correlation between the EEG signals of different participants, especially in low frequency bands. Again, this further supports the suspicion that the classification utilizes slow-frequency signals that are unrelated to the stimulus and the experimental hypothesis. In fact, a quick spot check seems to confirm this suspicion: correlating the time series separately for each channel from the Receiver in pair 1 with those from the Receiver in pair 18 reveals 131 significant (p < 0.05, Bonferroni corrected) out of 196 (14 × 14 channels) correlations. One could perhaps argue that this is not surprising because both these pairs had been exposed to identical stimulus protocols: one minute of initial silence and only one signal period (see point 6). However, it certainly argues strongly against the notion that the decoding is in any way related to the mental connection between the particular Sender and Receiver in a given pair, because it clearly works between Receivers in different pairs! However, to further control for this possibility I repeated the same analysis but now comparing the Receiver from pair 1 to the Receiver from pair 15. This pair was exposed to a different stimulus paradigm (2 minutes of initial silence and a longer paradigm with three signal periods). I only used the initial 3 minutes for the correlation analysis. Therefore, both recordings would have been exposed to only one signal period, but at different times (at 1 min and 2 min for pairs 1 and 15, respectively). Even though the stimulus protocol was completely different, the time courses for all the channels are highly correlated and 137 out of 196 correlations are significant. Considering that I used the raw data for this analysis, it should not surprise anyone that extracting power from different frequency bands in short time windows will also reveal significant correlations. Crucially, this demonstrates that correlations between Sender and Receiver are artifactual and trivial.
  • The authors argue in their response and the revision that predictive strategies were unlikely. After having performed these additional analyses, I am inclined to agree. The excellent decoding almost certainly has nothing to do with expectation or imagery effects, and it is irrelevant whether participants could guess the temporal design of the experiment. Rather, the results are almost entirely an artefact of the analysis. However, this does not mean that predictability is not an issue. The figure StimulusTimecourses.jpg in https://doi.org/10.6084/m9.figshare.1185723 plots the stimulus time courses for all 20 pairs as they can be extracted from the newly uploaded data. This confirms what I wrote in my previous review; in fact, with the corrected data sets the problem with predictability is even greater. Out of the 20 pairs, 13 started with 1 min of initial silence. The remaining 7 had 2 minutes of initial silence. Most of the stimulus paradigms are therefore perfectly aligned and thus highly correlated. This also proves incorrect the statement that initial silence periods were 1, 2, or 3 minutes. No pair had 3 min of initial silence. It would therefore have been very easy for any given Receiver to correctly guess the protocol. It should be clear that this is far from optimal for testing such an unorthodox hypothesis. Any future experiments should employ more randomization to decrease predictability. Even if this wasn’t the underlying cause of the present results, this is simply not great experimental design.
  • The authors now acknowledge in their response that all the participants were authors. They say that this is also acknowledged in the methods section, but I did not see any statement about that in the revised manuscript. As before, I also find it highly questionable to include only authors in an experiment of this kind. It is not sufficient to claim that Receivers weren’t guessing their stimulus protocol. While I am giving the authors (and thus the participants) the benefit of the doubt that they genuinely believe they weren’t guessing/predicting the stimulus protocols, this does not rule out that they did. It may in fact be possible to make such predictions subconsciously (now, if you ask me, this is an interesting scientific question someone should do an experiment on!). The fact that the participants were familiar with the protocol may help with that. Any future experiments should take steps to prevent this.
  • I do not follow the explanation for the binomial test the authors used. Based on the excessive Bayes Factor of 390,625 it is clear that the authors assumed a chance level of 50% on their binomial test. Because the design is not balanced, this is not correct.
  • In general, the Bayes Factor and the extremely high decoding accuracy should have given the authors reason to pause. Considering the unusual hypothesis, did the authors not at any point wonder whether these results are simply far too good to be true? Decoding mental states from brain activity is typically extremely noisy and hardly affords accuracies at the level seen here. Extremely accurate decoding and Bayes Factors in the hundreds of thousands should be a tell-tale sign to check that there isn’t an analytical flaw that makes the result entirely trivial. I believe this is what happened here, and thus I think this experiment serves as a very good demonstration of the pitfalls of applying such analyses without sanity checks. In order to make claims like this, the experimental design must contain control conditions that can rule out these problems. Presumably, recordings without any Sender, and maybe even ones in which the “Receiver” is aware of this fact, should produce very similar results.

Based on all these factors, it is impossible for me to approve this manuscript. I should, however, state that it is laudable that the authors chose to make all the raw data of their experiment publicly available. Without this it would have been impossible for me to carry out the additional analyses, and thus the most fundamental problem in the analysis would have remained unknown. I respect the authors’ patience and professionalism in dealing with what I can only assume is a rather harsh review experience. I am honoured by the request for an adversarial collaboration. I do not rule out such efforts at some point in the future. However, for all of the reasons outlined in this and my previous review, I do not think the time is right for this experiment to proceed to this stage. Fundamental analytical flaws and weaknesses in the design should be ruled out first. An adversarial collaboration only really makes sense to me for paradigms where we can be confident that mundane or trivial factors have been excluded.

This manuscript does an excellent job demonstrating significant strain differences in Buridan's paradigm. Since each Drosophila lab has its own wild-type (usually Canton-S) isolate, this issue of strain differences is actually a very important one for between-lab reproducibility. This work is a good reminder for all geneticists to pay attention to population effects in the background controls, and presumably in the mutant lines we are comparing.

I was very pleased to see that the within-isolate behavior was consistent in replicate experiments one year apart. The authors further argue that the between-isolate differences in behavior arise from a founder effect, at least for the differences in locomotor behavior between the Paris lines CS_TP and CS_JC. I believe this is a very reasonable and testable hypothesis. It predicts that genetic variability for these traits exists within the populations. It should now be possible to perform selection experiments from the original CS_TP population to replicate the founding event and estimate the heritability of these traits.

Two other things that I liked about this manuscript are the ability to adjust parameters in Figure 3 and our ability to download the raw data. After reading the manuscript, I was a little disappointed that the performance of the five strains on each of the 12 behavioral variables wasn't broken down individually in a table or figure. I thought this might help us readers understand what the principal components were representing. The authors have, however, made these data readily accessible in a downloadable spreadsheet.

This is an exceptionally good review and balanced assessment of the status of CETP inhibitors and ASCVD from a world authority in the field. The article highlights important data that might have been overlooked when promulgating the clinical value of CETPIs and related trials.

Only 2 areas need revision:

  • Page 3, para 2: the notion that these data from Papp et al. convey is critical, and the message needs an explicit sentence or two at the end of the paragraph.
  • Page 4, Conclusion: the assertion concerning the ethics of the two Phase 3 clinical trials needs toning down. Perhaps rephrase to indicate that the value and sense of doing these trials is open to question, with attendant ethical implications, or softer wording to that effect.

The Wiley et al. manuscript describes a beautiful synthesis of contemporary genetic approaches to identify, with astonishing efficiency, lead compounds for therapeutic approaches to a serious human disease. I believe the importance of this paper stems from the applicability of the approach to the several thousand rare human disease genes that next-gen sequencing will uncover in the next few years and the challenge we will have in figuring out the function of these genes and their resulting defects. This work presents a paradigm that can be broadly and usefully applied.

In detail, the authors begin with the gene responsible for X-linked spinal muscular atrophy and express both the wild-type version of that human gene and a mutant form of it in S. pombe. The conceptual leap here is that progress in genetics is driven by phenotype, and this approach, involving a yeast with no spine or muscles to atrophy, is nevertheless an N-dimensional detector of phenotype.

The study is not without a small measure of luck, in that expression of the wild-type UBA1 gene caused a slow-growth phenotype which the mutant did not. Hence there was something in S. pombe that could feel the impact of this protein. Given this phenotype, the authors then went to work and, using the power of the synthetic genetic array approach pioneered by Boone and colleagues, made a systematic set of double mutants combining the human-expressed UBA1 gene with knockout alleles of a plurality of S. pombe genes. They found well over a hundred mutations that either enhanced or suppressed the growth defect of the cells expressing UBA1. Most of these have human orthologs. My hunch is that many human genes expressed in yeast will have some comparably exploitable phenotype, and time will tell.

Building on the interaction networks of S. pombe genes already established, augmenting these networks with the protein interaction networks from yeast and from human proteome studies involving these genes, and drawing on the structure of the emerging networks, the authors deduced that an E3 ligase modulates UBA1 and made the leap that it might therefore also impact X-linked spinal muscular atrophy.

Here, the awesome power of the model organism community comes into the picture, as there is a zebrafish model of spinal muscular atrophy. The principle of phenologs articulated by the Marcotte group inspires the recognition of the transitive logic of how phenotypes in one organism relate to phenotypes in another. With this zebrafish model, they were able to confirm that an inhibitor of E3 ligases and of the Nedd8-E1 activating enzyme suppressed the motor axon anomalies, as predicted by the effect of mutations in S. pombe on the phenotypes of UBA1 overexpression.

I believe this is an important paper to teach in intro graduate courses as it illustrates beautifully how important it is to know about and embrace the many new sources of systematic genetic information and apply them broadly.

This paper by Amrhein et al. criticizes a paper by Bradley Efron that discusses Bayesian statistics (Efron, 2013a), focusing on a particular example that was also discussed in Efron (2013b). The example concerns a woman who is carrying twins, both male (as determined by sonogram; we ignore the possibility that gender has been observed incorrectly). The parents-to-be ask Efron to tell them the probability that the twins are identical.

This is my first open review, so I'm not sure of the protocol. But given that there appear to be errors in both Efron (2013b) and the paper under review, I am sorry to say that my review might actually be longer than the article by Efron (2013a), the primary focus of the critique, and the critique itself. I apologize in advance for this. To start, I will outline the problem being discussed for the sake of readers.

This problem has various parameters of interest. The primary parameter is the genetic composition of the twins in the mother’s womb. Are they identical (which I describe as the state x = 1) or fraternal twins (x = 0)? Let y be the data, with y = 1 indicating that the twins are the same gender. Finally, we wish to obtain Pr(x = 1 | y = 1), the probability that the twins are identical given they are the same gender (note 1). Bayes’ rule gives us an expression for this:

Pr(x = 1 | y = 1) = Pr(x = 1) Pr(y = 1 | x = 1) / {Pr(x = 1) Pr(y = 1 | x = 1) + Pr(x = 0) Pr(y = 1 | x = 0)}

Now we know that Pr(y = 1 | x = 1) = 1; twins must be the same gender if they are identical. Further, Pr(y = 1 | x = 0) = 1/2; if twins are not identical, the probability of them being the same gender is 1/2.

Finally, Pr(x = 1) is the prior probability that the twins are identical. The bone of contention in the Efron papers and the critique by Amrhein et al. revolves around how this prior is treated. One can think of Pr(x = 1) as the population-level proportion of twins that are identical for a mother like the one being considered.

However, if we ignore other forms of twins that are extremely rare (equivalent to ignoring coins finishing on their edges when flipping them), one incontrovertible fact is that Pr(x = 0) = 1 − Pr(x = 1); the probability that the twins are fraternal is the complement of the probability that they are identical.

The above values and expressions for Pr(y = 1 | x = 1), Pr(y = 1 | x = 0), and Pr(x = 0) lead to a simpler expression for the probability that we seek, the probability that the twins are identical given they have the same gender:

Pr(x = 1 | y = 1) = 2 Pr(x = 1) / [1 + Pr(x = 1)]     (1)
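For readers who want to check the algebra, substituting the values above into Bayes' rule gives (a short worked step):

\[
\Pr(x = 1 \mid y = 1)
= \frac{\Pr(x = 1)\cdot 1}{\Pr(x = 1)\cdot 1 + \left[1 - \Pr(x = 1)\right]\cdot \tfrac{1}{2}}
= \frac{2\,\Pr(x = 1)}{1 + \Pr(x = 1)} .
\]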

We see that the answer depends on the prior probability that the twins are identical, Pr(x = 1). The paper by Amrhein et al. points out that this is a mathematical fact. For example, if identical twins were impossible (Pr(x = 1) = 0), then Pr(x = 1 | y = 1) = 0. Similarly, if all twins were identical (Pr(x = 1) = 1), then Pr(x = 1 | y = 1) = 1. The “true” prior lies somewhere in between. Apparently, the doctor knows that one third of twins are identical (note 2). Therefore, if we assume Pr(x = 1) = 1/3, then Pr(x = 1 | y = 1) = 1/2.

Now, what would happen if we didn't have the doctor's knowledge? Laplace's “Principle of Insufficient Reason” would suggest that we give equal prior probability to all possibilities, so Pr(x = 1) = 1/2 and Pr(x = 1 | y = 1) = 2/3, an answer different from the 1/2 obtained when using the doctor's prior of 1/3.
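Both numbers follow directly from equation (1):

\[
\frac{2 \cdot \tfrac{1}{3}}{1 + \tfrac{1}{3}} = \frac{2/3}{4/3} = \frac{1}{2},
\qquad
\frac{2 \cdot \tfrac{1}{2}}{1 + \tfrac{1}{2}} = \frac{1}{3/2} = \frac{2}{3} .
\]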

Efron (2013a) highlights this sensitivity to the prior, representing someone who defines an uninformative prior as a “violator”, with Laplace as the “prime violator”. In contrast, Amrhein et al. correctly point out that the difference in the posterior probabilities is merely a consequence of mathematical logic. No one is violating logic – they are merely expressing ignorance by specifying equal probabilities for all states of nature. Whether this is philosophically valid is debatable (Colyvan 2008), but that question is well beyond the scope of this review. But setting Pr(x = 1) = 1/2 is not a violation; it is merely an assumption with consequences (and one that in hindsight might be incorrect (note 2)).

Alternatively, if we don’t know Pr(x = 1), we could describe that probability by its own probability distribution. Now the problem has two aspects that are uncertain. We don’t know the true state x, and we don’t know the prior (except in the case where we use the doctor’s knowledge that Pr(x = 1) = 1/3). Uncertainty in the state of x refers to uncertainty about this particular set of twins. In contrast, uncertainty in Pr(x = 1) reflects uncertainty in the population-level frequency of identical twins. A key point is that the state of one particular set of twins is a different parameter from the frequency of occurrence of identical twins in the population.

Without knowledge about Pr(x = 1), we might use Pr(x = 1) ~ dunif(0, 1), which is consistent with Laplace. Alternatively, Efron (2013b) notes another option for an uninformative prior: Pr(x = 1) ~ dbeta(0.5, 0.5), which is the Jeffreys prior for a probability.

Here I disagree with Amrhein et al.; I think they are confusing the two uncertain parameters. Amrhein et al. state:

“We argue that this example is not only flawed, but useless in illustrating Bayesian data analysis because it does not rely on any data. Although there is one data point (a couple is due to be parents of twin boys, and the twins are fraternal), Efron does not use it to update prior knowledge. Instead, Efron combines different pieces of expert knowledge from the doctor and genetics using Bayes’ theorem.”

This claim might be correct when describing uncertainty in the population-level frequency of identical twins. The data about the twin boys are not useful by themselves for this purpose – they are a biased sample (the data have come to light because their gender is the same; they are not a random sample of twins). Further, a sample of size one, especially if biased, is not a firm basis for inference about a population parameter. While the data are biased, the claim by Amrhein et al. that there are no data is incorrect.

However, the data point (the twins have the same gender) is entirely relevant to the question about the state of this particular set of twins. And it does update the prior. This updating of the prior is given by equation (1) above. The doctor’s prior probability that the twins are identical (1/3) becomes the posterior probability (1/2) when using the information that the twins are the same gender. The prior is clearly updated, with Pr(x = 1 | y = 1) ≠ Pr(x = 1) in all but trivial cases; Amrhein et al.’s statement that I quoted above is incorrect in this regard.

This possible confusion between uncertainty about these twins and uncertainty about the population-level frequency of identical twins is further suggested by Amrhein et al.’s statements:

“Second, for the uninformative prior, Efron mentions erroneously that he used a uniform distribution between zero and one, which is clearly different from the value of 0.5 that was used. Third, we find it at least debatable whether a prior can be called an uninformative prior if it has a fixed value of 0.5 given without any measurement of uncertainty.”

Note that if the prior for Pr(x = 1) is specified as 0.5, or dunif(0,1), or dbeta(0.5, 0.5), the posterior probability that these twins are identical is 2/3 in all cases. Efron (2013b) says the different priors lead to different results, but this claim is incorrect, and the correct answer (2/3) is given in Efron (2013a) (note 3). Nevertheless, a prior that specifies Pr(x = 1) = 0.5 does indicate uncertainty about whether this particular set of twins is identical (but certainty in the population-level frequency of twins). And Efron’s (2013a) result is consistent with Pr(x = 1) having a uniform prior. Therefore, both claims in the quote above are incorrect.

It is probably easiest to show the (lack of) influence of the prior using MCMC sampling. Here is WinBUGS code for the case using Pr(x = 1) = 0.5.
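A minimal sketch of such a model (using the variable name from the text; the observed datum y = 1, the twins are the same gender, is supplied as data, and the latent state x is monitored):

   model {
      # prior probability that a single set of twins is identical
      pr_ident_twins <- 0.5
      # for the uncertain-prior cases discussed below, replace the line above with
      # pr_ident_twins ~ dunif(0, 1)   or   pr_ident_twins ~ dbeta(0.5, 0.5)

      # latent state of this particular set of twins: 1 = identical, 0 = fraternal
      x ~ dbern(pr_ident_twins)

      # probability that the twins are the same gender: 1 if identical, 0.5 if fraternal
      p.same <- 0.5 + 0.5 * x

      # observed datum: the twins are the same gender
      y ~ dbern(p.same)    # data: list(y = 1)
   }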

Running this model in WinBUGS shows that the posterior mean of x is 2/3; this is the posterior probability that x = 1.

Instead of using pr_ident_twins <- 0.5, we could treat this probability as uncertain and define pr_ident_twins ~ dunif(0,1), or pr_ident_twins ~ dbeta(0.5,0.5). In either case, the posterior mean value of x remains 2/3 (contrary to Efron 2013b, but in accord with the correction in Efron 2013a).

Note, however, that the value of the population-level parameter pr_ident_twins is different in all three cases. In the first case it remains unchanged at 1/2, where it was set. In the cases where the prior distribution for pr_ident_twins is uniform or beta, the posterior distributions remain broad, but they differ depending on the prior (as they should – different priors lead to different posteriors (note 4)). However, given the biased sample of size 1, the posterior distribution for this particular parameter is likely to be misleading as an estimate of the population-level frequency of twins.

So why doesn’t the choice of prior influence the posterior probability that these twins are identical? Well, for these three priors, the prior probability that any single set of twins is identical is 1/2 (this is essentially the mean of the prior distributions in these three cases).

If, instead, we set the prior as dbeta(1,2), which has a mean of 1/3, then the posterior probability that these twins are identical is 1/2. This is the same result as if we had set Pr(x = 1) = 1/3. In both these cases (choosing dbeta(1,2) or 1/3), the prior probability that a single set of twins is identical is 1/3, so the posterior is the same (1/2) given the data (the twins have the same gender).
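One way to see why only the mean of the prior matters for this question: marginally, the prior probability that this particular set of twins is identical is the mean of whatever distribution is placed on pr_ident_twins, and because the relevant probabilities are linear in that parameter, the posterior follows from equation (1) with the prior mean plugged in:

\[
\Pr(x = 1) = \int_0^1 p \, \pi(p) \, dp = \mathrm{E}[p],
\qquad
\Pr(x = 1 \mid y = 1) = \frac{2\,\mathrm{E}[p]}{1 + \mathrm{E}[p]} .
\]

The mean is 1/2 for the fixed value 0.5, for dunif(0,1), and for dbeta(0.5,0.5), giving a posterior of 2/3 in each case; it is 1/3 for dbeta(1,2), giving a posterior of 1/2.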

Further, Amrhein et al. also seem to misunderstand the data. They note:

“Although there is one data point (a couple is due to be parents of twin boys, and the twins are fraternal)...”

This is incorrect. The parents simply know that the twins are both male. Whether they are fraternal is unknown (fraternal twins being the complement of identical twins) – that is the question the parents are asking. This error of interpretation makes the calculations in Box 1 and subsequent comments irrelevant.

Box 1 also implies Amrhein et al. are using the data to estimate the population frequency of identical twins rather than the state of this particular set of twins. This is different from the aim of Efron (2013a) and the stated question.

Efron suggests that Bayesian calculations should be checked with frequentist methods when priors are uncertain. However, this is a good example where this cannot be done easily, and Amrhein et al. are correct to point this out. In this case, we are interested in the probability that the hypothesis is true given the data (an inverse probability), not the probabilities that the observed data would be generated given particular hypotheses (frequentist probabilities). If one wants the inverse probability (the probability that the twins are identical given they are the same gender), then Bayesian methods (and therefore a prior) are required. A logical answer simply requires that the prior is constructed logically. Whether that answer is “correct” will, in most cases, be known only in hindsight.

However, one possible way to analyse this example using frequentist methods would be to assess the likelihood of obtaining the data under each of the two hypotheses (the twins are identical or fraternal). The likelihood of the twins having the same gender under the hypothesis that they are identical is 1. The likelihood of the twins having the same gender under the hypothesis that they are fraternal is 0.5. Therefore, the weight of evidence in favour of identical twins is twice that of fraternal twins. Scaling these weights so they sum to one (Burnham and Anderson 2002) gives a weight of 2/3 for identical twins and 1/3 for fraternal twins. These scaled weights have the same numerical values as the posterior probabilities based on either a Laplace or Jeffreys prior. Thus, one might argue that the weight of evidence for each hypothesis when using frequentist methods is equivalent to the posterior probabilities derived from an uninformative prior. So, as a final aside in reference to Efron (2013a), if we are being “violators” when using a uniform prior, are we also being “violators” when using frequentist methods to weigh evidence? Regardless of the answer to this rhetorical question, “checking” the results with frequentist methods doesn’t give any more insight than using uninformative priors (in this case). However, this analysis shows that the question can be analysed using frequentist methods; the single data point is not a problem for this. The claim in Amrhein et al. that a frequentist analysis "is impossible because there is only one data point, and frequentist methods generally cannot handle such situations" is not supported by this example.

In summary, the comment by Amrhein et al. raises some interesting points that seem worth discussing, but it makes important errors in analysis and interpretation, and misrepresents the results of Efron (2013a) . This means the current version should not be approved.

Burnham, K.P. & D.R. Anderson. 2002. Model Selection and Multi-model Inference: a Practical Information-theoretic Approach. Springer-Verlag, New York.

Colyvan, M. 2008. Is Probability the Only Coherent Approach to Uncertainty? Risk Anal. 28: 645-652.

Efron, B. 2013a. Bayes’ Theorem in the 21st Century. Science 340(6137): 1177-1178.

Efron, B. 2013b. A 250-year argument: Belief, behavior, and the bootstrap. Bull. Amer. Math. Soc. 50: 129-146.

Notes:

1. The twins are both male. However, if the twins were both female, the statistical results would be the same, so I will simply use the data that the twins are the same gender.
2. In reality, the frequency of twins that are identical is likely to vary depending on many factors, but we will accept 1/3 for now.
3. Efron (2013b) reports the posterior probability for these twins being identical as “a whopping 61.4% with a flat Laplace prior” but as 2/3 in Efron (2013a). The latter (I assume 2/3 is “even more whopping”!) is the correct answer, which I confirmed via email with Professor Efron. Therefore, Efron (2013b) incorrectly claims the posterior probability is sensitive to the choice between a Jeffreys or Laplace uninformative prior.
4. When the data are very informative relative to the different priors, the posteriors will be similar, although not identical.

I am very glad the authors wrote this essay. It is a well-written, needed, and useful summary of the current status of “data publication” from a certain perspective. The authors, however, need to be bolder and more analytical. This is an opinion piece, yet I see little opinion. A certain view is implied by the organization of the paper and the references chosen, but they could be more explicit.

The paper would be both more compelling and more useful to a broad readership if the authors moved beyond providing a simple summary of the landscape, examined why there is controversy in some areas, and then used the evidence they have compiled to suggest a path forward. They need to be more forthright in saying what data publication means to them, or what parts of it they do not deal with. Are they satisfied with the Lawrence et al. definition? Do they accept the critique of Parsons and Fox? What is the scope of their essay?

The authors take a rather narrow view of data publication, which I think hinders their analyses. They describe three types of (digital) data publication: Data as a supplement to an article; data as the subject of a paper; and data independent of a paper. The first two types are relatively new and they represent very little of the data actually being published or released today. The last category, which is essentially an “other” category, is rich in its complexity and encompasses the vast majority of data released. I was disappointed that the examples of this type were only the most bare-bones (Zenodo and Figshare). I think a deeper examination of this third category and its complexity would help the authors better characterize the current landscape and suggest paths forward.

Some questions the authors might consider: Are these really the only three models in consideration, or does the publication model overstate a consensus around a certain type of data publication? Why are there different models, and which approach is better for different situations? Do they have different business models or imply different social contracts? Might it also be worthwhile to develop a typology of “publishers” instead of “publications”? For example, do domain repositories vs. institutional repositories vs. publishers address the issues differently? Are these models sustaining models or just something to get us through the next 5-10 years while we really figure it out?

I think this oversimplification inhibited some deeper analysis in other areas as well. I would like to see more examination of the validation requirement beyond the lens of peer review, and I would like a deeper examination of incentives and credit beyond citation.

I thought the validation section of the paper was very relevant, but somewhat light. I like the choice of the term validation as more accurate than “quality”, and it fits quite well with Callaghan’s useful distinction between technical and scientific review, but I think the authors overemphasize the peer-review-style approach. The authors rightly argue that “peer review” is where the publication metaphor leads us, but it may be a false path. They overstate some difficulties of peer review (no one looks at every data value? No, they use statistics, visualization, and other techniques) while not fully considering who is responsible for what. We need a closer examination of different roles and of who the appropriate validators are (not necessarily conventional peers). The narrowly defined models of data publication may easily allow for a conventional peer-review process, but it is much more complex in the real-world “other” category. The authors discuss some of this in what they call “independent data validation,” but they don’t draw any conclusions.

Only the simplest of research data collections are validated only by the original creators. More often there are teams working together to develop experiments, sampling protocols, algorithms, etc. There are additional teams who assess, calibrate, and revise the data as they are collected and assembled. The authors discuss some of this in their examples, like the PDS and tDAR, but I wish they were more analytical and offered an opinion on the way forward. Are there emerging practices or consensus in these team-based schemes? The level-of-service concept illustrated by Open Context may be one such area. Would formalizing or codifying some of these processes accomplish the same as peer review, or more? What is the role of the curator or data scientist in all of this? Given the authors’ backgrounds, I was surprised this role was not emphasized more. Finally, I think it is a mistake for science review to be the main way to assess reuse value. It has been shown time and again that data end up being used effectively (and valued) in ways that the original experts never envisioned or even thought valid.

The discussion of data citation was good and captured the state of the art well, but again I would have liked to see some views on a way forward. Have we solved the basic problem and are now just dealing with edge cases? Is the “just-in-time identifier” the way to go? What are the implications? Will the more basic solutions work in the interim? More critically, are we overemphasizing the role of citation to provide academic credit? I was gratified that the authors referenced the Parsons and Fox paper, which questions the whole data publication metaphor, but I was surprised that they only discussed the “data as software” alternative metaphor. That is a useful metaphor, but I think the ecosystem metaphor has broader acceptance. I mention this because the authors critique the software metaphor because “using it to alter or affect the academic reward system is a tricky prospect”. Yet there is little to suggest that data publication and corresponding citation alter that system either. Indeed, there is little, if any, evidence that data publication and citation incentivize data sharing or stewardship. As Christine Borgman suggests, we need to look more closely at who we are trying to incentivize to do what. There is no reason to assume it follows the same model as research literature publication. It may be beyond the scope of this paper to fully examine incentive structures, but it at least needs to be acknowledged that building on the current model doesn’t seem to be working.

Finally, what is the takeaway message from this essay? It ends rather abruptly with no summary, no suggested directions or immediate challenges to overcome, no call to action, no indications of things we should stop trying, and only brief mention of alternative perspectives. What do the authors want us to take away from this paper?

Overall though, this is a timely and needed essay. It is well researched and nicely written with rich metaphor. With modifications addressing the detailed comments below and better recognizing the complexity of the current data publication landscape, this will be a worthwhile review paper. With more significant modification where the authors dig deeper into the complexities and controversies and truly grapple with their implications to suggest a way forward, this could be a very influential paper. It is possible that the definitions of “publication” and “peer-review” need not be just stretched but changed or even rejected.

  • The whole paper needs a quick copy edit. There are a few typos, missing words, and wrong verb tenses (e.g., “NSICD” instead of “NSIDC”). Note that the word “data” is a plural noun, e.g., “Data are not software, nor are they literature.”
  • Page 2, para 2: “citability is addressed by assigning a PID.” This is not true, as the authors discuss on page 4, para 4. Indeed, page 4, para 4 seems to contradict itself. Citation is more than a locator/identifier.
  • In the discussion of “Data independent of any paper” it is worth noting that there may often be linkages between these data and myriad papers. Indeed, a looser concept of a data paper has existed for some time, where researchers request a citation to a paper even though the paper is not the data and does not fully describe the data (e.g., the CRU temperature records).
  • Page 4, para 1: I’m not sure it’s entirely true that published data cannot involve requesting permission. In past work with Indigenous knowledge holders, the knowledge holders were willing to publish summary data and then provide the details once satisfied that the use was appropriate and not exploitive. I think those data were “published” as best they could be. A nit, perhaps, but it highlights that there are few, if any, hard-and-fast rules about data publication.
  • Page 4, para 2: You may also want to mention the WDS certification effort, which is combining with the DSA via an RDA Working Group.
  • Page 4, para 2: The Joint Declaration of Data Citation Principles involved many more organizations than Force11, CODATA, and DCC. Please credit them all (maybe in a footnote). The glory of it was that it was truly a joint effort across many groups. There is no leader. Force11 was primarily a convener.
  • Page 4, para 6: The deep citation approach recommended by ESIP is not just to list variables or a range of data. It is to identify a “structural index” for the data and to use this to reference subsets. In Earth science this structural index is often space and time, but many other indices are possible: location in a gene sequence, file type, variable, bandwidth, viewing angle, etc. It is not just for “straightforward” data sets.
  • Page 5, para 5: I take issue with the statement that few repositories provide scientific review. I can think of a couple dozen that do just off the top of my head, and I bet most domain repositories have some level of science review. The “scientists” may not always be in house, but the repository is a team facilitator. See my general comments.
  • Page 5, para 10: The PDS system is only unusual in that it is well documented and advertised. As mentioned, this team style approach is actually fairly common.
  • Page 6, para 3: Parsons and Fox don’t just argue that the data publication metaphor is limiting. They also say it is misleading. That should be acknowledged at least, if not actively grappled with.
  • Artifact removal: Unfortunately, the authors have not updated the paper with a 2x2 table showing guns and smiles by removed data points. This could dispel the criticism that an asymmetrical expectation bias, which has been shown to exist in similar experiments, is driving a bias leading to inappropriate conclusions. This is my strongest criticism of the paper and should be easily addressed, as per my previous review comment. The fact that this simple data presentation was not performed to remove a clear potential source of spurious results is disappointing.
  • The authors have added 95% CIs to figures S1 and S2. This clarifies the scope for expectation bias in these data. The added error bars are consistent with the authors’ assumption of a linear trend, indicating that the effect of sequences of either guns or smiles may not skew the results. Equally, there could be a downwards or upwards trend fitting within the confidence intervals that would be indicative of a cognitive bias, violating the authors’ assumptions and leading to spurious results. One way to remove these doubts would be to stratify the analyses by the length of sequences of identical symbols. If the results hold up in each of the strata, this potential bias could be shown to be absent from the data. If the bias is strong, particularly in longer runs, this could indicate that the positive result was due to a small number of longer identical runs combined with a cognitive bias, rather than an ability to predict future events.

Chamberlain and Szöcs present the taxize R package, a set of functions that provides interfaces to several web tools and databases, and simplifies the process of checking, updating, correcting and manipulating taxon names for researchers working with ecological/biological data. A key feature that is repeated throughout is the need for reproducibility of science workflows and taxize provides a means to achieve this within the R software ecosystem for taxonomic search.

The manuscript is well-written and nicely presented, with a good balance of descriptive text and discourse and practical illustration of package usage. A number of examples illustrate the scope of the package, something that is fully expanded upon in the two appendices, which are a welcome addition to the paper.

As to the package, I am not overly fond of long function names; the authors should consider dropping the data source abbreviations from the function names in a future update/revision of the package. Likewise there is some inconsistency in the naming conventions used. For example there is the ’tpl_search()’ function to search The Plant List, but the equivalent function to search uBio is ’ubio_namebank()’. Whilst this may reflect specific aspects of terminology in use at the respective data stores, it does not help the user gain familiarity with the package by having them remember inconsistent function names.

One advantage of taxize is that it draws together a rich selection of data stores to query. A further suggestion for a future update would be to add generic function names that apply to a database connection/information object. The latter would describe the resource the user wants to search and any other required information, such as the API key, for example:
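
(The sketch below is purely illustrative; a ‘ubio()’ constructor of this kind, and its arguments, are hypothetical and not part of the current package.)

    # hypothetical constructor: bundles the data store to query with any
    # required credentials into a single connection/information object
    foo <- ubio(api_key = "your-uBio-API-key")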

The user function to search would then be ’search(foo, "Abies")’. Similar generically named functions would provide the primary user-interface, thus promoting a more consistent toolbox at the R level. This will become increasingly relevant as the scope of taxize increases through the addition of new data stores that the package can access.

In terms of presentation in the paper, I really don’t like the way the R code inputs merge with the R outputs. I know the author of knitr doesn’t like output being polluted by the R prompt, but I do find it difficult to parse the inputs/outputs you show because often there is no space between them, and users not familiar with R will have greater difficulties than I. Consider adding more conventional indications of R outputs, or physically separate input from output by breaking up the chunks of code to have whitespace between the grey-background chunks. Relatedly, in one location I noticed something amiss with the layout: in the first code block at the top of page 5, the printed output looks wrong. I would expect the attributes to print on their own line and the data in the attribute to also be on its own separate line.
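
One possible way to do this within knitr itself (offered only as a suggestion; other conventions would work just as well) is to set chunk options so that input lines carry the usual R prompt and output lines carry a comment prefix:

    # set once near the top of the knitr document
    knitr::opts_chunk$set(
      prompt  = TRUE,  # show the usual "> "/"+ " prompts on input lines
      comment = "##"   # prefix output lines so they are visually distinct
    )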

Note also, the inconsistency in the naming of the output object columns. For example, in the two code chunks shown in column 1 of page 4, the first block has an object printed with column names ’matched_name’ and ’data_source_title’, whilst camelCase is used in the outputs shown in the second block. As the package is revised and developed, consider this and other aspects of providing a consistent presentation to the user.

I was a little confused about the example in the section Resolve Taxonomic Names on page 4. Should the taxon name be “Helianthus annuus” or “Helianthus annus” ? In the ‘mynames’ definition you include ‘Helianthus annuus’ in the character vector but the output shown suggests that the submitted name was ‘Helianthus annus’ (1 “u”) in rows with rownames 9 and 10 in the output shown.

Other than that there were the following minor observations:

  • Abstract: replace “easy” with “simple” in “...fashion that’s easy...” , and move the details about availability and the URI to the end of the sentence.
  • Page 2, Column 1, Paragraph 2: You have “In addition, there is no one authoritative taxonomic names source...” , which is a little clumsy to read. How about “In addition, there is no one authoritative source of taxonomic names... ” ?
  • Pg 2, C1, P2-3: The abbreviated data sources are presented first (in paragraph 2) and subsequently defined (in para 3). Restructure this so that the abbreviated forms are explained upon first usage.
  • Pg 2, C2, P2: Most R packages are “in development” so I would drop the qualifier and reword the opening sentence of the paragraph.
  • Pg 2, C2, P6: Changing “and more can easily be added” to “and more can be easily added” seems to flow better.
  • Pg 5, paragraph above Figure 1: You refer to converting the object to an **ape** *phylo* object and then repeat essentially the same information in the next sentence. Remove the repetition.
  • Pg 6, C1: The header may be better as “Which taxa are children of the taxon of interest” .
  • Pg 6: In the section “IUCN status”, the term “we” is used to refer to both the authors and the user. This is confusing. Reserve “we” for reference to the authors and use something else (“a user” perhaps) for the other instances. Check this throughout the entire manuscript.
  • Pg 6, C2: in the paragraph immediately below the ‘grep()’ for “RAG1”, two consecutive sentences begin with “However”.
  • Pg 7: The first sentence of “Aggregating data....” reads “In biology, one can asks questions...” . It should be “one asks” or “one can ask” .
  • Pg 7, Conclusions: The first sentence reads “information is increasingly sought out by biologists” . I would drop “out” as “sought” is sufficient on its own.
  • Appendices: Should the two figures in the Appendices have a different reference to differentiate them from Figure 1 in the main body of the paper? As it stands, the paper has two Figure 1s, one on page 5 and a second on page 12 in the Appendix.
  • On Appendix Figure 2: The individual points are a little large. Consider reducing the plotting character size. I appreciate the effect you were going for with the transparency indicating density of observation through overplotting, but the effect is weakened by the size of the individual points.
  • Should the phylogenetic trees have some scale to them? I presume the height of the stems is an indication of phylogenetic distance but the figure is hard to calibrate without an associated scale. A quick look at Paradis (2012) Analysis of Phylogenetics and Evolution with R would suggest however that a scale is not consistently applied to these trees. I am happy to be guided by the authors as they will be more familiar with the conventions than I.

Hydbring and Badalian-Very summarize in this review the current status of potential clinical applications based on miRNA biology. The article gives an interesting historical and scientific perspective on a field that has only recently boomed, focusing mostly on the two main products in the pipeline of several biotech companies (in Europe and the USA) that work with miRNA-based agents for disease diagnostics and therapeutics. Interestingly, not only are the specific agents being produced mentioned, but clever insights into the important cellular pathways regulated by key miRNAs are also briefly discussed.

Minor points to consider in subsequent versions:

  • Page 2; paragraph ‘Genomic location and transcription of microRNAs’ : the concept of miRNA clusters and precursors could be a bit better explained.
  • Page 2; paragraph ‘Genomic location and transcription of microRNAs’ : when discussing the paper by the laboratory of Richard Young (reference 16); I think it is important to mention that that particular study refers to stem cells.
  • Page 2; paragraph ‘Processing of microRNAs’ : “Argonate” should be replaced by “Argonaute”.
  • Page 3; paragraph ‘MicroRNAs in disease diagnostics’ : are miR-15a and 16-1 two different miRNAs? I suggest mentioning them as: miR-15a and miR-16-1 and not using a slash sign (/) between them.
  • Page 4; paragraph ‘Circulating microRNAs’ : I am a bit bothered by the description of multiple sclerosis (MS) only as an autoimmune disease. Without being an expert in the field, I believe that there are other hypotheses related to the etiology of MS.
  • Page 5; paragraph ‘Clinical microRNA diagnostics’ : Does ‘hsa’ in hsa-miR-205 mean something?
  • Page 5; paragraph ‘Clinical microRNA diagnostics’ : the authors mention the company Asuragen, Austin, TX, USA but they do not really say anything about their products. I suggest either removing the reference to that company or including their current pipeline efforts.
  • Page 6; paragraph ‘MicroRNAs in therapeutics’ : in the first paragraph the authors suggest that miRNA-based therapeutics should be able to be applied with “minimal side-effects”. Since one miRNA can affect a whole gene program, I found this a bit counterintuitive; I was wondering if any data have been published to support that statement. Also, in the same paragraph, the authors compare miRNAs to protein inhibitors, which are described as more specific and/or selective. I think there are now good reasons to think that protein inhibitors are not always that specific and/or selective, and that such a property could actually be important for their evidenced therapeutic effects.
  • Page 6; paragraph ‘MicroRNAs in therapeutics’ : I think the concept of “antagomir” is an important one and could be better highlighted in the text.
  • Throughout the text (pages 3, 5, 6, and 7): I am a bit bothered by separating the word “miRNA” or “miRNAs” at the end of a sentence in the following way: “miR-NA” or “miR-NAs”. It is a bit confusing considering the particular nomenclature used for miRNAs. That was probably done during the formatting and editing step of the paper.
  • I was wondering if the authors could develop a bit more the general concept that, in disease (and in particular in cancer), the expression and levels of miRNAs are generally downregulated. Maybe some papers have been published about this phenomenon?

The authors describe their attempt to reproduce a study in which it was claimed that mild acid treatment was sufficient to reprogramme postnatal splenocytes from a mouse expressing GFP in the oct4 locus to pluripotent stem cells. The authors followed a protocol that has recently become available as a technical update of the original publication.

They report obtaining no GFP-expressing pluripotent stem cells over the same time period of several days described in the original publication. They describe observing some green fluorescence that they attributed to autofluorescence rather than GFP, since it coincided with PI-positive dead cells. They confirmed the absence of oct4 expression by RT-PCR and also found no evidence for Nanog or Sox2, which are also markers of pluripotent stem cells.

The paper appears to be an authentic attempt to reproduce the original study, although the study might have had additional value with more controls: “failure to reproduce” studies need to be particularly well controlled.

Examples that could have been valuable to include are:

  • For the claim of autofluorescence: the emission spectrum of the samples would likely have shown a broad spectrum not coincident with that of GFP.
  • The reprogramming efficiency of postnatal mouse splenocytes using more conventional methods in the hands of the authors would have been useful as a comparison. The same applies to the lung fibroblasts.
  • There are no positive control samples (conventional mESC or miPSC) in the qPCR experiments for pluripotency markers. This would have indicated the biological sensitivity of the assay.
  • Although perhaps a sensitive issue, it might have been helpful if the authors had been able to obtain samples of cells (or their mRNA) from the original authors for simultaneous analysis.

In summary, this is a useful study as it is citable and confirms previous blog reports, but it could have been improved by more controls.

The article is well written, treats an actual problem (the risk of development of valvulopathy after long-term cabergoline treatment in patients with macroprolactinoma) and provides evidence about the reversibility of valvular changes after timely discontinuation of DA treatment.

Title and abstract: The title is appropriate for the content of the article. The abstract is concise and accurately summarizes the essential information of the paper, although it would be better if the authors defined more precisely the anatomic specificity of the valvulopathy (mild mitral regurgitation).

Case report: The clinical case presentation is comprehensive and detailed but there are some minor points that should be clarified:

  • Please clarify the prolactin levels at diagnosis. In the Presentation section (line 3) “At presentation, prolactin level was found to be greater than 1000 ng/ml on diluted testing” but in the section describing the laboratory evaluation at diagnosis (line 7) “Prolactin level was 55 ng/ml”. Was the difference due to so called “hook effect”?
  • Figure 1: In the text the follow-up MR imaging is indicated to be “after 10 months of cabergoline treatment” . However, the figures 1C and 1D represent 2 years post-treatment MR images. Please clarify.
  • Figure 2: Echocardiograms 2A and 2B are defined as baseline but actually they correspond to the follow-up echocardiographic assessment at the 4th year of cabergoline treatment. Did the patient undergo a baseline (prior to dopamine agonist treatment) echocardiographic evaluation? If he did not, it should be mentioned as study limitation in the Discussion section.
  • The mitral valve thickness was mentioned to be normal. Did the echographic examination visualize increased echogenicity (hyperechogenicity) of the mitral cusps?
  • How could you explain the decrease of LV ejection fraction (from 60-65% to 50-55%) after switching from cabergoline to bromocriptine treatment and respectively its increase to 62% after doubling the bromocriptine daily dose? Was LV function estimated always by the same method during the follow-up?
  • Final paragraph: The authors conclude that early discontinuation and management with bromocriptine may be effective in reversing cardiac valvular dysfunction. Even so, regular echocardiographic follow-up should be considered in patients who are expected to be on long-term, high-dose bromocriptine treatment, given its partial 5-HT2b agonist activity.

This is an interesting topic: as the authors note, the way that communicators imagine their audiences will shape their output in significant ways. And I enjoyed what clearly has the potential to be a very rich data set. But I have some reservations about the adequacy of that data set, as it currently stands, given the claims the authors make; the relevance of the analytical framework(s) they draw upon; and the extent to which their analysis has offered significant new insights ‐ by which I mean, I would be keen to see the authors push their discussion further. My suggestions are essentially that they extend the data set they are working with, to ensure that their analysis is both rigorous and generalisable, and re-consider the analytical frame they use. I will make some more concrete comments below.

With regard to the data: my feeling is that 14 interviews is a rather slim data set, and that this is heightened by the fact that they were all carried out in a single location, and recruited via snowball sampling and personal contacts. What efforts have the authors made to ensure that they are not speaking to a single, small, sub-community in the much wider category of science communicators? ‐ a case study, if you like, of a particular group of science communicators in North Carolina? In addition, though the authors reference grounded theory as a method for analysis, I got little sense of the data reaching saturation. The reliance on one-off quotes, and on the stories and interests of particular individuals, left me unsure as to how representative interview extracts were. I would therefore recommend either that the data set is extended by carrying out more interviews, in a wider variety of locations (e.g. other sites in the US), or that it is redeveloped as a case study of a particular local professional community. (Which would open up some fascinating questions ‐ how many of these people know each other? What spaces, online or offline, do they interact in, and do they share knowledge, for instance about their audiences? Are there certain touchstone events or publics they communally make reference to?)

As a more minor point with regard to the data set and what the authors want it to do, there were some inconsistencies as to how the study was framed. On p.2 they variously describe the purpose as to “understand the experiences and perspectives of science communicators” and the goals as identifying “the basic interests and value orientations attributed to lay audiences by science communicators”. Later, on p.5, they note that the “research is inductive and seeks to build theory rather than generalizable claims”, while in the Discussion they talk again about having identified communicators’ “personal motivations” (p.12). There are a number of questions left hanging: is the purpose to understand communicator experiences ‐ in which case why focus on perceptions of audiences? Where is theory being built, and in what ways can this be mobilised in future work? The way that the study is framed and argued as a whole needs, I would suggest, to be clarified.

Relatedly, my sense is that some of this confusion is derived from what I find a rather busy analytical framework. I was not convinced of the value of combining inductive and deductive coding: if the ‘human value typology’ the authors use is ‘universal’, then what is added by open coding? Or, alternatively, why let their open coding, and their findings from this, be constrained by an additional, rather rigid, framework? The addition of the considerable literature on news values to the mix makes the discussion more confusing again. I would suggest that the authors either make much more clear the value of combining these different approaches ‐ building new theory outlining how they relate, and can be jointly mobilised in practice ‐ or fix on one. (My preference would be to focus on the findings from the open coding ‐ but that reflects my own disciplinary biases.)

A more minor analytical point: the authors note that their interviewees come from slightly different professions, and communicate through different formats, have different levels of experience, and different educational backgrounds ‐ but as far as I can see there is no comparative analysis based on this. Were there noticeable differences in the interview talk based on these categorisations? Or was the data set too small to identify any potential contrasts or themes? A note explaining this would be useful.

My final point has reference to the potential that this data set has, particularly if it is extended and developed. I would like to encourage the authors to take their analysis further: at the moment, I was not particularly surprised by the ways in which the communicators referenced news values or imagined their audiences. But it seems to me that the analytical work is not yet complete. What does it mean that communicators imagine audience values and preferences in the way that they do ‐ who is included and excluded by these imaginations? One experiment might be to consider what ‘ideal type’ publics are created in the communicators’ talk. What are the characteristics of the audiences constructed in the interviews and ‐ presumably ‐ in the communicative products of interviewees? What would these people look like? There are also some tantalizing hints in the Discussion that are not really discussed in the Findings ‐ of, for instance, the way in which communicators’ personal motivations may combine with their perceptions of audiences to shape their products. How does this happen? These are, of course, suggestions. But my wider point is that the authors need to show more clearly what is original and useful in their findings ‐ what it is, exactly, that will be important to other scholars in the field.

I hope my comments make sense ‐ please do not hesitate to contact me if not.

This is an interesting article and piece of software. I think it contributes towards further alternatives to easily visualize high dimensionality data on the web. It’s simple and easy to embed into other web frameworks or applications.

a) About the software

  • CSV format. It was hard to guess the expected format. The authors need to add a syntax description of the CSV format to the help page.
  • Simple HTML example. It would be easier to test HeatmapViewer (HmV) if you added a simple downloadable example file with the minimum required HTML-JavaScript to set up a HmV (without all the CSV import code).
  • Color scale. HmV only implements a simple three-point linear color scale. For me this is the major weakness of HmV. It would be very convenient if, in the next HmV release, the user could pass as a parameter a function that manages the score-to-color conversion.

b) About the paper

There are already a number of tools for generating and displaying heat maps, for example:

  • http://www.broadinstitute.org/gsea (desktop)
  • http://jheatmap.github.io/jheatmap/ (website)
  • http://www.gitools.org/ (desktop)
  • http://blog.nextgenetics.net/demo/entry0044/ (website)
  • http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html (python)
  • http://matplotlib.org/api/pyplot_api.html (python)
  • Predicted protein mutability landscape: The authors say: “Without using a tool such as the HeatmapViewer, we could hardly obtain an overview of the protein mutability landscape”. This paragraph seems to suggest that you can explore the data with HmV. I think that HmV is a good tool to report your data, but not to explore it.
  • Conclusions: The authors say: “... provides a new, powerful way to generate and display matrix data in web presentations and in publications.” To use heat maps in web presentations and publications is nothing new. I think that HmV makes it easier and user-friendly, but it’s not new.

This article addresses the links between habitat condition and an endangered bird species in an important forest reserve (ASF) in eastern Kenya. It addresses an important topic, especially given ongoing anthropogenic pressures on this and similar types of forest reserves in eastern Kenya and throughout the tropics. Despite the rather small temporal and spatial extent of the study, it should make an important contribution to bird and forest conservation. There are a number of issues with the methods and analysis that need to be clarified/addressed, however; furthermore, some of the conclusions overreach the data collected, while other important results are given less emphasis than they warrant. Below are more specific comments by section:

The conclusion that human-driven tree removal is an important contributor to the degradation of ASF is reasonable given the data reported in the article. Elephant damage, while likely a very big contributor to habitat modification in ASF, was not the focus of the study (the authors state clearly in the Discussion that elephant damage was not systematically quantified, and thus no data were analyzed) ‐ and thus should only be mentioned in passing here ‐ if at all.

More information about the life history ecology of A. sokokensis would provide welcome context here. A bit more detail about breeding sites as well as dispersal behavior etc. would be helpful – and especially why these and other aspects render the Pipit a good indicator species/proxy for habitat condition. This could be revisited in the Discussion as links are made between habitat conditions and occurrence of the bird (where you discuss the underlying mechanisms for why it thrives in some parts of ASF and not others, and why its abundance correlates strongly with some types of disturbance and not others). Again, you reference other studies that have explored other species in ASF and forest disturbance, but do not really explicitly state why the Pipit is a particularly important indicator of forest condition.

  • Bird Survey: As described, all sightings and calls were recorded and incorporated into distance analysis – but it is not clear here whether or not distances to both auditory and visual encounters were measured the same way (i.e., with the rangefinder). Please clarify.
  • Floor litter sampling: It is not clear here whether litter cover was recorded as a continuous variable (percentage) or a categorical one. If categorical, please describe the percentage “categories” used.
  • Mean litter depth graph (Figure 2) and accompanying text reports the means and sd but no post-hoc comparison test (e.g. Tukey HSD) – need to report the stats on which differences were/were not significant.
  • Figure 3 – you indicate litter depth was a better predictor of bird abundance than litter cover, but the r-squared is higher for litter cover. This needs to be clarified (and also indicate why you chose only to show depth values in Figure 3).
  • The linear equation can be put in Figure 3 caption (not necessary to include in text).
  • Figure 4 – stats aren’t presented here; also, the caption states that tree loss and leaf litter are inversely correlated – this might be taken to mean, given discussion (below) about pruning, that there could be a poaching threshold below which poaching may pay dividends to Pipits (and above which Pipits are negatively affected). This warrants further exploration/elaboration.
  • The pruning result is arguably the most important one here – this suggests an intriguing trade-off between poaching and bird conservation (in particular, the suggestion that pruning by poachers may bolster Pipit populations – or at the very least mitigate against other aspects of habitat degradation). Worth highlighting this more in Discussion.
  • Last sentence on p. 7 suggests causality (“That is because…”) – but your data only support correlation (one can imagine that there may have been other extrinsic or intrinsic drivers of population decline).
  • P. 8: discussion of classification of habitat types in ASF is certainly interesting, but could be made much more succinct in keeping with focus of this paper.
  • P. 9, top: first paragraph could be expanded – as noted before, the tradeoff between poaching/pruning and Pipit abundance is worth exploring in more depth. Could your results be taken as a prescription for understory pruning as a conservation tool for the Sokoke Pipit or other threatened species? More detail here would be welcome (and also in the Conclusion); the subsequent paragraph about Pipit foraging behavior and its specific relationship to understory vegetation at varying heights could be incorporated into this discussion. Is there any info about optimal perch height for foraging or for flying through the understory? Linking to results of other studies in ASF, is there potential for positive correlations with optimal habitat conditions for the other important bird species in ASF, in order to make more general conclusions about management?

Bierbach and co-authors investigated the topic of the evolution of the audience effect in livebearing fishes, by applying a comparative method. They specifically focused on the hypothesis that sperm competition risk (SCR), arising from male mate choice copying, and avoidance of aggressive interactions play a key role in driving the evolution of audience-induced changes in male mate choice behavior. The authors found support for their hypothesis of an influence of SCR on the evolution of deceptive behavior, as their findings at the species level showed a positive correlation between mean sexual activity and the occurrence of deceptive behavior. Moreover, they found a positive correlation between mean aggressiveness and sexual activity, but they did not detect a relationship between aggressiveness and audience effects.

The manuscript is certainly well written and attractive, but I have some major concerns about the data analyses that prevent me from endorsing its acceptance at the present stage.

I see three main problems with the statistics that could have led to potentially wrong results and, thus, to completely misleading conclusions.

  • First of all, the authors cannot run an ANCOVA in which there is a significant interaction between the factor and the covariate (Table 2a). Indeed, when the assumption of common slopes is violated (as in their case), all other significant terms are meaningless. They might want to consider alternative statistical procedures, e.g. the Johnson-Neyman method.
  • Second, the authors cannot retain a non-significant interaction term in the model, as this may affect the estimates for the factors (Table 2d). They need to remove the species x treatment interaction (as they did for other non-significant terms; see the top left of the same page 7).
  • The third problem I see regards all the GLMs in which species are compared. The authors entered ‘species’ as a fixed factor when species is clearly a random factor. Entering species as a fixed factor has the effect of badly inflating the denominator degrees of freedom, making the authors’ conclusions far too permissive. They should, instead, use mixed LMs in which species is the random factor. They should also take care that the degrees of freedom are approximately equal to the number of species (not the number of trials). To do so, they can enter the interaction between treatment and species as a random factor (see the sketch after this list).
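
A minimal sketch of the suggested model structure, using the lme4 package (the variable names here are placeholders, not the authors’ actual column names):

    library(lme4)

    # species and the species-by-treatment interaction enter as random factors,
    # so inference on the fixed treatment effect is made against between-species
    # variation rather than against the number of trials
    m <- lmer(response ~ treatment + (1 | species) + (1 | species:treatment),
              data = dat)
    summary(m)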

Data need to be re-analyzed relying on the proper statistical procedures to confirm results and conclusions.

A more theoretical objection to the authors’ interpretation of results (supposing that the results are confirmed by the new analyses) could emerge from the idea that male success in mating with the preferred female may reduce the probability of the female’s immediate re-mating, and thus reduce the risk of sperm competition in the short term. As a consequence, it may not be beneficial to significantly increase the risk of losing a high-quality, inseminated female for a cost that will not be paid with certainty. The authors might want to consider this for discussion as well.

Lastly, I think that the scenario generated from comparative studies at the species level may be explained by phylogenetic factors other than sexual selection. Only the inclusion of phylogeny in the analyses, which allows one to account for the shared history among species, can lead to unequivocal adaptive explanations for the observed patterns. I see the difficulty in doing this with few species, as is the case in the present study, but I would suggest that the authors also consider this future perspective. Moreover, a phylogenetic comparative study would be aided by the recent development of a well-resolved phylogenetic tree for the genus Poecilia (Meredith 2011).

Page 3: the authors should specify that part of the data on male aggressiveness (3 species from Table 1) also come from previous studies, as they do for the data on deceptive male mating behavior.

Page 5: since the data on mate choice come from other studies, is it necessary to report a detailed description of the methods for this section? Maybe the authors could refer to the already published methods and give only a brief additional description.

Page 6: how do the authors explain the complete absence of aggressive displays between the focal male and the audience male during the mate choice experiments? This seems curious considering that, in all the examined species, aggressive behaviors and dominance establishment are always observed during dyadic encounters.

In their response to my previous comments, the authors have clarified that only the data from the “Experimental phase” were used to calculate prediction accuracy. However, if I now understand the analysis procedure correctly, there are serious concerns with the approach adopted.

First, let me state what I now understand the analysis procedure to be:

  • For each subject the PD values across the 20 trials were converted to z-scores.
  • For each stimulus, the mean z-score was calculated.
  • The sign of the mean z-score for each stimulus was used to make predictions.
  • For each of the 20 trials, if the sign of the z-score on that trial was the same as for the mean z-score for that stimulus, a hit (correct prediction) was assigned. In contrast, if the sign of the z-score on that trial was the opposite as for the mean z-score for that stimulus, a miss (incorrect prediction) was assigned.
  • For each stimulus the total hits and misses were calculated.
  • Average hits (correct prediction) for each stimulus was calculated across subjects.

If this is a correct description of the procedure, the problem is that the same data were used to determine the sign of the z-score that would be associated with a correct prediction and to determine the actual correct predictions. This will effectively guarantee a correct prediction rate above chance.
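
As a purely illustrative sketch (in R, with assumed trial counts and stimulus labels, and independent of the analysis scripts actually used in the study), the described procedure applied to pure noise looks like this:

    # illustrative only: random "pupil" data pushed through the procedure above
    set.seed(1)
    n_subj <- 100; n_trials <- 20

    one_subject <- function() {
      stim <- sample(rep(c("A", "B"), each = n_trials / 2))  # two stimulus types
      z    <- as.numeric(scale(rnorm(n_trials)))             # z-scored pupil values
      hits <- vapply(seq_len(n_trials), function(i) {
        sign(z[i]) == sign(mean(z[stim == stim[i]]))         # trial i helps set its own "prediction"
      }, logical(1))
      mean(hits)
    }

    mean(replicate(n_subj, one_subject()))  # above 0.5 despite purely random data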

To check if this is true, I quickly generated random data and used the analysis procedure as laid out above (see MATLAB code below). Across 10,000 iterations of 100 random subjects, the average “prediction” accuracy was ~57% for each stimulus (standard deviation, 1.1%), remarkably similar to the values reported by the authors in their two studies. In this simulation, I assumed that all subjects contributed 20 trials, but in the actual data analyzed in the study, some subjects contributed fewer than 20 trials due to artifacts in the pupil measurements.

If the above description of the analysis procedure is correct, then I think the authors have provided no evidence to support pupil dilation prediction of random events, with the results reflecting circularity in the analysis procedure.

However, if the above description of the procedure is incorrect, the authors need to clarify exactly what the analysis procedure was, perhaps by providing their analysis scripts.

I think this paper is excellent and an important addition to the literature. I really like the conceptualization of a self-replicating cycle, as it illustrates the concept that the “problem” starts with the neuron: due to one or more of a variety of insults, the neuron is negatively impacted and releases H1, which in turn activates microglia with overexpression of cytokines that may, when limited, foster repair, but that, when activation becomes chronic (as is demonstrated here with the potential of cyclic H1 release), facilitate neurotoxicity. I hope the authors intend to measure cytokine expression soon, especially IL-1 and TNF in both astrocytes and microglia, and S100B in astrocytes.

In more detail, Gilthorpe and colleagues provide novel experimental data that demonstrate a new role for a specific histone protein—the linker histone, H1—in neurodegeneration. This study, which was originally designed to identify axonal chemorepellents, actually revealed a previously unknown role for H1, as well as other novel and thought-provoking results. Fortuitously, as sometimes happens, the authors had a pleasant surprise: their results set some old dogmas on their respective ears and opened up new avenues of approach for studying the role of histones in the self-amplification of neurodegenerative cycles. In point of fact, they show that H1 is not just a nice little partner of nuclear DNA, as previously thought. H1 is released from ‘damaged’ (or leaky) neurons, kills adjacent healthy neurons, and promotes a proinflammatory profile in both microglia and astrocytes.

Interestingly, the authors’ conceptualization of a damaged neuron → H1 release → healthy neuron killing cycle does not take into account the H1-mediated proinflammatory glial response. This facet of the study opens for these investigators a new avenue they may wish to follow: the role of H1 in the stimulation of neuroinflammation with overexpression of cytokines. This is interesting, as neuronal injury has been shown to set in motion an acute phase response that activates glia and increases their expression of cytokines (interleukin-1 and S100B), which, in turn, induce neurons to produce excess Alzheimer-related proteins such as βAPP and ApoE (favoring formation of mature Aβ/ApoE plaques), activated MAPK-p38 and hyperphosphorylated tau (favoring formation of neurofibrillary tangles), and α-synuclein (favoring formation of Lewy bodies). To date, the neuronal response shown to be responsible for stimulating glia is the neuronal stress-related release of sAPP, but these H1 results from Gilthorpe and colleagues may contribute to or exacerbate the role of sAPP.

Peer review templates, expert examples and free training courses

Joanna Wilkinson

Learning how to write a constructive peer review is an essential step in helping to safeguard the quality and integrity of published literature. Read on for resources that will get you on the right track, including peer review templates, example reports and the Web of Science™ Academy: our free, online course that teaches you the core competencies of peer review through practical experience (try it today).

How to write a peer review

Understanding the principles, forms and functions of peer review will enable you to write solid, actionable review reports. It will form the basis for a comprehensive and well-structured review, and help you comment on the quality, rigor and significance of the research paper. It will also help you identify potential breaches of normal ethical practice.

This may sound daunting but it doesn’t need to be. There are plenty of peer review templates, resources and experts out there to help you, including:

  • Peer review training courses and in-person workshops
  • Peer review templates (found in our Web of Science Academy)
  • Expert examples of peer review reports
  • Co-reviewing (sharing the task of peer reviewing with a senior researcher)
  • Other peer review resources, blogs, and guidelines

We’ll go through each one of these in turn below, but first: a quick word on why learning peer review is so important.

Why learn to peer review?

Peer reviewers and editors are gatekeepers of the research literature used to document and communicate human discovery. Reviewers, therefore, need a sound understanding of their role and obligations to ensure the integrity of this process. This also helps them maintain the quality of research and protect the public from flawed and misleading research findings.

Learning to peer review is also an important step in improving your own professional development.

You’ll become a better writer and a more successful published author in learning to review. It gives you a critical vantage point and you’ll begin to understand what editors are looking for. It will also help you keep abreast of new research and best-practice methods in your field.

We strongly encourage you to learn the core concepts of peer review by joining a course or workshop. You can attend in-person workshops to learn from and network with experienced reviewers and editors. As an example, Sense about Science offers peer review workshops every year. To learn more about what might be in store at one of these, researcher Laura Chatland shares her experience at one of the workshops in London.

There are also plenty of free, online courses available, including courses in the Web of Science Academy such as ‘Reviewing in the Sciences’, ‘Reviewing in the Humanities’ and ‘An introduction to peer review’.

The Web of Science Academy also supports co-reviewing with a mentor to teach peer review through practical experience. You learn by writing reviews of preprints, published papers, or even ‘real’ unpublished manuscripts with guidance from your mentor. You can work with one of our community mentors or your own PhD supervisor or postdoc advisor, or even a senior colleague in your department.

Go to the Web of Science Academy

Peer review templates

Peer review templates are helpful to use as you work your way through a manuscript. As part of our free Web of Science Academy courses, you’ll gain exclusive access to comprehensive guidelines and a peer review report. It offers points to consider for all aspects of the manuscript, including the abstract, methods and results sections. It also teaches you how to structure your review and will get you thinking about the overall strengths and impact of the paper at hand.

  • Web of Science Academy template (requires joining one of the free courses)
  • PLoS’s review template
  • Wiley’s peer review guide (not a template as such, but a thorough guide with questions to consider in the first and second reading of the manuscript)

Beyond following a template, it’s worth asking your editor or checking the journal’s peer review management system. That way, you’ll learn whether you need to follow a formal or specific peer review structure for that particular journal. If no such formal approach exists, try asking the editor for examples of other reviews performed for the journal. This will give you a solid understanding of what they expect from you.

Peer review examples

Understand what a constructive peer review looks like by learning from the experts.

Here’s a sample of pre- and post-publication peer reviews displayed on Web of Science publication records to help guide you through your first few reviews. Some of these are transparent peer reviews, which means the entire process is open and visible — from initial review and response through to revision and final publication decision. You may wish to scroll to the bottom of these pages so you can first read the initial reviews, and make your way up the page to read the editor and author’s responses.

  • Pre-publication peer review: Patterns and mechanisms in instances of endosymbiont-induced parthenogenesis
  • Pre-publication peer review: Can Ciprofloxacin be Used for Precision Treatment of Gonorrhea in Public STD Clinics? Assessment of Ciprofloxacin Susceptibility and an Opportunity for Point-of-Care Testing
  • Transparent peer review: Towards a standard model of musical improvisation
  • Transparent peer review: Complex mosaic of sexual dichromatism and monochromatism in Pacific robins results from both gains and losses of elaborate coloration
  • Post-publication peer review: Brain state monitoring for the future prediction of migraine attacks
  • Web of Science Academy peer review: Students’ Perception on Training in Writing Research Article for Publication

F1000 has also put together a nice list of expert reviewer comments pertaining to the various aspects of a review report.

Co-reviewing

Co-reviewing (sharing peer review assignments with senior researchers) is one of the best ways to learn peer review. It gives researchers a hands-on, practical understanding of the process.

In an article in The Scientist, the team at Future of Research argues that co-reviewing can be a valuable learning experience for peer review, as long as it’s done properly and with transparency. The reason there’s a need to call out how co-reviewing works is that it does have its downsides. The practice can leave early-career researchers unaware of the core concepts of peer review. This can make it hard to later join an editor’s reviewer pool if they haven’t received adequate recognition for their share of the review work. (If you are asked to write a peer review on behalf of a senior colleague or researcher, get recognition for your efforts by asking your senior colleague to verify the collaborative co-review on your Web of Science researcher profile.)

The Web of Science Academy course ‘Co-reviewing with a mentor’ is uniquely practical in this sense. You will gain experience in peer review by practicing on real papers and working with a mentor to get feedback on how your peer review can be improved. Students submit their peer review report as their course assignment and, after internal evaluation, receive a course certificate and an Academy graduate badge on their Web of Science researcher profile, and are put in front of top editors in their field through the Reviewer Locator at Clarivate.

Here are some external peer review resources found around the web:

  • Peer Review Resources from Sense about Science
  • Peer Review: The Nuts and Bolts by Sense about Science
  • How to review journal manuscripts by R. M. Rosenfeld for Otolaryngology – Head and Neck Surgery
  • Ethical guidelines for peer review from COPE
  • An Instructional Guide for Peer Reviewers of Biomedical Manuscripts by Callaham, Schriger & Cooper for Annals of Emergency Medicine (requires Flash or Adobe)
  • EQUATOR Network’s reporting guidelines for health researchers

And finally, we’ve written a number of blogs about handy peer review tips. Check out some of our top picks:

  • How to Write a Peer Review: 12 things you need to know
  • Want To Peer Review? Top 10 Tips To Get Noticed By Editors
  • Review a manuscript like a pro: 6 tips from a Web of Science Academy supervisor
  • How to write a structured reviewer report: 5 tips from an early-career researcher

Want to learn more? Become a master of peer review and connect with top journal editors. The Web of Science Academy – your free online hub of courses designed by expert reviewers, editors and Nobel Prize winners. Find out more today.

The Savvy Scientist

Experiences of a London PhD student and beyond

My Complete Guide to Academic Peer Review: Example Comments & How to Make Paper Revisions

Once you’ve submitted your paper to an academic journal you’re in the nerve-racking position of waiting to hear back about the fate of your work. In this post we’ll cover everything from potential responses you could receive from the editor and example peer review comments through to how to submit revisions.

My first first-author paper was reviewed by five (yes 5!) reviewers and since then I’ve published several others papers, so now I want to share the insights I’ve gained which will hopefully help you out!

This post is part of my series to help with writing and publishing your first academic journal paper. You can find the whole series here: Writing an academic journal paper .

The Peer Review Process

An overview of the academic journal peer review process.

When you submit a paper to a journal, the first thing that will happen is one of the editorial team will do an initial assessment of whether or not the article is of interest. They may decide for a number of reasons that the article isn’t suitable for the journal and may reject the submission before even sending it out to reviewers.

If this happens hopefully they’ll have let you know quickly so that you can move on and make a start targeting a different journal instead.

Handy way to check the status – Sign in to the journal’s submission website and have a look at the status of your journal article online. If you can see that the article is under review then you’ve passed that first hurdle!

When your paper is under peer review, the journal will have set out a framework to help the reviewers assess your work. Generally they’ll be deciding whether the work is to a high enough standard.

Interested in reading about what reviewers are looking for? Check out my post on being a reviewer for the first time: Peer-Reviewing Journal Articles: Should You Do It? Sharing What I Learned From My First Experiences.

Once the reviewers have made their assessments, they’ll return their comments and suggestions to the editor who will then decide how the article should proceed.

How Many People Review Each Paper?

The editor ideally wants a clear decision from the reviewers as to whether the paper should be accepted or rejected. If there is no consensus among the reviewers then the editor may send your paper out to more reviewers to better judge whether or not to accept the paper.

If you’ve got a lot of reviewers on your paper, it isn’t necessarily because the reviewers disagreed about accepting your paper.

You can also end up with lots of reviewers in the following circumstance:

  • The editor asks a certain academic to review the paper but doesn’t get a response from them
  • The editor asks another academic to step in
  • The initial reviewer then responds

Next thing you know your work is being scrutinised by extra pairs of eyes!

As mentioned in the intro, my first paper ended up with five reviewers!

Potential Journal Responses

Assuming that the paper passes the editor’s initial evaluation and is sent out for peer-review, here are the potential decisions you may receive:

  • Reject the paper. Sadly the editor and reviewers decided against publishing your work. Hopefully they’ll have included feedback which you can incorporate into your submission to another journal. I’ve had some rejections and the reviewer comments were genuinely useful.
  • Accept the paper with major revisions. Good news: with some more work your paper could get published. If you make all the changes that the reviewers suggest, and they’re happy with your responses, then it should get accepted. Some people see major revisions as a disappointment but it doesn’t have to be.
  • Accept the paper with minor revisions. This is like getting a major revisions response but better! Generally minor revisions can be addressed quickly and often come down to clarifying things for the reviewers: rewording, addressing minor concerns etc and don’t require any more experiments or analysis. You stand a really good chance of getting the paper published if you’ve been given a minor revisions result.
  • Accept the paper with no revisions. I’m not sure that this ever really happens, but it is potentially possible if the reviewers are already completely happy with your paper!

Keen to know more about academic publishing? My series on publishing is now available as a free eBook, which includes my experiences of being a peer reviewer.


Example Peer Review Comments & Addressing Reviewer Feedback

If your paper has been accepted but requires revisions, the editor will forward to you the comments and concerns that the reviewers raised. You’ll have to address these points so that the reviewers are satisfied your work is of a publishable standard.

It is extremely important to take this stage seriously. If you don’t do a thorough job then the reviewers won’t recommend that your paper is accepted for publication!

You’ll have to put together a resubmission with your co-authors and there are two crucial things you must do:

  • Make revisions to your manuscript based off reviewer comments
  • Reply to the reviewers, telling them the changes you’ve made and potentially changes you’ve not made in instances where you disagree with them. Read on to see some example peer review comments and how I replied!

Before making any changes to your actual paper, I suggest having a thorough read through the reviewer comments.

Once you’ve read through the comments you might be keen to dive straight in and make the changes in your paper. Instead, I actually suggest firstly drafting your reply to the reviewers.

Why start with the reply to reviewers? Because, in a way, it is potentially more important than the changes you’re making in the manuscript itself.

Imagine when a reviewer receives your response to their comments: you want them to be able to read your reply document and be satisfied that their queries have largely been addressed without even having to open the updated draft of your manuscript. If you do a good job with the replies, the reviewers will be better placed to recommend the paper be accepted!

By starting with your reply to the reviewers you’ll also clarify for yourself what changes actually have to be made to the paper.

So let’s now cover how to reply to the reviewers.

1. Replying to Journal Reviewers

It is so important to make sure you do a solid job addressing your reviewers’ feedback in your reply document. If you leave anything unanswered you’re asking for trouble, which in this case means either a rejection or another round of revisions: though some journals only give you one shot! Therefore make sure you’re thorough, not just with making the changes but demonstrating the changes in your replies.

It’s no good putting in the work to revise your paper but not evidence it in your reply to the reviewers!

There may be points the reviewers raise which don’t appear to necessitate changes to your manuscript, but this is rarely the case. Even when a comment or concern relates to something already addressed in the paper, the fact that the reviewer raised it suggests that the relevant section could be clarified or highlighted to ensure that future readers don’t get confused.

How to Reply to Journal Reviewers

Some journals will request a certain format for how you should structure a reply to the reviewers. If so, this should be included in the email you receive from the journal’s editor. If there are no specific requirements, here is what I do:

  • Copy and paste all of the reviewer comments into a document.
  • Separate out each point they raise onto a separate line. Often they’ll already be nicely numbered, but sometimes several separate issues are raised in one block of text. I suggest separating it all out so that each query is addressed separately.
  • Form your reply for each point that they raise. I start by just jotting down notes for roughly how I’ll respond. Once I’m happy with the key message I’ll write it up into a scripted reply.
  • Finally, go through and format it nicely and include line number references for the changes you’ve made in the manuscript.

By the end you’ll have a document that looks something like:

Reviewer 1

Point 1: [Quote the reviewer’s comment]
Response 1: [Address point 1 and say what revisions you’ve made to the paper]

Point 2: [Quote the reviewer’s comment]
Response 2: [Address point 2 and say what revisions you’ve made to the paper]

Then repeat this for all comments by all reviewers!

What To Actually Include In Your Reply To Reviewers

For every single point raised by the reviewers, you should do the following:

  • Address their concern: Do you agree or disagree with the reviewer’s comment? Either way, make your position clear and justify any differences of opinion. If the reviewer wants more clarity on an issue, provide it. It is really important that you actually address their concerns in your reply. Don’t just say “Thanks, we’ve changed the text”. Actually include everything they want to know in your reply. Yes this means you’ll be repeating things between your reply and the revisions to the paper but that’s fine.
  • Reference changes to your manuscript in your reply. Once you’ve answered the reviewer’s question, you must show that you’re actually using this feedback to revise the manuscript. The best way to do this is to refer to where the changes have been made throughout the text. I personally do this by including line references. Make sure you leave this step until the very end, once you’ve finished making all your changes!

Example Peer Review Comments & Author Replies

In order to understand how this works in practice I’d suggest reading through a few real-life example peer review comments and replies.

The good news is that published papers often now include peer-review records, including the reviewer comments and authors’ replies. So here are two feedback examples from my own papers:

Example Peer Review: Paper 1

Quantifying 3D Strain in Scaffold Implants for Regenerative Medicine, J. Clark et al. 2020 – Available here

This paper was reviewed by two academics and was given major revisions. The journal gave us only 10 days to get them done, which was a bit stressful!

  • Reviewer Comments
  • My reply to Reviewer 1
  • My reply to Reviewer 2

One round of reviews wasn’t enough for Reviewer 2…

  • My reply to Reviewer 2 – ROUND 2

Thankfully it was accepted after the second round of review, and the paper actually ended up being selected as one of the journal’s most notable articles, whatever ‘most notable’ means?!

Nice to see our recent paper highlighted as one of the most notable articles, great start to the week! Thanks @Materials_mdpi 😀 #openaccess & available here: https://t.co/AKWLcyUtpC @ICBiomechanics @julianrjones @saman_tavana pic.twitter.com/ciOX2vftVL — Jeff Clark (@savvy_scientist) December 7, 2020

Example Peer Review: Paper 2

Exploratory Full-Field Mechanical Analysis across the Osteochondral Tissue—Biomaterial Interface in an Ovine Model, J. Clark et al. 2020 – Available here

This paper was reviewed by three academics and was given minor revisions.

  • My reply to Reviewer 3

I’m pleased to say it was accepted after the first round of revisions 🙂

Things To Be Aware Of When Replying To Peer Review Comments

  • Generally, try to make a revision to your paper for every comment. No matter what the reviewer’s comment is, you can probably make a change to the paper which will improve your manuscript. For example, if the reviewer seems confused about something, improve the clarity in your paper. If you disagree with the reviewer, include better justification for your choices in the paper. It is far more favourable to take on board the reviewer’s feedback and act on it with actual changes to your draft.
  • Organise your responses. Sometimes journals will request that the reply to each reviewer is sent in a separate document. Unless they ask for it this way, I stick them all together in one document with subheadings, e.g. “Reviewer 1”, “Reviewer 2”, etc.
  • Make sure you address each and every question. If you dodge anything then the reviewer will have a valid reason to reject your resubmission. You don’t need to agree with them on every point but you do need to justify your position.
  • Be courteous. There’s no need to go overboard with compliments, but stay polite: reviewers are providing constructive feedback. I like to add in “We thank the reviewer for their suggestion” every so often, where it genuinely warrants it. Remember that written language doesn’t always carry tone very well, so rather than risk coming across as abrasive when I disagree with a reviewer’s suggestion, I’d rather be generous with friendliness throughout the reply.

2. How to Make Revisions To Your Paper

Once you’ve drafted your replies to the reviewers, you’ve actually done a lot of the ground work for making changes to the paper. Remember, you are making changes to the paper based off the reviewer comments so you should regularly be referring back to the comments to ensure you’re not getting sidetracked.

Reviewers could request modifications to any part of your paper. You may need to collect more data, do more analysis, reformat some figures, add more references or discussion, or make any number of other revisions! I can’t cover every scenario, but here is some general advice:

  • Use tracked-changes. This is so important. The editor and reviewers need to be able to see every single change you’ve made compared to your first submission. Sometimes the journal will want a clean copy too but always start with tracked-changes enabled then just save a clean copy afterwards.
  • Be thorough. Try not to leave the reviewers any reason to withhold a recommendation to publish. Any chance you have to satisfy their concerns, take it. For example, if the reviewers are concerned about sample size and you have the means to include other experiments, consider doing so. If they want to see more justification or references, be thorough. To be clear again, this doesn’t necessarily mean making changes you don’t believe in. If you don’t want to make a change, you can justify your position to the reviewers. Either way, be thorough.
  • Use your reply to the reviewers as a guide. In your draft reply to the reviewers you should have already included a lot of details which can be incorporated into the text. If they raised a concern, you should be able to go and find references which address the concern. This reference should appear both in your reply and in the manuscript. As mentioned above I always suggest starting with the reply, then simply adding these details to your manuscript once you know what needs doing.

Putting Together Your Paper Revision Submission

  • Once you’ve drafted your reply to the reviewers and revised the manuscript, make sure to give your co-authors sufficient time to provide feedback. Also give yourself time afterwards to make changes based on their feedback. I ideally allow a week for the feedback and another few days to make the changes.
  • When you’re satisfied that you’ve addressed the reviewer comments, you can think about submitting it. The journal may ask for another letter to the editor, if not I simply add to the top of the reply to reviewers something like:
“Dear [Editor], We are grateful to the reviewer for their positive and constructive comments that have led to an improved manuscript.  Here, we address their concerns/suggestions and have tracked changes throughout the revised manuscript.”

Once you’re ready to submit:

  • Double check that you’ve done everything that the editor requested in their email
  • Double check that the file names and formats are as required
  • Triple check you’ve addressed the reviewer comments adequately
  • Click submit and bask in relief!

You won’t always get the paper accepted, but if you’re thorough and present your revisions clearly then you’ll put yourself in a really good position. Remember to try as hard as possible to satisfy the reviewers’ concerns, leaving them as little reason as possible to reject your revisions!

Best of luck!

I really hope that this post has been useful to you and that the example peer review section has given you some ideas for how to respond. I know how daunting it can be to reply to reviewers, and it is really important to try to do a good job and give yourself the best chances of success. If you’d like to read other posts in my academic publishing series you can find them here:

Blog post series: Writing an academic journal paper

Subscribe below to stay up to date with new posts in the academic publishing series and other PhD content.


Write a Critical Review of a Scientific Journal Article

  1. Identify how and why the research was carried out
  2. Establish the research context
  3. Evaluate the research
  4. Establish the significance of the research


1. Identify how and why the research was carried out

Read the article(s) carefully and use the questions below to help you identify how and why the research was carried out. Look at the following sections:

Introduction

  • What was the objective of the study?

Methods

  • What methods were used to accomplish this purpose (e.g., systematic recording of observations, analysis and evaluation of published research, assessment of theory, etc.)?
  • What techniques were used and how was each technique performed?
  • What kind of data can be obtained using each technique?
  • How are such data interpreted?
  • What kind of information is produced by using the technique?

Results

  • What objective evidence was obtained from the authors’ efforts (observations, measurements, etc.)?
  • What were the results of the study?
  • How was each technique used to obtain each result?
  • What statistical tests were used to evaluate the significance of the conclusions based on numeric or graphic data?

Discussion and Conclusion

  • How did each result contribute to answering the question or testing the hypothesis raised in the introduction?
  • How were the results interpreted? How were they related to the original problem (authors’ view of evidence rather than objective findings)?
  • Were the authors able to answer the question (test the hypothesis) raised?
  • Did the research provide new factual information, a new understanding of a phenomenon in the field, or a new research technique?
  • How was the significance of the work described?
  • Do the authors relate the findings of the study to literature in the field?
  • Did the reported observations or interpretations support or refute observations or interpretations made by other researchers?

These questions were adapted from the following sources:  Kuyper, B.J. (1991). Bringing up scientists in the art of critiquing research. Bioscience 41(4), 248-250. Wood, J.M. (2003). Research Lab Guide. MICR*3260 Microbial Adaptation and Development Web Site . Retrieved July 31, 2006.

2. Establish the research context

Once you are familiar with the article, you can establish the research context by asking the following questions:

  • Who conducted the research? What were/are their interests?
  • When and where was the research conducted?
  • Why did the authors do this research?
  • Was this research pertinent only within the authors’ geographic locale, or did it have broader (even global) relevance?
  • Were many other laboratories pursuing related research when the reported work was done? If so, why?
  • For experimental research, what funding sources met the costs of the research?
  • On what prior observations was the research based? What was and was not known at the time?
  • How important was the research question posed by the researchers?

These questions were adapted from the following sources: Kuyper, B.J. (1991). Bringing up scientists in the art of critiquing research. Bioscience 41(4), 248-250. Wood, J.M. (2003). Research Lab Guide. MICR*3260 Microbial Adaptation and Development Web Site . Retrieved July 31, 2006.

3. Evaluate the research

Remember that simply disagreeing with the material is not considered to be a critical assessment of the material. For example, stating that the sample size is insufficient is not a critical assessment. Describing why the sample size is insufficient for the claims being made in the study would be a critical assessment.

Use the questions below to help you evaluate the quality of the authors’ research:

Title

  • Does the title precisely state the subject of the paper?

Abstract

  • Read the statement of purpose in the abstract. Does it match the one in the introduction?

Acknowledgments

  • Could the source of the research funding have influenced the research topic or conclusions?

Introduction

  • Check the sequence of statements in the introduction. Does all the information lead coherently to the purpose of the study?

Methods

  • Review all methods in relation to the objective(s) of the study. Are the methods valid for studying the problem?
  • Check the methods for essential information. Could the study be duplicated from the methods and information given?
  • Check the methods for flaws. Is the sample selection adequate? Is the experimental design sound?
  • Check the sequence of statements in the methods. Does all the information belong there? Is the sequence of methods clear and pertinent?
  • Was there mention of ethics? Which research ethics board approved the study?

Results

  • Carefully examine the data presented in the tables and diagrams. Does the title or legend accurately describe the content?
  • Are column headings and labels accurate?
  • Are the data organized for ready comparison and interpretation? (A table should be self-explanatory, with a title that accurately and concisely describes content and column headings that accurately describe information in the cells.)
  • Review the results as presented in the text while referring to the data in the tables and diagrams. Does the text complement, and not simply repeat, data? Are there discrepancies between the results in the text and those in the tables?
  • Check all calculations and presentation of data.
  • Review the results in light of the stated objectives. Does the study reveal what the researchers intended?

Discussion and Conclusions

  • Does the discussion clearly address the objectives and hypotheses?
  • Check the interpretation against the results. Does the discussion merely repeat the results?
  • Does the interpretation arise logically from the data or is it too far-fetched?
  • Have the faults, flaws, or shortcomings of the research been addressed?
  • Is the interpretation supported by other research cited in the study?
  • Does the study consider key studies in the field?
  • What is the significance of the research? Do the authors mention wider implications of the findings?
  • Is there a section on recommendations for future research? Are there other research possibilities or directions suggested?

Consider the article as a whole

  • Reread the abstract. Does it accurately summarize the article?
  • Check the structure of the article (first headings and then paragraphing). Is all the material organized under the appropriate headings? Are sections divided logically into subsections or paragraphs?
  • Are stylistic concerns, logic, clarity, and economy of expression addressed?

These questions were adapted from the following sources:  Kuyper, B.J. (1991). Bringing up scientists in the art of critiquing research. Bioscience 41(4), 248-250. Wood, J.M. (2003). Research Lab Guide. MICR*3260 Microbial Adaptation and Development Web Site. Retrieved July 31, 2006.

4. Establish the significance of the research

After you have evaluated the research, consider whether the research has been successful. Has it led to new questions being asked, or new ways of using existing knowledge? Are other researchers citing this paper?

You should consider the following questions:

  • How did other researchers view the significance of the research reported by your authors?
  • Did the research reported in your article result in the formulation of new questions or hypotheses (by the authors or by other researchers)?
  • Have other researchers subsequently supported or refuted the observations or interpretations of these authors?
  • Did the research make a significant contribution to human knowledge?
  • Did the research produce any practical applications?
  • What are the social, political, technological, medical implications of this research?
  • How do you evaluate the significance of the research?

To answer these questions, look at review articles to find out how reviewers view this piece of research. Look at research articles and databases like Web of Science to see how other people have used this work. What range of journals have cited this article?

These questions were adapted from the following sources:

Kuyper, B.J. (1991). Bringing up scientists in the art of critiquing research. Bioscience 41(4), 248-250. Wood, J.M. (2003). Research Lab Guide. MICR*3260 Microbial Adaptation and Development Web Site . Retrieved July 31, 2006.


Step by Step Guide to Reviewing a Manuscript

Page Content

  • Overview of the Review Report Format
  • The First Read-Through
  • First Read Considerations
  • Spotting Potential Major Flaws
  • Concluding the First Reading
  • Rejection After the First Reading
  • Before Starting the Second Read-Through
  • Doing the Second Read-Through
  • The Second Read-Through: Section by Section Guidance
  • How to Structure Your Report
  • On Presentation and Style
  • Criticisms & Confidential Comments to Editors
  • The Recommendation
  • When Recommending Rejection
  • Additional Resources

When you receive an invitation to peer review, you should be sent a copy of the paper's abstract to help you decide whether you wish to do the review. Try to respond to invitations promptly - it will prevent delays. It is also important at this stage to declare any potential Conflict of Interest.

Overview of the Review Report Format

The structure of the review report varies between journals. Some follow an informal structure, while others have a more formal approach.

" Number your comments!!! " (Jonathon Halbesleben, former Editor of Journal of Occupational and Organizational Psychology)

Informal Structure

Many journals don't provide criteria for reviews beyond asking for your 'analysis of merits'. In this case, you may wish to familiarize yourself with examples of other reviews done for the journal, which the editor should be able to provide or, as you gain experience, rely on your own evolving style.

Formal Structure

Other journals require a more formal approach. Sometimes they will ask you to address specific questions in your review via a questionnaire. Or they might want you to rate the manuscript on various attributes using a scorecard. Often you can't see these until you log in to submit your review. So when you agree to the work, it's worth checking for any journal-specific guidelines and requirements. If there are formal guidelines, let them direct the structure of your review.

In Both Cases

Whether specifically required by the reporting format or not, you should expect to compile comments to authors and possibly confidential ones to editors only.

Reviewing with Empathy

Following the invitation to review, when you'll have received the article abstract, you should already understand the aims, key data and conclusions of the manuscript. If you don't, make a note now that you need to feedback on how to improve those sections.

The first read-through is a skim-read. It will help you form an initial impression of the paper and get a sense of whether your eventual recommendation will be to accept or reject the paper.

Keep a pen and paper handy when skim-reading.

Try to bear in mind the following questions - they'll help you form your overall impression:

  • What is the main question addressed by the research? Is it relevant and interesting?
  • How original is the topic? What does it add to the subject area compared with other published material?
  • Is the paper well written? Is the text clear and easy to read?
  • Are the conclusions consistent with the evidence and arguments presented? Do they address the main question posed?
  • If the author is disagreeing significantly with the current academic consensus, do they have a substantial case? If not, what would be required to make their case credible?
  • If the paper includes tables or figures, what do they add to the paper? Do they aid understanding or are they superfluous?

Spotting Potential Major Flaws

While you should read the whole paper, making the right choice of what to read first can save time by flagging major problems early on.

Editors say, " Specific recommendations for remedying flaws are VERY welcome ."

Examples of possibly major flaws include:

  • Drawing a conclusion that is contradicted by the author's own statistical or qualitative evidence
  • The use of a discredited method
  • Ignoring a process that is known to have a strong influence on the area under study

If experimental design features prominently in the paper, first check that the methodology is sound - if not, this is likely to be a major flaw.

You might examine:

  • The sampling in analytical papers
  • The sufficient use of control experiments
  • The precision of process data
  • The regularity of sampling in time-dependent studies
  • The validity of questions, the use of a detailed methodology and the data analysis being done systematically (in qualitative research)
  • That qualitative research extends beyond the author's opinions, with sufficient descriptive elements and appropriate quotes from interviews or focus groups

Major Flaws in Information

If methodology is less of an issue, it's often a good idea to look at the data tables, figures or images first. Especially in science research, it's all about the information gathered. If there are critical flaws in this, it's very likely the manuscript will need to be rejected. Such issues include:

  • Insufficient data
  • Unclear data tables
  • Contradictory data that either are not self-consistent or disagree with the conclusions
  • Confirmatory data that adds little, if anything, to current understanding - unless strong arguments for such repetition are made

If you find a major problem, note your reasoning and clear supporting evidence (including citations).

Concluding the First Reading

After the initial read and using your notes, including those of any major flaws you found, draft the first two paragraphs of your review - the first summarizing the research question addressed and the second the contribution of the work. If the journal has a prescribed reporting format, this draft will still help you compose your thoughts.

The First Paragraph

This should state the main question addressed by the research and summarize the goals, approaches, and conclusions of the paper. It should:

  • Help the editor properly contextualize the research and add weight to your judgement
  • Show the author what key messages are conveyed to the reader, so they can be sure they are achieving what they set out to do
  • Focus on successful aspects of the paper so the author gets a sense of what they've done well

The Second Paragraph

This should provide a conceptual overview of the contribution of the research. So consider:

  • Is the paper's premise interesting and important?
  • Are the methods used appropriate?
  • Do the data support the conclusions?

After drafting these two paragraphs, you should be in a position to decide whether this manuscript is seriously flawed and should be rejected (see the next section). Or whether it is publishable in principle and merits a detailed, careful read through.

Rejection After the First Reading

Even if you are coming to the opinion that an article has serious flaws, make sure you read the whole paper. This is very important because you may find some really positive aspects that can be communicated to the author. This could help them with future submissions.

A full read-through will also make sure that any initial concerns are indeed correct and fair. After all, you need the context of the whole paper before deciding to reject. If you still intend to recommend rejection, see the section "When recommending rejection."

Before Starting the Second Read-Through

Once the paper has passed your first read and you've decided the article is publishable in principle, one purpose of the second, detailed read-through is to help prepare the manuscript for publication. You may still decide to recommend rejection following a second reading.

" Offer clear suggestions for how the authors can address the concerns raised. In other words, if you're going to raise a problem, provide a solution ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Preparation

To save time and simplify the review:

  • Don't rely solely upon inserting comments on the manuscript document - make separate notes
  • Try to group similar concerns or praise together
  • If using a review program to note directly onto the manuscript, still try grouping the concerns and praise in separate notes - it helps later
  • Note line numbers of text upon which your notes are based - this helps you find items again and also aids those reading your review

Now that you have completed your preparations, you're ready to spend an hour or so reading carefully through the manuscript.

As you're reading through the manuscript for a second time, you'll need to keep in mind the argument's construction, the clarity of the language and content.

With regard to the argument’s construction, you should identify:

  • Any places where the meaning is unclear or ambiguous
  • Any factual errors
  • Any invalid arguments

You may also wish to consider:

  • Does the title properly reflect the subject of the paper?
  • Does the abstract provide an accessible summary of the paper?
  • Do the keywords accurately reflect the content?
  • Is the paper an appropriate length?
  • Are the key messages short, accurate and clear?

Not every submission is well written. Part of your role is to make sure that the text’s meaning is clear.

Editors say, " If a manuscript has many English language and editing issues, please do not try and fix it. If it is too bad, note that in your review and it should be up to the authors to have the manuscript edited ."

If the article is difficult to understand, you should have rejected it already. However, if the language is poor but you understand the core message, see if you can suggest improvements to fix the problem:

  • Are there certain aspects that could be communicated better, such as parts of the discussion?
  • Should the authors consider resubmitting to the same journal after language improvements?
  • Would you consider looking at the paper again once these issues are dealt with?

On Grammar and Punctuation

Your primary role is judging the research content. Don't spend time polishing grammar or spelling. Editors will make sure that the text is at a high standard before publication. However, if you spot grammatical errors that affect clarity of meaning, then it's important to highlight these. Expect to suggest such amendments - it's rare for a manuscript to pass review with no corrections.

A 2010 study of nursing journals found that 79% of recommendations by reviewers were influenced by grammar and writing style (Shattel, et al., 2010).

The Second Read-Through: Section by Section Guidance

1. The Introduction

A well-written introduction:

  • Sets out the argument
  • Summarizes recent research related to the topic
  • Highlights gaps in current understanding or conflicts in current knowledge
  • Establishes the originality of the research aims by demonstrating the need for investigations in the topic area
  • Gives a clear idea of the target readership, why the research was carried out and the novelty and topicality of the manuscript

Originality and Topicality

Originality and topicality can only be established in the light of recent authoritative research. For example, it's impossible to argue that there is a conflict in current understanding by referencing articles that are 10 years old.

Authors may make the case that a topic hasn't been investigated in several years and that new research is required. This point is only valid if researchers can point to recent developments in data gathering techniques or to research in indirectly related fields that suggest the topic needs revisiting. Clearly, authors can only do this by referencing recent literature. Obviously, where older research is seminal or where aspects of the methodology rely upon it, then it is perfectly appropriate for authors to cite some older papers.

Editors say, "Is the report providing new information; is it novel or just confirmatory of well-known outcomes ?"

It's common for the introduction to end by stating the research aims. By this point you should already have a good impression of them - if the explicit aims come as a surprise, then the introduction needs improvement.

2. Materials and Methods

Academic research should be replicable, repeatable and robust - and follow best practice.

Replicable Research

This makes sufficient use of:

  • Control experiments
  • Repeated analyses
  • Repeated experiments

These are used to make sure observed trends are not due to chance and that the same experiment could be repeated by other researchers - and result in the same outcome. Statistical analyses will not be sound if methods are not replicable. Where research is not replicable, the paper should be recommended for rejection.

Repeatable Methods

These give enough detail so that other researchers are able to carry out the same research. For example, equipment used or sampling methods should all be described in detail so that others could follow the same steps. Where methods are not detailed enough, it's usual to ask for the methods section to be revised.

Robust Research

This has enough data points to make sure the data are reliable. If there are insufficient data, it might be appropriate to recommend revision. You should also consider whether there is any in-built bias not nullified by the control experiments.

Best Practice

During these checks you should keep in mind best practice:

  • Standard guidelines were followed (e.g. the CONSORT Statement for reporting randomized trials)
  • The health and safety of all participants in the study was not compromised
  • Ethical standards were maintained

If the research fails to reach relevant best practice standards, it's usual to recommend rejection. What's more, you don't then need to read any further.

3. Results and Discussion

This section should tell a coherent story - What happened? What was discovered or confirmed?

Certain patterns of good reporting need to be followed by the author:

  • They should start by describing in simple terms what the data show
  • They should make reference to statistical analyses, such as significance or goodness of fit
  • Once described, they should evaluate the trends observed and explain the significance of the results to wider understanding. This can only be done by referencing published research
  • The outcome should be a critical analysis of the data collected

Discussion should always, at some point, gather all the information together into a single whole. Authors should describe and discuss the overall story formed. If there are gaps or inconsistencies in the story, they should address these and suggest ways future research might confirm the findings or take the research forward.

4. Conclusions

This section is usually no more than a few paragraphs and may be presented as part of the results and discussion, or in a separate section. The conclusions should reflect upon the aims - whether they were achieved or not - and, just like the aims, should not be surprising. If the conclusions are not evidence-based, it's appropriate to ask for them to be re-written.

5. Information Gathered: Images, Graphs and Data Tables

If you find yourself looking at a piece of information from which you cannot discern a story, then you should ask for improvements in presentation. This could be an issue with titles, labels, statistical notation or image quality.

Where information is clear, you should check that:

  • The results seem plausible, in case there is an error in data gathering
  • The trends you can see support the paper's discussion and conclusions
  • There are sufficient data. For example, in studies carried out over time are there sufficient data points to support the trends described by the author?

You should also check whether images have been edited or manipulated to emphasize the story they tell. This may be appropriate but only if authors report on how the image has been edited (e.g. by highlighting certain parts of an image). Where you feel that an image has been edited or manipulated without explanation, you should highlight this in a confidential comment to the editor in your report.

6. List of References

You will need to check referencing for accuracy, adequacy and balance.

Where a cited article is central to the author's argument, you should check the accuracy and format of the reference - and bear in mind different subject areas may use citations differently. Otherwise, it's the editor’s role to exhaustively check the reference section for accuracy and format.

You should consider if the referencing is adequate:

  • Are important parts of the argument poorly supported?
  • Are there published studies that show similar or dissimilar trends that should be discussed?
  • If a manuscript only uses half the citations typical in its field, this may be an indicator that referencing should be improved - but don't be guided solely by quantity
  • References should be relevant, recent and readily retrievable

Check for a well-balanced list of references that is:

  • Helpful to the reader
  • Fair to competing authors
  • Not over-reliant on self-citation
  • Gives due recognition to the initial discoveries and related work that led to the work under assessment

You should be able to evaluate whether the article meets the criteria for balanced referencing without looking up every reference.

7. Plagiarism

By now you will have a deep understanding of the paper's content - and you may have some concerns about plagiarism.

Identified Concern

If you find - or already knew of - a very similar paper, this may be because the author overlooked it in their own literature search. Or it may be because it is very recent or published in a journal slightly outside their usual field.

You may feel you can advise the author how to emphasize the novel aspects of their own study, so as to better differentiate it from similar research. If so, you may ask the author to discuss their aims and results, or modify their conclusions, in light of the similar article. Of course, the research similarities may be so great that they render the work unoriginal and you have no choice but to recommend rejection.

"It's very helpful when a reviewer can point out recent similar publications on the same topic by other groups, or that the authors have already published some data elsewhere ." (Editor feedback)

Suspected Concern

If you suspect plagiarism, including self-plagiarism, but cannot recall or locate exactly what is being plagiarized, notify the editor of your suspicion and ask for guidance.

Most editors have access to software that can check for plagiarism.

Editors are not out to police every paper, but when plagiarism is discovered during peer review it can be properly addressed ahead of publication. If plagiarism is discovered only after publication, the consequences are worse for both authors and readers, because a retraction may be necessary.

For detailed guidelines see COPE's Ethical guidelines for reviewers and Wiley's Best Practice Guidelines on Publishing Ethics .

8. Search Engine Optimization (SEO)

After the detailed read-through, you will be in a position to advise whether the title, abstract and key words are optimized for search purposes. In order to be effective, good SEO terms will reflect the aims of the research.

A clear title and abstract will improve the paper's search engine rankings and will influence whether the user finds and then decides to navigate to the main article. The title should contain the relevant SEO terms early on. This has a major effect on the impact of a paper, since it helps it appear in search results. A poor abstract can then lose the reader's interest and undo the benefit of an effective title - whilst the paper's abstract may appear in search results, the potential reader may go no further.

So ask yourself, while the abstract may have seemed adequate during earlier checks, does it:

  • Do justice to the manuscript in this context?
  • Highlight important findings sufficiently?
  • Present the most interesting data?

Editors say, " Does the Abstract highlight the important findings of the study ?"

How to Structure Your Report

If there is a formal report format, remember to follow it. This will often comprise a range of questions followed by comment sections. Try to answer all the questions. They are there because the editor felt that they are important. If you're following an informal report format you could structure your report in three sections: summary, major issues, minor issues.

Summary

  • Give positive feedback first. Authors are more likely to read your review if you do so. But don't overdo it if you will be recommending rejection
  • Briefly summarize what the paper is about and what the findings are
  • Try to put the findings of the paper into the context of the existing literature and current knowledge
  • Indicate the significance of the work and if it is novel or mainly confirmatory
  • Indicate the work's strengths, its quality and completeness
  • State any major flaws or weaknesses and note any special considerations. For example, if previously held theories are being overlooked

Major Issues

  • Are there any major flaws? State what they are and what the severity of their impact is on the paper
  • Has similar work already been published without the authors acknowledging this?
  • Are the authors presenting findings that challenge current thinking? Is the evidence they present strong enough to prove their case? Have they cited all the relevant work that would contradict their thinking and addressed it appropriately?
  • If major revisions are required, try to indicate clearly what they are
  • Are there any major presentational problems? Are figures & tables, language and manuscript structure all clear enough for you to accurately assess the work?
  • Are there any ethical issues? If you are unsure it may be better to disclose these in the confidential comments section

Minor Issues

  • Are there places where meaning is ambiguous? How can this be corrected?
  • Are the correct references cited? If not, which should be cited instead/also? Are citations excessive, limited, or biased?
  • Are there any factual, numerical or unit errors? If so, what are they?
  • Are all tables and figures appropriate, sufficient, and correctly labelled? If not, say which are not

On Presentation and Style

Your review should ultimately help the author improve their article. So be polite, honest and clear. You should also try to be objective and constructive, not subjective and destructive.

You should also:

  • Write clearly and so you can be understood by people whose first language is not English
  • Avoid complex or unusual words, especially ones that would even confuse native speakers
  • Number your points and refer to page and line numbers in the manuscript when making specific comments
  • If you have been asked to only comment on specific parts or aspects of the manuscript, you should indicate clearly which these are
  • Treat the author's work the way you would like your own to be treated

Criticisms & Confidential Comments to Editors

Most journals give reviewers the option to provide some confidential comments to editors. Often this is where editors will want reviewers to state their recommendation - see the next section - but otherwise this area is best reserved for communicating malpractice such as suspected plagiarism, fraud, unattributed work, unethical procedures, duplicate publication, bias or other conflicts of interest.

However, this doesn't give reviewers permission to 'backstab' the author. Authors can't see this feedback and are unable to give their side of the story unless the editor asks them to. So in the spirit of fairness, write comments to editors as though authors might read them too.

The Recommendation

Reviewers should check the preferences of individual journals as to where they want review decisions to be stated. In particular, bear in mind that some journals will not want the recommendation included in any comments to authors, as this can cause editors difficulty later - see Section 11 for more advice about working with editors.

You will normally be asked to indicate your recommendation (e.g. accept, reject, revise and resubmit, etc.) from a fixed-choice list and then to enter your comments into a separate text box.

Recommending Acceptance

If you're recommending acceptance, give details outlining why, and if there are any areas that could be improved. Don't just give a short, cursory remark such as 'great, accept'. See Improving the Manuscript

Recommending Revision

Where improvements are needed, a recommendation for major or minor revision is typical. You may also choose to state whether you opt in or out of the post-revision review too. If recommending revision, state specific changes you feel need to be made. The author can then reply to each point in turn.

Some journals offer the option to recommend rejection with the possibility of resubmission – this is most relevant where substantial, major revision is necessary.

What can reviewers do to help? "Be clear in their comments to the author (or editor) which points are absolutely critical if the paper is given an opportunity for revision." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Recommending Rejection

If recommending rejection or major revision, state this clearly in your review (and see the next section, 'When recommending rejection').

Where manuscripts have serious flaws you should not spend any time polishing the review you've drafted or give detailed advice on presentation.

Editors say, " If a reviewer suggests a rejection, but her/his comments are not detailed or helpful, it does not help the editor in making a decision ."

When Recommending Rejection

In your recommendations for the author, you should:

  • Give constructive feedback describing ways that they could improve the research
  • Keep the focus on the research and not the author. This is an extremely important part of your job as a reviewer
  • Avoid saving critical comments for the confidential section to the editor while being only polite and encouraging to the author - otherwise the author may not understand why their manuscript has been rejected. They also won't get feedback on how to improve their research, and it could trigger an appeal

Remember to give constructive criticism even if recommending rejection. This helps developing researchers improve their work and explains to the editor why you felt the manuscript should not be published.

" When the comments seem really positive, but the recommendation is rejection…it puts the editor in a tough position of having to reject a paper when the comments make it sound like a great paper ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Additional Resources

Visit our Wiley Author Learning and Training Channel for expert advice on peer review.

Watch the video, Ethical considerations of Peer Review

Enago Academy

How to Write a Scientific Review Article


In the biosciences, review articles written by researchers are valuable tools for those looking for a synopsis of several research studies in one place without having to spend time finding the research and results themselves. A well-presented review paper provides the reader with unbiased information on studies within the discipline and presents why the results of some research studies are or are not valid. In addition, institutions that fund research tend to use review articles to help them decide whether further research is necessary; however, their value is only as good as the objectives achieved and how the results are communicated.

The objective of a review should be “to achieve an organization and synthesis of past work around the chosen theme in order to accelerate the accumulation and assimilation of recent knowledge into the existing body of knowledge.” Importantly, a review should present results clearly and accurately—good writing is essential and must follow a strict set of rules.

In 1996, Quality of Reporting of Meta-analyses (QUOROM), which focused on meta-analyses of randomized controlled studies, was created during a conference involving a group of scientists, clinicians, and statisticians. The QUOROM statement, checklist, and flow diagram were introduced to researchers to help them better organize their reviews and ensure that specific criteria were followed. QUOROM was later updated and renamed Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) with the same values and criteria.

Types of Review Articles

A review article is not an original study. It examines previous studies and compiles their data and evidence.

Based on their structure and formulation, literature reviews are broadly classified as follows:

  • Narrative or Traditional Literature Reviews – The classic literature review, which summarizes the collated literature relevant to the thesis body.
  • Scoping Reviews – Scoping reviews involve systematically searching all the material on the topic in a way that allows the searches to be replicated. This enables the researcher to fill in any gaps that appear in the results.
  • Systematic Literature Reviews – A methodical approach to collating and synthesizing all relevant data about a predefined research question.
  • Cochrane Reviews – Internationally recognized systematic reviews for human health care and policy.

Although narrative reviews can be useful, they are not in depth and do not necessarily analyze data or study-group sizes for determining whether results are valid. Systematic reviews , on the other hand, are more detailed and involve a more comprehensive literature search—they are the “gold standard” of review articles. A meta-analysis is a quantitative systematic review. It combines data from several studies to reach a conclusion that is statistically stronger than any in the single studies, mainly because of having more study subjects and more diversity among subjects.
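To make the “statistically stronger” point concrete, the sketch below shows fixed-effect, inverse-variance pooling, the basic calculation behind many meta-analyses. It is a minimal illustration only: the effect sizes and standard errors are invented, and real meta-analyses are normally run with dedicated software and require a careful assessment of heterogeneity before pooling.

```python
# Minimal illustration of fixed-effect (inverse-variance) pooling.
# The effect sizes and standard errors below are invented for illustration only.
import math

studies = [
    # (effect size, standard error) from three hypothetical studies
    (0.30, 0.15),
    (0.45, 0.20),
    (0.25, 0.10),
]

weights = [1 / se ** 2 for _, se in studies]  # more precise studies get more weight
pooled_effect = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))       # smaller than any single study's standard error

print(f"Pooled effect: {pooled_effect:.3f} (SE {pooled_se:.3f})")
for (es, se), w in zip(studies, weights):
    print(f"  study effect {es:.2f}, SE {se:.2f}, weight {w:.1f}")
```

Because the pooled standard error shrinks as studies are added, the combined estimate is more precise than any individual study, which is exactly why meta-analyses can reach statistically stronger conclusions. A random-effects model would additionally account for between-study heterogeneity.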

A good review usually concentrates on a theme, such as different theories, information on the progress of developing a new medical device, or how past developments influence new discoveries. A review might also ask that more resources be used to continue research in that specific field.

There are advantages and disadvantages to writing a review. In addition to having more available data, other advantages include confirmatory data analysis and the fact that reviews are considered to be an evidence-based resource. Among the disadvantages, reviews are more time consuming and not all studies will provide the requisite amount of data. In addition, statistical functions and interpretations are more complex, and authors must account for heterogeneity across the study populations, both within each included study and when the studies are combined.

Literature Searches

Searching for previous reviews on the chosen theme, for example using Google Scholar, can provide information on any new findings, and the following points should be considered when conducting searches:

  • The author and any possible conflicting interests
  • The purpose of the article
  • The author’s hypothesis and whether it is supported
  • How the literature will contribute to your topic
  • Whether opinions expressed by the author(s) are correct

Once the inclusion and exclusion criteria have been identified based on these points, authors are ready to prepare their paper. Popular-media sources such as Popular Science and WebMD.com should be avoided, as they are not acceptable sources for review articles. Authors must ensure that the sources are legitimate research studies and that they are similar in nature (e.g., all randomized controlled trials).

Manuscript Preparation

Maximum length can vary depending on the author guidelines from the journal to which you are submitting, so authors must always check those guidelines before they begin. As a general rule, most journals ask that a specific font and size be used (e.g., Times New Roman, 12 point), that 1.0-inch margins be used on all four sides, and 1.5 line spacing be used.
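If it helps to fix these defaults before writing, here is a minimal, hypothetical sketch using the python-docx package; the font, margin, and line-spacing values simply mirror the general rules above, and the target journal’s own guidelines always take precedence.

```python
# Pre-set common journal formatting defaults in a new Word document.
# Values mirror the general rules above; always check the target journal's guidelines.
from docx import Document
from docx.shared import Pt, Inches

doc = Document()

normal = doc.styles["Normal"]
normal.font.name = "Times New Roman"          # commonly requested font
normal.font.size = Pt(12)                     # 12-point text
normal.paragraph_format.line_spacing = 1.5    # 1.5 line spacing

for section in doc.sections:                  # 1.0-inch margins on all four sides
    section.top_margin = Inches(1.0)
    section.bottom_margin = Inches(1.0)
    section.left_margin = Inches(1.0)
    section.right_margin = Inches(1.0)

doc.add_heading("Working Title of the Review", level=1)
doc.add_paragraph("Abstract: ...")
doc.save("review_draft.docx")
```

Most authors will simply set the same values in their word processor; the point is only that the formatting requirements are settled before the writing begins.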

The article structure should contain very specific sections, which might vary slightly according to different science disciplines. In scientific writing, the IMRAD structure (introduction, methods, results, and discussion)  is a standard format adopted by a majority of academic journals. Although specific author guidelines might vary, in most cases, the review paper should contain the following sections:

Title Page

  • Main title (possibly with a short running title)
  • The Zurich-Basel Plant Science Center suggests titles that are 8 to 12 words in length
  • The title must contain key elements of the subject matter
  • Author names and affiliations should be included
  • Corresponding author details should be mentioned

Abstract

  • Main points, or a synthesis, of the project should be outlined
  • Subheadings should be included if required (e.g., objective, methods, results, and conclusions)
  • The length of the abstract should be between 200 and 250 words
  • No citations should be included within the abstract
  • Acronyms and abbreviations should be included only if used more than once

Introduction

  • Background information on the topic should be discussed
  • Introduction must address the objective (research question)
  • Text should be written in present tense

Materials and Methods

  • Should be written in past tense
  • Should provide information necessary to repeat the review
  • Search strategies, inclusion and exclusion criteria, data sources and geographical information, characteristics of study subjects, and statistical analyses used should be included

Results

  • Authors must include all the results
  • Their relevance to the objective should be mentioned
  • Results must include heterogeneity of the study groups or samples
  • Statistical significance should be mentioned

Discussion

  • Background information and objective can be reiterated
  • Results and their relevance should be clearly and concisely discussed

Conclusions

  • This section should revisit the objective discussed in the introduction
  • It should discuss the implications of the findings and interpretations, and identify unresolved questions

Study Limitations

  • An assessment of whether the studies were adequate to reach a conclusion that can be applied to a much larger group, stating reasons
  • Suggestions for future studies should be provided

Acknowledgements

  • Authors may thank the people or institutions who have supported the work

References

  • Only those references cited in the text should be listed
  • 50 to 100 references are allowed
  • Internet sources are usually not allowed


How to Review a Scientific Paper in 10 Easy Steps

Summary of how to perform an invited review of a paper for publication.

Blog written by Jaime Fernández Sobaberas, 3rd-year PhD student in Biochemistry at Heidelberg University, Germany.

Reviewing scientific papers is an essential part of academic research and the publication process. It allows experts to assess the quality, validity, and significance of research findings before they are disseminated to the broader scientific community. Writing a comprehensive and constructive review contributes to the overall improvement of scientific knowledge. In this blog, we will discuss the key steps and considerations involved in reviewing a scientific paper.

Graphic summary of how to do an invited review of a scientific paper

1. Understand the Purpose of the Review:

Before you begin the review process, it’s important to understand the purpose of the review. Ask yourself why you have been asked to review the paper and what specific aspects you should focus on. Keep in mind that the goal is to provide a fair, unbiased, and constructive critique that helps the authors improve their work.

2. Familiarize Yourself with the Paper:

Start by reading the paper thoroughly and gaining a clear understanding of its content. Take note of the research question, methodology, data analysis, results, and conclusions. Identify any areas where you have expertise or concerns.

3. Evaluate the Paper’s Structure and Clarity:

Assess the paper’s overall structure, organization, and clarity of writing. Consider whether the abstract provides a concise summary of the study and whether the introduction effectively establishes the research context. Evaluate the logical flow of ideas, the use of headings and subheadings, and the clarity of the language. Note any sections that could benefit from additional clarification or restructuring.

4. Evaluate the Research Methods:

Assess the appropriateness and rigor of the research methods employed. Evaluate the study design, sample size, data collection techniques, and statistical analyses. Check whether the methods are adequately described, allowing for replication by other researchers. Identify any potential flaws or limitations in the methodology that could affect the validity of the results. Ensure that unique identifiers have been used for all reagents; for example, catalog numbers or RRIDs for all antibodies used.

5. Evaluate the Results and Analysis:

Examine the results and analysis presented in the paper. Assess whether the data support the research question and whether the statistical analysis is appropriate. Look for any inconsistencies or gaps in the data, or areas where the data may have been misrepresented. Consider the significance and implications of the results and whether they are supported by the evidence presented. Read all the supporting/supplemental resources (if available) to confirm that they show enough evidence to support the main findings.

6. Assess the Discussion and Conclusions:

Evaluate the interpretation of the results in the discussion section. Consider whether the authors have provided a balanced and objective analysis of the findings. Assess the extent to which the conclusions align with the research question and the overall study objectives. Note any alternative interpretations or potential avenues for future research.

7. Consider Ethical Considerations:

While reviewing a scientific paper, it’s important to be mindful of ethical considerations. Evaluate whether the study adheres to ethical guidelines and standards, such as obtaining informed consent, maintaining participant confidentiality, and minimizing potential harm. Assess whether the study design and methods align with ethical principles, particularly when human or animal subjects are involved. If you identify any ethical concerns, highlight them in your review and suggest potential remedies or improvements.

8. Verify References and Citations:

Ensure that the references and citations provided in the paper are accurate, relevant, and up-to-date. Verify that all sources mentioned in the text are included in the reference list and vice versa. Check the quality and credibility of the references, assessing whether they are from reputable sources and contribute to the overall strength of the paper. If you notice any missing or inaccurate references, point them out in your review and suggest appropriate replacements if necessary.
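Part of this cross-check lends itself to simple automation. The sketch below is a hypothetical Python helper, not part of the original post: it compares bracketed numeric citations such as [3] found in the manuscript text against the numbered entries of the reference list and reports mismatches in both directions. Real manuscripts use many citation styles, so treat it only as a starting point for a manual check.

    import re

    def check_numeric_citations(manuscript_text, reference_list):
        """Compare bracketed numeric citations in the text with numbered references.

        Hypothetical helper for illustration only; it assumes a numeric citation
        style such as [3] or [3, 5] and a reference list whose entries start with
        '3.' at the beginning of a line.
        """
        cited = set()
        for group in re.findall(r"\[([\d,\s]+)\]", manuscript_text):
            cited.update(int(n) for n in re.findall(r"\d+", group))

        listed = {int(n) for n in re.findall(r"^\s*(\d+)\.", reference_list, flags=re.MULTILINE)}

        return {
            "cited_but_not_listed": sorted(cited - listed),
            "listed_but_not_cited": sorted(listed - cited),
        }

    # Toy example
    text = "Prior work [1, 3] disagrees with later findings [4]."
    refs = "1. Smith 2019.\n2. Jones 2020.\n3. Lee 2021.\n4. Park 2022."
    print(check_numeric_citations(text, refs))
    # {'cited_but_not_listed': [], 'listed_but_not_cited': [2]}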

9. Provide Constructive Feedback:

When writing your review, be constructive and respectful in your feedback. Clearly outline both the strengths and weaknesses of the paper, offering specific suggestions for improvement. Be specific and provide references to relevant literature to support your comments. Avoid making personal attacks or using derogatory language.

10. Summarize Your Review:

Conclude your review with a concise summary of your main points. Highlight the paper’s strengths, such as novel contributions or a well-executed methodology. Discuss the key weaknesses and areas that need improvement. Finally, provide an overall recommendation regarding the acceptance, revision, or rejection of the paper.

Reviewing a scientific paper is a critical process that contributes to the quality and integrity of scientific research. By following the steps outlined in this guide, you can provide valuable feedback to authors, help improve the quality of the research, and contribute to the advancement of scientific knowledge. Remember to approach the review process with objectivity, fairness, and a commitment to fostering scientific excellence.


Reviewer comments: examples for common peer review decisions


Peer-reviewing an academic manuscript is not an easy task, especially if you are unsure about how to formulate your feedback. Examples of reviewer comments can help! Here you can find an overview of sample comments and examples for the most common review decisions: ‘minor revisions’, ‘major revisions’, ‘revise and resubmit’ and ‘reject’ decisions.

Examples of ‘minor revisions’ reviewer comments


  • “This is a well-written manuscript that only needs to undergo a few minor changes. First, …”
  • “The manuscript is based on impressive empirical evidence and makes an original contribution. Only minor revisions are needed before it can be published.”
  • “I thoroughly enjoyed reviewing this manuscript and only have some minor requests for revision.”
  • “The authors develop a unique theoretical framework, and I believe that they should highlight their originality much more.”
  • “The authors conduct very relevant research, but fail to emphasise the relevance in their introduction.”
  • “The authors draw on extensive empirical evidence. I believe that they can put forward their arguments much more confidently.”
  • “The authors adequately addressed my feedback from the first round of peer review. I only have some minor comments for final improvements.”
  • “To improve the readability of the paper, I suggest dividing the analysis into several subsections.”
  • “Figure 3 is difficult to read and should be adjusted.”
  • “Table 1 and 2 can be combined to create a better overview.”
  • “The abstract is too long and should be shortened.”
  • “I had difficulties understanding the first paragraph on page 5, and suggest that the authors reformulate and simplify it.”
  • “The manuscript contains an elaborate literature review, but definitions of the key concepts are needed in the introduction.”
  • “Throughout the manuscript, there are several language mistakes. Therefore, I recommend a professional round of language editing before the paper is published.”
  • “The paper should undergo professional language editing before it can be published.”

If you want to learn more about common reasons for a ‘minor revisions’ decision and see examples of how an actual peer review might look like, check out this post on ‘minor revisions’ .

Examples of ‘major revisions’ reviewer comments

  • “The manuscript shows a lot of promise, but some major issues need to be addressed before it can be published.”
  • “This manuscript addresses a timely topic and makes a relevant contribution to the field. However, some major revisions are needed before it can be published.”
  • “I enjoyed reading this manuscript, and believe that it is very promising. At the same time, I identified several issues that require the authors’ attention.”
  • “The manuscript sheds light on an interesting phenomenon. However, it also has several shortcomings. I strongly encourage the authors to address the following points.”
  • “The authors of this manuscript have an ambitious objective and draw on an interesting dataset. However, their main argument is unclear.”
  • “The key argument needs to be worked out and formulated much more clearly.”
  • “The theoretical framework is promising but incomplete. In my opinion, the authors cannot make their current claims without considering writings on… “
  • “The literature review is promising, but disregards recent publications in the field of…”
  • “The empirical evidence is at times insufficient to support the authors’ claims. For instance, in section…”
  • “I encourage the authors to provide more in-depth evidence. For instance, I would like to see more interview quotes and a more transparent statistical analysis.”
  • “The authors work with an interesting dataset. However, I was missing more detailed insights in the actual results. I believe that several additional tables and figures can improve the authors’ argumentation. “
  • “I believe that the manuscript addresses a relevant topic and includes a timely discussion. However, I struggled to understand section 3.1.”
  • “I think that the manuscript can be improved by removing section 4 and integrating it into section 5.”
  • “The discussion and conclusions are difficult to follow and need to be rewritten to highlight the key contributions of this manuscript.”
  • “The line of argumentation should be improved by dividing the manuscript into clear sections with subheadings.”

If you want to learn more about common reasons for a ‘major revisions’ decision and see examples of how an actual peer review might look like, check out this post on ‘major revisions’ .

Examples of ‘revise and resubmit’ reviewer comments

  • “I encourage the authors to revise their manuscript and to resubmit it to the journal.”
  • “In its current form, this paper cannot be considered for publication. However, I see value in the research approach and encourage the authors to revise and resubmit their manuscript.”
  • “ With the right changes, I believe that this manuscript can make a valuable contribution to the field of …”
  • “The paper addresses a valuable topic and raises interesting questions. However, the logic of the argument is difficult to follow. “
  • “The manuscript tries to achieve too many things at the same time. The authors need to narrow down their research focus.”
  • “The authors raise many interesting points, which makes it difficult for the reader to follow their main argument. I recommend that the authors determine what their main argument is, and structure their manuscript accordingly.”
  • “The literature review raises interesting theoretical debates. However, in its current form, it does not provide a good framework for the empirical analysis.”
  • “A clearer theoretical stance will increase the quality of the paper.”
  • “The manuscript draws on impressive data, as described in the methodology. However, the wealth of data does not come across in the analysis. My recommendation is to increase the number of interview quotes, figures and statistics in the empirical analysis.”
  • “The authors draw several conclusions which are hard to connect to their empirical findings. “
  • “The authors are advised to critically reflect on the generalizability of their research findings.”
  • “The manuscript needs to better emphasise the research relevance and its practical implications.”
  • “It is unclear what the authors consider their main contribution to the academic literature, and what they envisage in terms of recommendations for further research.”

If you want to learn more about common reasons for a ‘revise and resubmit’ decision and see examples of how an actual peer review might look like, check out this post on ‘revise and resubmit’ .

Examples of ‘reject’ reviewer comments

  • “I do not believe that this journal is a good fit for this paper.”
  • “While the paper addresses an interesting issue, it is not publishable in its current form.”
  • “In its current state, I do not recommend accepting this paper.”
  • “Unfortunately, the literature review is inadequate. It lacks..”
  • “The paper lacks a convincing theoretical framework ,  which is necessary to be considered for publication.”
  • “Unfortunately, the empirical data does not meet disciplinary standards.”
  • “While I applaud the authors’ efforts, the paper does not provide sufficient empirical evidence.”
  • “The empirical material is too underdeveloped to consider this paper for publication.”
  • “The paper has too many structural issues, which makes it hard to follow the argument.”
  • “There is a strong mismatch between the literature review and the empirical analysis.”
  • “The main contribution of this paper is unclear.”
  • “It is unclear what the paper contributes to the existing academic literature.”
  • “The originality of this paper needs to be worked out before it can be considered for publication.”
  • “Unfortunately, the language and sentence structures of this manuscript are at times incomprehensible. The paper needs rewriting and thorough language editing to allow for a proper peer review.”

If you want to learn more about common reasons for a ‘reject’ decision and see examples of how an actual peer review might look like, check out this post on ‘reject’ decisions .


How to review a scientific paper

Affiliation.

  • 1 University of Florida, USA. Electronic address: [email protected].
  • PMID: 25248566
  • DOI: 10.1016/j.ajp.2014.08.007

Scientific observations must survive the scrutiny of experts before they are disseminated to the broader community because their publication in a scientific journal provides a stamp of validity. Although critical review of a manuscript by peers prior to publication in a scientific journal is a central element in this process, virtually no formal guidance is provided to reviewers about the nature of the task. In this article, the essence of peer review is described and critical steps in the process are summarized. The role of the peer reviewer as an intermediary and arbiter in the process of scientific communication between the authors and the readers via the vehicle of the particular journal is discussed and the responsibilities of the reviewer to each of the three parties (the author/s, readers, and the Journal editor) are defined. The two formal products of this activity are separate sets of reviewer comments to the editor and the authors and these are described. Ethical aspects of the process are considered and rewards accruing to the reviewer summarized.

Copyright © 2014 Elsevier B.V. All rights reserved.

Publication types

  • Peer Review / methods*
  • Periodicals as Topic*
  • Research Design


How to Write a Scientific Paper: Practical Guidelines

Edgard Delvin

1 Centre de recherche, CHU Sainte-Justine

2 Département de Biochimie, Université de Montréal, Montréal, Canada

Tahir S. Pillay

3 Department of Chemical Pathology, Faculty of Health Sciences, University of Pretoria

4 Division of Chemical Pathology, University of Cape Town

5 National Health Laboratory Service, Tshwane Academic Division, Pretoria, South Africa

Anthony Newman

6 Life Sciences Department, Elsevier, Amsterdam, The Netherlands

Precise, accurate and clear writing is essential for communicating in health sciences, as publication is an important component in the university criteria for academic promotion and in obtaining funding to support research. In spite of this, the development of writing skills is a subject infrequently included in the curricula of faculties of medicine and allied health sciences. Therefore clinical investigators require tools to fill this gap. The present paper presents a brief historical background to medical publication and practical guidelines for writing scientific papers for acceptance in good journals.

INTRODUCTION

A scientific paper is the formal, lasting record of a research process. It is meant to document research protocols, methods, results and conclusions derived from an initial working hypothesis. The first medical accounts date back to antiquity. Imhotep, who lived during the 3rd Dynasty, could be considered the founder of ancient Egyptian medicine, as he has been credited with being the original author of what is now known as the Edwin Smith Papyrus ( Figure 1 ). The Papyrus, by giving some details on cures and anatomical observations, sets the basis of the examination, diagnosis, treatment, and prognosis of numerous diseases. Closer to the Common Era, in 460 BCE, Hippocrates wrote 70 books on medicine. In 1020, during the Golden Age of Muslim culture, Ibn Sina, known as Avicenna ( Figure 2a ), recorded the Canon of Medicine, which was to become the most used medical text in Europe and the Middle East for almost half a millennium. This was followed at the beginning of the 12th century by the extensive treatise of Maimonides (Moses ben Maimon) ( Figure 2b ) on Greek and Middle Eastern medicine. Of interest, by the end of the 11th century Trotula di Ruggiero, a woman physician, wrote several influential books on women's ailments. A number of other hallmark treatises also became more accessible thanks to the introduction of the printing press, which allowed standardization of the texts. One example is De Humani Corporis Fabrica by Vesalius, which contains hundreds of illustrations of human dissection. Thomas A Lang provides an excellent concise history of scientific publications [ 1 ]. Those were the days when writing and publishing scientific or philosophical works were the privilege of the few, and hence there was little or no competition and no recorded peer-reviewing system. Times have changed, however, and contemporary scientists have to contend with increasingly harsh competition for editors' and publishers' attention. As an example, the number of reports and reviews on obesity and diabetes has increased from 400 to close to 4000/year and from 50 to 600/year, respectively, over a period of 20 years ( Figure 3 ). The present article, essentially based on TA Lang's guide for writing a scientific paper [ 1 ], will summarize the steps involved in the process of writing a scientific report and in increasing the likelihood of its acceptance.

Figure 1. The Edwin Smith Papyrus (≈3000 BCE). This manuscript, written in 1600 BCE, is regarded as a copy of several earlier works (≈3000 BCE). It is part of a textbook on surgery covering the examination, diagnosis, treatment, and prognosis of numerous ailments. BCE: Before the Common Era.

Figure 2. Avicenna and Maimonides. Figure 2a: Avicenna, 973-1037 CE. Figure 2b: Maimonides, 1135-1204 CE.

Figure 3. Annual publication load in the field of obesity and diabetes over 20 years. Orange columns: original research papers; green columns: reviews.

Reasons for publishing are varied. One may write to achieve a post-graduate degree, to obtain funding for pursuing research or for academic promotion. While all 3 reasons are perfectly legitimate, one must ask whether they are sufficient to be considered by editors, publishers and reviewers. Why then should the scientist write? The main reason is to provide to the scientific community data based on hypotheses that are innovative and thus to advance the understanding in a specific domain. One word of caution however, is that if a set of experiments has not been done or reported, it does not mean that it should be. It may simply reflect a lack of interest in it.

DECIDING ON PUBLISHING AND TARGETING THE JOURNAL

In order to assist with the decision process, present your work orally first to colleagues in your field who may be more experienced in publishing. This step will help you gauge whether your work is publishable and will help in shaping the paper.

Targeting the journal in which you want to present your data is also a critical step and should be done before starting to write. One hint is to look for journals that have published work similar to yours and that reach the readers most likely to be interested in your research. This will allow your article to be well read and cited. These journals are also those that you are most likely to read on a regular basis and to cite abundantly. The next step is to decide whether you submit your manuscript to a top-ranking impact factor journal or to a journal of lower prestige. Although it is tempting to test the waters, or to obtain reviewers' comments, be realistic about the contribution your work provides and submit to a journal of an appropriate rank.

Do not forget that each rejection delays publication and that the pool of reviewers within your specialty is small. Thus repeated submission to different journals could well result in having your work submitted for review to the same reviewer.

DECIDING ON THE TYPE OF MANUSCRIPT

There are several types of scientific reports: observational, experimental, methodological, theoretical and review. Observational studies include 1) single-case reports, 2) collective case reports on a series of patients having, for example, common signs and symptoms or being followed up with similar protocols, 3) cross-sectional studies, 4) cohort studies, and 5) case-control studies. The latter 3 could be perceived as epidemiological studies as they may help establish the prevalence of a condition and identify a defined population with and without a particular condition (disease, injury, surgical complication). Experimental reports deal with research that tests a research hypothesis through an established protocol and, in the case of health sciences, formulate plausible explanations for changes in biological systems. Methodological reports address, for example, advances in analytical technology, statistical methods and diagnostic approaches. Theoretical reports suggest new working hypotheses and principles that have to be supported or disproved through experimental protocols. The review category can be sub-classified as narrative, systematic and meta-analytic. Narrative reviews are often broad overviews that could be biased as they are based on the personal experience of an expert relying on articles of his or her own choice. Systematic reviews and meta-analyses are based on reproducible procedures and on high-quality data. Researchers systematically identify and analyze all data collected in articles that test the same working hypothesis, avoiding selection bias, and report the data in a systematic fashion. They are particularly helpful in asking important questions in the field of healthcare and are often the initial step for innovative research. Rules or guidelines for writing such reports must be followed if a quality systematic review is to be published.

For clinical research trials and systematic reviews or meta-analyses, use the Consort Statement (Consolidated Standards Of Reporting Trials) and the PRISMA Statement (Preferred Reporting Items for Systematic reviews and Meta-Analyses) respectively [ 2 , 3 ]. This assures the editors and the reviewers that essential elements of the trials and of the reviews were tackled. It also speeds the peer review process. There are several other Statements that apply to epidemiological studies [ 4 ], non-randomized clinical trials [ 5 ], diagnostic test development ( 6 ) and genetic association studies ( 7 ). The Consortium of Laboratory Medicine Journal Editors has also published guidelines for reporting industry-sponsored laboratory research ( 8 ).

INITIAL STEPS IN THE PROCESS OF WRITING A SCIENTIFIC DOCUMENT

Literature review is the initial and essential step before starting your study and writing the scientific report based on it. In this process, use multiple databases and multiple keyword combinations. Doing so will allow you to track the latest developments in your field and avoid discovering only later that someone else has already performed the study, which would decrease the originality of your own work. Do not forget that high-ranking research journals publish results of sufficient importance and interest to merit publication.

Determining the authorship and the order of authorship, an ethical issue, is the second essential step, and is unfortunately often neglected. This step may avoid later conflicts as, despite existing guidelines, it remains a sensitive issue owing to personal biases and the internal politics of institutions. The International Committee of Medical Editors has adopted the following guidelines for the biomedical sciences ( 9 ).

“Authorship credit should be based only on: 1) Substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; 2) Drafting the article or revising it critically for important intellectual content; and 3) Final approval of the version to be published. Conditions 1, 2 and 3 must be all met. Acquisition of funding, the collections of data, or general supervision of the research group, by themselves, do not justify authorship.” ( 9 , 10 )

The order of authorship should reflect the individual contribution to the research and to the publication, from most to least ( 11 ). The first author usually takes the lead for the project reported. However, the last author is often mistakenly perceived as the senior author. This is perpetuated from the European tradition and is discouraged. As there are divergent conventions among journals, the order of authorship may or may not reflect the individual contributions, with the exception that the first author should be the one most responsible for the work.

WRITING EFFECTIVELY

Effective writing requires that the text helps the readers 1) understand the content and the context, 2) remember the salient points, 3) find the information rapidly and, 4) use or apply the information given. These cardinal qualities should be accompanied by precise use of language, clarity of the text, inclusiveness of the information, and conciseness. Effective writing also means that you have to focus on the potential readers' needs. Readers in science are informed individuals who are not passive, and who will formulate their own opinion of your writing whether or not the meaning is clear. Therefore you need to know who your audience is. The following 4 questions should help you write a reader-based text, that is, one written to meet the information needs of readers [ 12 ].

What do you assume your readers already know? In other words, which terms and concepts can you use without explanation, and which do you have to define?

What do they want to know? Readers in science will read only if they think they will learn something of value.

What do they need to know? Your text must contain all the information necessary for the reader to understand it, even if you think this information is obvious to them.

What do they think they know that is not so? Correcting misconceptions can be an important function of communication, and persuading readers to change their minds can be a challenging task.

WRITING THE SCIENTIFIC PAPER

Babbs and Tacker's advice to write as much of the paper as possible before performing the research project or experimental protocol may, at first sight, seem unexpected and counterintuitive [ 13 ], but in fact it is exactly what is being done when writing a research grant application. It will also allow you to define the authorship alluded to earlier. The following section will briefly review the structure of the different sections of a manuscript and describe their purpose.

Reading the instructions to authors of the journal to which you have decided to submit your manuscript is the first important step. They provide you with the specific requirements, such as the way of listing the authors, the type of abstract, word, figure or table limits, and the citation style. The Mulford Library of the University of Toledo website contains instructions to authors for over 3000 journals ( http://mulford.meduoiho.edu/instr/ ).

The general organization of an article follows the IMRAD format (Introduction, Methods, Results, and Discussion). This may however vary. For instance, in clinical research or epidemiology studies, the methods section will include details on the subjects included, and there will be a statement of the limitations of the study. Although conclusions may not always be part of the structure, we believe that they should be, even in methodological reports.

The title page provides essential information so that the editor, reviewers, and readers can identify the manuscript and the authors at a glance, as well as enabling them to classify the field to which the article pertains.

The title page must contain the following:

  • The title of the article – it is an important part of the manuscript as it is the most often read and will induce interested readers to read further. Therefore the title should be precise, accurate, specific and truthful;
  • Each author’s given name (it may be the full name or initials) and family name;
  • Each author’s affiliation;
  • Some journals ask for highest academic degree;
  • A running title that is usually limited to a number of characters. It must relate to the full title;
  • Key words that will serve for indexing;
  • For clinical studies, the trial’s registration number;
  • The name of the corresponding author with full contact information.

The abstract is also an important section of your manuscript. Importantly, the abstract is the part of the article that your peers will see when consulting publication databases such as PubMed. It is the advertisement to your work and will strongly influence the editor deciding whether it will be submitted to reviewers or not. It will also help the readers decide to read the full article. Hence it has to be comprehensible on its own. Writing an abstract is challenging. You have to carefully select the content and, while being concise, assure to deliver the essence of your manuscript.

Without going into details, there are 3 types of abstracts: descriptive, informative and structured. The descriptive abstract is particularly used for theoretical, methodological or review articles. It usually consists of a single paragraph of 150 words or less. The informative abstract, the most common one, contains specific information given in the article and, are organized with an introduction (background, objectives), methods, results and discussion with or without conclusion. They usually are 150 to 250 words in length. The structured abstract is in essence an informative abstract with sections labeled with headings. They may also be longer and are limited to 250 to 300 words. Recent technology also allows for graphical or even video abstracts. The latter are interesting in the context of cell biology as they enable the investigator to illustrate ex vivo experiment results (phagocytosis process for example).

Qualities of abstracts:

  • Understood without reading the full paper. Should contain no abbreviations; if abbreviations are used, they must be defined. This however removes space for more important information;
  • Contains information consistent with the full report. Conclusions in the abstract must match those given in the full report;
  • Is attractive and contains information needed to decide whether to read the full report.

Introduction

The introduction has 3 main goals: to establish the need and importance of your research, to indicate how you have filled the knowledge gap in your field and to give your readers a hint of what they will learn when reading your paper. To fulfil these goals, a four-part introduction consisting of a background statement, a problem statement, an activity statement and a forecasting statement is best suited. Poorly defined background information and problem setting are the 2 most common weaknesses encountered in introductions. They stem from the false perception that peer readers know what the issue is and why the study to solve it is necessary. Although not a strict rule, the introduction in clinical science journals should target only references needed to establish the rationale for the study and the research protocol. This differs from more basic science or cell biology journals, for which a longer and more elaborate introduction may be justified because the research at hand consists of several approaches, each requiring background and justification.

The 4-part introduction consists of:

  • A background statement that provides the context and the approach of the research;
  • A problem statement that describes the nature, scope and importance of the problem or the knowledge gap;
  • An activity statement, that details the research question, sets the hypothesis and actions undertaken for the investigation;
  • A forecasting statement telling the readers what they will find when reading your article [ 14 ].

Methods section

This section may be named “Materials and Methods”, “Experimental section” or “Patients and Methods”, depending upon the type of journal. Its purpose is to provide your readers with enough information on the methods used in your research for them to judge their adequacy. Although clinical and “basic” research protocols differ, the principles involved in describing the methods share similar features; hence, the breadth of what is being studied and how the study can be performed is common to both. What differ are the specific settings. For example, when a study is conducted on humans, you must provide, up front, assurance that it has received the approval of your Institutional Ethics Review Board (IRB) and that participants have provided full and informed consent. Similarly, when the study involves animals, you must affirm that you have the agreement of your Institutional Animal Care and Use Committee (IACUC). These statements are too often forgotten, and journals (most of them) abiding by the rules of the Committee on Publication Ethics (COPE) and the World Association of Medical Editors (WAME) will require them. Although journals publishing research reports in more fundamental science may not require such assurance, they do, however, also follow strict ethics rules related to scientific misconduct or fraud, such as data fabrication and data falsification. For clinical research papers, you have to provide information on how the participants were selected, identify the possible sources of bias and confounding factors, and describe how they were minimized.

In terms of the measurements, you have to clearly identify the materials used as well as the suppliers with their location. You should also be unambiguous when describing the analytical method. If the method has already been published, give a brief account and refer to the original publication (not a review in which the method is mentioned without a description). If you have modified it, you have to provide a detailed account of the modifications and you have to validate its accuracy, precision and repeatability. Mention the units in which results are reported and, if necessary, include the conversion factors [mass units versus “système international” (S.I.)]. In clinical research, surrogate end-points are often used as biomarkers. Under those circumstances, you must show their validity or refer to a study that has already shown that they are valid.
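As a purely illustrative example of such a conversion factor, not taken from the article itself, the short sketch below converts a serum glucose concentration reported in mass units (mg/dL) into S.I. units (mmol/L) using the molar mass of glucose.

    # Hypothetical example of a mass-unit to S.I.-unit conversion for reporting results:
    # serum glucose in mg/dL converted to mmol/L (molar mass of glucose ≈ 180.16 g/mol).
    GLUCOSE_MOLAR_MASS_G_PER_MOL = 180.16

    def glucose_mg_dl_to_mmol_l(value_mg_dl):
        """Convert a glucose concentration from mg/dL to mmol/L."""
        # mg/dL x 10 = mg/L; dividing by the molar mass (g/mol = mg/mmol) gives mmol/L
        return value_mg_dl * 10.0 / GLUCOSE_MOLAR_MASS_G_PER_MOL

    print(round(glucose_mg_dl_to_mmol_l(100.0), 2))  # 100 mg/dL ≈ 5.55 mmol/L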

In the case of clinical trials, the Methods section should include the study design, the patient selection mode, the interventions, and the types of outcomes.

Statistics are important in assuring the quality of the research project. Hence, you should consult a biostatistician at the time of devising the research protocol and not after having performed the experiments or the clinical trial.

The components of the section on statistics should include:

  • The way the data will be reported (mean, median, centiles for continuous data);
  • Details on participant assignments to the different groups (random allocation, consecutive entry);
  • Statistical comparison tools (parametric or non parametric statistics, paired or unpaired t-tests for normally distributed data and so on);
  • The statistical power calculation used when determining the sample size needed to obtain valid and significant comparisons, together with the α level (a minimal example calculation is given after this list);
  • The statistical software package used in the analysis.
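As an illustration of the sample-size point above, the sketch below uses the statsmodels Python package to estimate the number of participants per group needed for an unpaired two-sample t-test; the effect size, α level and power shown are arbitrary example values, not recommendations.

    # Illustrative a priori sample-size calculation for a two-sample t-test.
    # The effect size, alpha and power below are arbitrary example values.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(
        effect_size=0.5,  # expected standardized difference (Cohen's d)
        alpha=0.05,       # significance (alpha) level
        power=0.80,       # desired statistical power
        ratio=1.0,        # equal group sizes
    )
    print(f"Participants needed per group: {n_per_group:.0f}")  # about 64 per group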

Results section

The main purpose of the results section is to report the data that were collected and their relationship. It should also provide information on the modifications that have taken place because of unforeseen events leading to a modification of the initial protocol (loss of participants, reagent substitution, loss of data).

  • Report results as tables and figures whenever possible, avoid duplication in the text. The text should summarize the findings;
  • Report the data with the appropriate descriptive statistics;
  • Report any unanticipated events that could affect the results;
  • Report a complete account of observations and explanations for missing data (patient lost).

Discussion section

The discussion should set your research in context, reinforce its importance and show how your results have contributed to the further understanding of the problem posed. This should appear in the concluding remarks. The following organization could be helpful.

  • Briefly summarize the main results of your study in one or two paragraphs, and how they support your working hypothesis;
  • Provide an interpretation of your results and show how they logically fit in an overall scheme (biological or clinical);
  • Describe how your results compare with those of other investigators, explain the differences observed;
  • Discuss how your results may lead to a new hypothesis and further experimentation, or how they could enhance the diagnostic procedures.
  • Provide the limitations of your study and steps taken to reduce them. This could be placed in the concluding remarks.

Acknowledgements

The acknowledgements are important as they identify and thank the contributors to the study who do not meet the criteria for co-authorship. They also include the recognition of the granting agency; in this case the grant award number and source are usually included.

Declaration of competing interests

Competing interests arise when the author has more than one role that may lead to a situation where there is a conflict of interest. This is observed when the investigator has a simultaneous industrial consulting and academic position. In that case the results may not be agreeable to the industrial sponsor, who may impose a veto on publication or strongly suggest modifications to the conclusions. The investigator must clear this issue before starting the contracted research. In addition, the investigator may own shares or stock in the company whose product forms the basis of the study. Such conflicts of interest must be declared so that they are apparent to the readers.

Acknowledgments

The authors thank Thomas A Lang, for his advice in the preparation of this manuscript.

  • Open access
  • Published: 21 May 2024

The bright side of sports: a systematic review on well-being, positive emotions and performance

  • David Peris-Delcampo 1 ,
  • Antonio Núñez 2 ,
  • Paula Ortiz-Marholz 3 ,
  • Aurelio Olmedilla 4 ,
  • Enrique Cantón 1 ,
  • Javier Ponseti 2 &
  • Alejandro Garcia-Mas 2  

BMC Psychology volume  12 , Article number:  284 ( 2024 ) Cite this article


The objective of this study is to conduct a systematic review regarding the relationship between positive psychological factors, such as psychological well-being and pleasant emotions, and sports performance.

This study, carried out through a systematic review using PRISMA guidelines considering the Web of Science, PsycINFO, PubMed and SPORT Discus databases, seeks to highlight the relationship between other more ‘positive’ factors, such as well-being, positive emotions and sports performance.

The keywords will be decided by a Delphi Method in two rounds with sport psychology experts.

Participants

There are no participants in the present research.

The main exclusion criteria were: non-sport themes, samples younger or older than the 20–65 age range, qualitative or other methodological designs, COVID-related studies, and journals not devoted exclusively to psychology.

Main outcome measures

We obtained an initial sample of 238 papers, which was ultimately reduced to a final sample of 11 papers.

The results obtained are intended to be a representation of the ‘bright side’ of sports practice, and as a complement or mediator of the negative variables that have an impact on athletes’ and coaches’ performance.

Conclusions

There is clear recognition that acting on intrinsic motivation continues to be the best and most effective way to motivate oneself to obtain the highest levels of performance, a good perception of competence and a source of personal satisfaction.


Introduction

In recent decades, research in the psychology of sport and physical exercise has focused on the analysis of psychological variables that could have a disturbing, unfavourable or detrimental role, including emotions that are considered ‘negative’, such as anxiety/stress, sadness or anger, concentrating on their unfavourable relationship with sports performance [ 1 , 2 , 3 , 4 ], sports injuries [ 5 , 6 , 7 ] or, more generally, damage to the athlete’s health [ 8 , 9 , 10 ]. The study of ‘positive’ emotions such as happiness or, more broadly, psychological well-being, has been postponed at this time, although in recent years this has seen an increase that reveals a field of study of great interest to researchers and professionals [ 11 , 12 , 13 ] including physiological, psychological, moral and social beneficial effects of the physical activity in comic book heroes such as Tintin, a team leader, which can serve as a model for promoting healthy lifestyles, or seeking ‘eternal youth’ [ 14 ].

Emotions in relation to their effects on sports practice and performance rarely go in one direction, being either negative or positive—generally positive and negative emotions do not act alone [ 15 ]. Athletes experience different emotions simultaneously, even if they are in opposition and especially if they are of mild or moderate intensity [ 16 ]. The athlete can feel satisfied and happy and at the same time perceive a high level of stress or anxiety before a specific test or competition. Some studies [ 17 ] have shown how sports participation and the perceived value of elite sports positively affect the subjective well-being of the athlete. This also seems to be the case in non-elite sports practice. The review by Mansfield et al. [ 18 ] showed that the published literature suggests that practising sports and dance, in a group or supported by peers, can improve the subjective well-being of the participants, and also identifies negative feelings towards competence and ability, although the quantity and quality of the evidence published is low, requiring better designed studies. All these investigations are also supported by the development of the concept of eudaimonic well-being [ 19 ], which is linked to the development of intrinsic motivation, not only in its aspect of enjoyment but also in its relationship with the perception of competition and overcoming and achieving goals, even if this is accompanied by other unpleasant hedonic emotions or even physical discomfort. Shortly after a person has practised sports, he will remember those feelings of exhaustion and possibly stiffness, linked to feelings of satisfaction and even enjoyment.

Furthermore, the mediating role of parents, coaches and other psychosocial agents can be significant. In this sense, Lemelin et al. [ 20 ], with the aim of investigating the role of autonomy support from parents and coaches in the prediction of well-being and performance of athletes, found that autonomy support from parents and coaches has positive relationships with the well-being of the athlete, but that only coach autonomy support is associated with sports performance. This research suggests that parents and coaches play important but distinct roles in athlete well-being and that coach autonomy support could help athletes achieve high levels of performance.

On the other hand, an analysis of emotions in the sociocultural environment in which they arise and gain meaning is always interesting, both from an individual perspective and from a sports team perspective. Adler et al. [ 21 ] in a study with military teams showed that teams with a strong emotional culture of optimism were better positioned to recover from poor performance, suggesting that organisations that promote an optimistic culture develop more resilient teams. Pekrun et al. [ 22 ] observed with mathematics students that individual success boosts emotional well-being, while placing people in high-performance groups can undermine it, which is of great interest in investigating the effectiveness and adjustment of the individual in sports teams.

There is still little scientific literature in the field of positive emotions and their relationship with sports practice and athlete performance, although this approach has long had its clear supporters [ 23 , 24 ]. It is comforting to observe the significant increase in studies in this field, since some authors (e.g., [ 25 , 26 ]) point out the need to overcome certain methodological and conceptual problems, paying special attention to the development of specific instruments for the evaluation of well-being in the sports field and to evaluation methodologies.

As McCarthy [ 15 ] indicates, positive emotions (hedonically pleasant) can be the catalysts for excellence in sport and deserve a space in our research and in professional intervention to raise the level of athletes’ performance. From a holistic perspective, positive emotions are permanently linked to psychological well-being and research in this field is necessary: firstly because of the leading role they play in human behaviour, cognition and affection, and secondly, because after a few years of international uncertainty due to the COVID-19 pandemic and wars, it seems ‘healthy and intelligent’ to encourage positive emotions for our athletes. An additional reason is that they are known to improve motivational processes, reducing abandonment and negative emotional costs [ 11 ]. In this vein, concepts such as emotional intelligence make sense and can help to identify and properly manage emotions in the sports field and determine their relationship with performance [ 27 ] that facilitates the inclusion of emotional training programmes based on the ‘bright side’ of sports practice [ 28 ].

Based on all of the above, one might wonder how these positive emotions are related to a given event and what role each one of them plays in the athlete’s performance. Do they directly affect performance, or do they affect other psychological variables such as concentration, motivation and self-efficacy? Do they favour the availability and competent performance of the athlete in a competition? How can they be regulated, controlled for their own benefit? How can other psychosocial agents, such as parents or coaches, help to increase the well-being of their athletes?

This work aims to enhance the leading role, not the secondary, of the ‘good and pleasant side’ of sports practice, either with its own entity, or as a complement or mediator of the negative variables that have an impact on the performance of athletes and coaches. Therefore, the objective of this study is to conduct a systematic review regarding the relationship between positive psychological factors, such as psychological well-being and pleasant emotions, and sports performance. For this, the methodological criteria that constitute the systematic review procedure will be followed.

Materials and methods

This study was carried out through a systematic review using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, considering the Web of Science (WoS) and PsycINFO databases. These two databases were selected using the Delphi method [ 29 ]. It does not include a meta-analysis because there is great dispersion in the data owing to the different methodologies used [ 30 ].

The keywords will be decided by the Delphi Method in two rounds with sport psychology experts. The results obtained are intended to be a representation of the ‘bright side’ of sports practice, and as a complement or mediator of the negative variables that have an impact on athletes’ and coaches’ performance.

It was determined that the main construct was to be psychological well-being, and that it was to be paired with optimism, healthy practice, realisation, positive mood, and performance and sport. The search period was limited to papers published between 2000 and 2023, and the final list of papers was obtained on February 13, 2023. The search was conducted in two languages (English and Spanish) and was limited to psychological journals and, specifically, to articles in which the sample was composed of athletes.

Each word was searched for in each database, followed by searches involving combinations of the same in pairs and then in trios. In relation to the results obtained, it was decided that the best approach was to group the words connected to positive psychology on the one hand, and on the other, those related to self-realisation/performance/health. In this way, the search used parentheses to group the positive psychology words (psychological well-being; or optimism; or positive mood) with the Boolean ‘or’ between them, and, on the other hand, grouped those related to performance/health/realisation (realisation; or healthy practice; or performance), separating the two sets of parentheses by the Boolean ‘and’. To further filter the search, a keyword included in the title and in the inclusion criteria was added, namely ‘sport’, joined with the Boolean ‘and’. In this way, the search returned results that combined at least one of the three positive psychology terms and one of the other three.
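The grouping described above can be written out explicitly. The short sketch below is an illustration, not the authors' actual code: it assembles the Boolean string so that at least one positive-psychology term, at least one performance/health term, and the keyword 'sport' must all appear; the exact syntax and field tags differ between Web of Science, PsycINFO, PubMed and SPORTDiscus.

    # Illustrative construction of the Boolean search string described above.
    # Exact syntax and field tags vary by database (WoS, PsycINFO, PubMed, SPORTDiscus).
    positive_terms = ["psychological well-being", "optimism", "positive mood"]
    outcome_terms = ["realisation", "healthy practice", "performance"]

    positive_block = "(" + " OR ".join(f'"{t}"' for t in positive_terms) + ")"
    outcome_block = "(" + " OR ".join(f'"{t}"' for t in outcome_terms) + ")"

    query = f"{positive_block} AND {outcome_block} AND sport"
    print(query)
    # ("psychological well-being" OR "optimism" OR "positive mood") AND ("realisation" OR "healthy practice" OR "performance") AND sport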

Results (first phase)

The mentioned keywords were cross-matched, obtaining combinations with a sufficient number of papers. In the first research phase, the total number of papers obtained was 238. Screening was then carried out in 4 well-differentiated phases, which are summarised in Fig.  1 . These phases helped to reduce the original sample to a more accurate one.

Figure 1. Phases of the selection process for the final sample. Four phases were carried out to select the final sample of articles. The first phase allowed the elimination of duplicates. In the second stage, those that, by title or abstract, did not fit the objectives of the article were eliminated. Previously selected exclusion criteria were applied to the remaining sample. Thus, in phase 4, the final sample of 11 selected articles was obtained.

Results (second phase)

The first screening examined the title, and the abstract if needed, excluding papers that were duplicates, that contained errors or formal problems, that had a low N, or that were case studies. This screening allowed the initial sample to be reduced to a more accurate one of 109 selected papers.

Results (third phase)

This was followed by the second screening, which examined the abstracts and full texts, excluding where necessary papers related to non-sports themes, samples that were too old or too young for our interests, papers using qualitative methodologies, articles related to the COVID period, and papers published in non-psychological journals. Furthermore, papers related to ‘negative psychological variables’ were also excluded.

Results (fourth phase)

At the end of this second screening the remaining number of papers was 11. In this final phase we tried to organise the main characteristics and their main conclusions/results in a comprehensible list (Table  1 ). Moreover, in order to enrich our sample of papers, we decided to include some articles from other sources, mainly those presented in the introduction to sustain the conceptual framework of the concept ‘bright side’ of sports.

Discussion

The usual position of the researcher into psychological variables that affect sports performance is to look for relationships between ‘negative’ variables, first in the form of basic psychological processes or of distorting cognitive-behavioural variables, unpleasant or evaluable as deficiencies or problems, within a psychology for the ‘risk’ society, which emphasises the rehabilitation that stems from overcoming personal and social pathologies [ 31 ] and, lately, the affectation of the athlete’s mental health [ 32 ]. This fact seems to be true in many cases and situations, and to openly contradict the proclaimed psychological benefits of practising sports (among others: Cantón [ 33 ]; Froment and González [ 34 ]; Jürgens [ 35 ]).

However, it is possible to adopt another approach, focused on 'positive' variables, also in relation to the athlete's performance. This has been the main objective of this systematic review of the existing literature. Far from being a novel approach, although a minority one, it fits well with the definition of our area of knowledge within the broad field of health, as has been pointed out for some time [36, 37].

After carrying out the aforementioned systematic review, a relatively low number of articles was identified that met the conditions established by the experts, in accordance with the PRISMA method [37, 38, 39, 40], regarding databases, keywords, and inclusion and exclusion criteria. These precautions were taken to obtain the most accurate results possible and thus to guarantee the quality of the conclusions.

The first clear result that stands out is the great difficulty of finding articles in which sports 'performance' is treated as a well-defined study variable, adapted to the situation and the athletes studied. In fact, among the 11 papers obtained, only 3 associate one or several positive psychological variables with performance (which is evaluated in very different ways, combining objective measures with subjective ones). This result is not surprising, since previous systematic reviews (e.g. Núñez et al. [41]) have found this relationship to be very weak and nuanced by the role of different mediating factors, such as previous sports experience or competitive level (e.g. Rascado et al. [42]; Reche, Cepero and Rojas [43]), despite the belief, even in professional and academic circles, that there is a strong relationship between negative variables and poor performance, and vice versa with respect to the positive variables.

Regarding the positive variables themselves, even with these restrictions in the inclusion and exclusion criteria and the filters applied to the initial findings, a true 'galaxy' of variables is obtained, belonging to different categories and levels of psychological complexity.

A preliminary consideration concerns the current paradigm of sport psychology. Some recent works have already announced a swing of the pendulum in the objects of study of sport psychology, returning to the study of traits and dispositions, and even to the personality of athletes [43, 44, 45, 46], and our results fully corroborate this trend. Alongside the five variables present in the studies selected at the end of the systematic review, a total of three traits/dispositions were found, which were also the most repeated (optimism, present in four articles; mental toughness, present in three; and perfectionism), as representative concepts of this field of psychology, which, as already indicated, is now significantly represented in research in this area [46, 47, 48, 49, 50, 51, 52]. In short, the psychological variables that appear in the selected articles are: psychological well-being (PWB) [53]; self-compassion, which has recently gained considerable relevance with respect to the positive attributional resolution of personal behaviours [54]; satisfaction with life (the balance between sports practice, its results, and life and personal fulfilment) [55]; the existence of approach-achievement goals [56]; and perceived social support [57]. This last concept cuts across several theoretical frameworks, such as the Sport Commitment model [58].

The most relevant concept, both quantitatively and qualitatively (supported by the fact that it is found in combination with different variables and situations), is not a basic psychological process but a high-level cognitive construct: psychological well-being, in its eudaimonic aspect, first defined for the general population by Carol Ryff [59, 60] and introduced into sport at the beginning of this century (e.g., Romero, Brustad and García-Mas [13]; Romero, García-Mas and Brustad [61]). It is important to note that this concept understands psychological well-being as multifactorial (including autonomy, control of the environment in which the activity takes place, social relationships, etc.), meaning personal fulfilment through a given activity and the achievement of, or progress towards, one's own goals and objectives, without any direct relationship with simpler concepts such as vitality or fun. PWB appears in five of the selected studies and is related to several of the other variables/traits.

The most relevant result regarding this variable is its link with motivational aspects, as a central axis that relates different concepts and hence connects to sports performance. Performance, understood as a goal of constant improvement, requires resistance, perseverance, management of errors and great confidence that achievements can be attained; that is, it is associated with ideas of optimism, reflected in expectations of effectiveness.

If we examine the relationships in more detail, we can first consider the relationship with the 'way of being', understood as personality traits or as behavioural tendencies, depending on whether more or less emphasis is placed on their potential for change and learning. In these cases, well-being derives from satisfaction with progress towards the desired goal, for which resistance (mental toughness) and confidence (optimism) are needed. When, in addition, the search for improvement is constant and aims for excellence, its relationship with perfectionism becomes clear, although this is a factor that should be explored further owing to its potential negative effects, at least in the long term.

The relationship between well-being and satisfaction with life is almost tautological, in the precise sense that what produces well-being is the perception of a positive balance between effort (or the perception of control, in stricter terminology) and its results (or the effectiveness of that control). This direct link is especially important when assessing achievement in personally relevant activities; in the case of the subjects evaluated in these papers, athletes of a certain level of performance, such achievement is likely to be a more valued objective than it would be in the general population. Precisely because of the value that performance holds for athletes of this level, we can also understand how well-being is linked to self-compassion. As a psychological concept, self-compassion is very close to self-esteem, but with a lower 'demand' or a greater 'generosity' when we encounter failures, mistakes or even defeats along the way, which offers greater protection from the risk of abandonment and therefore reinforces persistence, a key element of any successful sports career [62].

Well-being also has a very direct relationship with approach-achievement goals, since one of the central aspects characterising eudaimonic well-being, and differentiating it from hedonic well-being, is precisely its relationship with self-determined and persistent progress towards goals or achievements that have incentive value for the person, as sports performance evidently does [63].

Finally, it is interesting to note a facet linked to the need for human affiliation: feeling part of a group or collective in which we can recognise others, and recognise ourselves, in the achievements obtained and in their social reinforcement, as indicated by the relationship with perceived social support. This construct is very labile; in fact, it is common to find results in which support and pressure are hardly differentiated, for example in the case of the parents of athletes and/or their coaches [64]. However, its relevance within this set of psychological variables and traits is evidence of its possible conceptual validity.

Analysing the results obtained, the first conclusion is that in no case does an integrated model based solely on 'positive' variables or traits emerge, since some 'negative' ones (anxiety, stress, irrational thoughts) appear and affect the former.

The second conclusion is that, among the positive elements, the variable of coping strategies (their use, or the perception of their effectiveness) and the traits of optimism, perfectionism and self-compassion prevail, since mental toughness and psychological well-being, which also appear important but are of a more complex nature, are themselves shaped by the aforementioned traits.

Finally, it must be taken into account that the generation of positive elements, such as resilience, and the learning of coping strategies are directly affected by the educational style received and by the culture in which the athlete is immersed. Thus, the applied potential of these findings is great, but it must be calibrated according to the educational and/or cultural features of the specific setting.

Limitations

The limitations of this study are those common to systematic review methodology using the PRISMA system: the selection of keywords (and the logical connections used in the search), the databases, and the inclusion/exclusion criteria bias the work as a whole and therefore constrain the generalisation of the results obtained.

Likewise, given the above and the results obtained, the conclusions must be stated with the greatest possible concreteness and simplicity. Although we have tried to reduce these limitations as far as possible through the use of experts in the first steps of the method, they remain and must be taken into account when using the results.

Future developments

Undoubtedly, further research is needed to elucidate more precisely the role of well-being as proposed here, from a bidirectional perspective: as a motivational element that pushes towards improvement and the achievement of goals, and as a product or effect of the person's self-determined and competent behaviour. This role should be examined in relation to different factors, such as the perfectionism indicated here or the potential interference of material and social rewards linked to sports performance, which could act as a risk factor whereby achievements, far from being a source of well-being and satisfaction, become an insatiable demand in the search for ever more frequent rewards.

From a practical standpoint, an empirical investigation should be conducted to test whether these relationships hold statistically, either in the classical (correlational) or in the probabilistic (Bayesian network) framework.
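As a sketch of what the classical route might look like, the snippet below computes Pearson correlations on simulated data; the variable names, the generating process and the effect sizes are assumptions for demonstration only, not findings of this review. The probabilistic route would instead learn a Bayesian network over the same variables with a dedicated structure-learning library and compare the recovered edges with the correlational pattern.

```python
# Illustrative correlational check on simulated data. The variables and the
# assumed links are placeholders, not results reported in this review.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200

optimism = rng.normal(0, 1, n)
well_being = 0.5 * optimism + rng.normal(0, 1, n)      # assumed positive link
performance = 0.3 * well_being + rng.normal(0, 1, n)   # assumed positive link

for name, x in [("optimism", optimism), ("well-being", well_being)]:
    r, p = stats.pearsonr(x, performance)
    print(f"performance ~ {name}: r = {r:.2f}, p = {p:.3f}")
```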

The results obtained in this study, produced exclusively through desk research, oblige the authors to develop subsequent empirical and/or experimental studies in two directions: (1) what interrelationships exist between the so-called 'positive' and 'negative' psychological variables and traits in sport, and in what direction does each of them operate; and (2) from a global, motivational point of view, can currently accepted theoretical frameworks, such as self-determination theory (SDT), easily accommodate this duality, which is becoming increasingly evident in applied work?

Finally, these studies should lead to proposals applied to the two fields that have appeared to be relevant: educational and cultural.

Application/transfer of results

A clear application of these results lies in guiding the training of sports and physical exercise practitioners, directing it towards strategies for assessing achievements, improvements and failure management that remain aligned with the enhancement of eudaimonic, intrinsic and self-determined well-being, which improves the quality of their learning and their results and also favours personal health and social relationships.

Data availability

There are no further external data.

Cantón E, Checa I. Los estados emocionales y su relación con las atribuciones y las expectativas de autoeficacia en El deporte. Revista De Psicología Del Deporte. 2012;21(1):171–6.


Cantón E, Checa I, Espejo B. Evidencias de validez convergente y test-criterio en la aplicación del Instrumento de Evaluación de Emociones en la Competición Deportiva. 2015;24(2):311–3.

Olmedilla A, Martins B, Ponseti-Verdaguer FJ, Ruiz-Barquín R, García-Mas A. It is not just stress: a Bayesian approach to the shape of the negative psychological features associated with sport injuries. Healthcare. 2022;10(2):236. https://doi.org/10.3390/healthcare10020236.


Ong NCH, Chua JHE. Effects of psychological interventions on competitive anxiety in sport: a meta-analysis. Psychol Sport Exerc. 2021;52:101836. https://doi.org/10.1016/j.psychsport.2020.101836.

Candel MJ, Mompeán R, Olmedilla A, Giménez-Egido JM. Pensamiento catastrofista y evolución del estado de ánimo en futbolistas lesionados (Catastrophic thinking and evolution of mood state in injured football players). Retos. 2023;47:710–9.

Li C, Ivarsson A, Lam LT, Sun J. Basic psychological needs satisfaction and frustration, stress, and sports injury among university athletes: a four-wave prospective survey. Front Psychol. 2019;10:665. https://doi.org/10.3389/fpsyg.2019.00665.

Wiese-Bjornstal DM. Psychological predictors and consequences of injuries in sport settings. In: Anshel MH, Petrie TA, Steinfelt JA, editors. APA handbook of sport and exercise psychology, volume 1: Sport psychology. Washington: American Psychological Association; 2019. pp. 699–725. https://doi.org/10.1037/0000123035.


Godoy PS, Redondo AB, Olmedilla A. Indicadores de salud mental en jugadoras de fútbol en función de la edad. J Univers Mov Perform. 2022;21(5).

Golding L, Gillingham RG, Perera NKP. The prevalence of depressive symptoms in high-performance athletes: a systematic review. Physician Sportsmed. 2020;48(3):247–58. https://doi.org/10.1080/00913847.2020.1713708 .

Xanthopoulos MS, Benton T, Lewis J, Case JA, Master CL. Mental Health in the Young Athlete. Curr Psychiatry Rep. 2020;22(11):1–15. https://doi.org/10.1007/s11920-020-01185-w .

Cantón E, Checa I, Vellisca-González MY. Bienestar psicológico y ansiedad competitiva: el papel de las estrategias de afrontamiento / Competitive anxiety and psychological well-being: the role of coping strategies. Revista Costarricense de Psicología. 2015;34(2):71–8.

Hahn E. Emotions in sports. In: Hackfort D, Spielberger CD, editors. Anxiety in sports. Taylor & Francis; 2021. pp. 153–62. ISBN 9781315781594.

Carrasco A, Brustad R, García-Mas A. Bienestar psicológico y su uso en la psicología del ejercicio, la actividad física y el deporte. Revista Iberoamericana de Psicología del Ejercicio y el Deporte. 2007;2(2):31–52.

García-Mas A, Olmedilla A, Laffage-Cosnier S, Cruz J, Descamps Y, Vivier C. Forever young! Tintin's adventures as an example of physical activity and sport. Sustainability. 2021;13(4):2349. https://doi.org/10.3390/su13042349.

McCarthy P. Positive emotion in sport performance: current status and future directions. Int Rev Sport Exerc Psychol. 2011;4(1):50–69. https://doi.org/10.1080/1750984X.2011.560955.

Cerin E. Predictors of competitive anxiety direction in male Tae Kwon do practitioners: a multilevel mixed idiographic/nomothetic interactional approach. Psychol Sport Exerc. 2004;5(4):497–516. https://doi.org/10.1016/S1469-0292(03)00041-4 .

Silva A, Monteiro D, Sobreiro P. Effects of sports participation and the perceived value of elite sport on subjective well-being. Sport Soc. 2020;23(7):1202–16. https://doi.org/10.1080/17430437.2019.1613376 .

Mansfield L, Kay T, Meads C, Grigsby-Duffy L, Lane J, John A, et al. Sport and dance interventions for healthy young people (15–24 years) to promote subjective well-being: a systematic review. BMJ Open. 2018;8(7):e020959. https://doi.org/10.1136/bmjopen-2017-020959.

Ryff CD. Happiness is everything, or is it? Explorations on the meaning of psychological well-being. J Personal Soc Psychol. 1989;57(6):1069–81. https://doi.org/10.1037/0022-3514.57.6.1069 .

Lemelin E, Verner-Filion J, Carpentier J, Carbonneau N, Mageau G. Autonomy support in sport contexts: the role of parents and coaches in the promotion of athlete well-being and performance. Sport Exerc Perform Psychol. 2022;11(3):305–19. https://doi.org/10.1037/spy0000287 .

Adler AB, Bliese PD, Barsade SG, Sowden WJ. Hitting the mark: the influence of emotional culture on resilient performance. J Appl Psychol. 2022;107(2):319–27. https://doi.org/10.1037/apl0000897 .


Pekrun R, Murayama K, Marsh HW, Goetz T, Frenzel AC. Happy fish in little ponds: testing a reference group model of achievement and emotion. J Personal Soc Psychol. 2019;117(1):166–85. https://doi.org/10.1037/pspp0000230 .

Seligman M. Authentic happiness. New York: Free Press/Simon and Schuster; 2002.

Seligman M. Florecer: la nueva psicología positiva y la búsqueda del bienestar. Editorial Océano; 2016.

Giles S, Fletcher D, Arnold R, Ashfield A, Harrison J. Measuring well-being in Sport performers: where are we now and how do we Progress? Sports Med. 2020;50(7):1255–70. https://doi.org/10.1007/s40279-020-01274-z .


Piñeiro-Cossio J, Fernández-Martínez A, Nuviala A, Pérez-Ordás R. Psychological wellbeing in Physical Education and School sports: a systematic review. Int J Environ Res Public Health. 2021;18(3):864. https://doi.org/10.3390/ijerph18030864 .

Gómez-García L, Olmedilla-Zafra A, Peris-Delcampo D. Inteligencia emocional y características psicológicas relevantes en mujeres futbolistas profesionales. Revista De Psicología Aplicada Al Deporte Y El Ejercicio Físico. 2023;15(72). https://doi.org/10.5093/rpadef2022a9 .

Balk YA, Englert C. Recovery self-regulation in sport: Theory, research, and practice. International Journal of Sports Science and Coaching. SAGE Publications Inc.; 2020. https://doi.org/10.1177/1747954119897528 .

King PR Jr, Beehler GP, Donnelly K, Funderburk JS, Wray LO. A practical guide to applying the Delphi Technique in Mental Health Treatment Adaptation: the example of enhanced problem-solving training (E-PST). Prof Psychol Res Pract. 2021;52(4):376–86. https://doi.org/10.1037/pro0000371 .

Glass G. Primary, secondary, and meta-analysis of research. Educ Res. 1976;5(10):3–8. https://doi.org/10.3102/0013189X005010003.

Gillham J, Seligman M. Footsteps on the road to a positive psychology. Behav Res Ther. 1999;37:163–73. https://doi.org/10.1016/s0005-7967(99)00055-8.

Castillo J. Salud mental en El Deporte individual: importancia de estrategias de afrontamiento eficaces. Fundación Universitaria Católica Lumen Gentium; 2021.

Cantón E. Deporte, salud, bienestar y calidad de vida. Cuadernos de Psicología del Deporte. 2001;1(1):27–38.

Froment F, García-González A. Beneficios de la actividad física sobre la autoestima y la calidad de vida de personas mayores (Benefits of physical activity on self-esteem and quality of life of older people). Retos. 2017;33:3–9. https://doi.org/10.47197/retos.v0i33.50969.

Jürgens I. Práctica deportiva y percepción de calidad de vida. Revista Internacional de Medicina y Ciencias de la Actividad Física y del Deporte. 2006;6(22):62–74.

Carpintero H. Psicología, comportamiento y salud: el lugar de la psicología en los campos de conocimiento. Infocop. 2004;Núm. Extr.:93–101.

Page M, McKenzie J, Bossuyt P, Boutron I, Hoffmann T, Mulrow C, et al. Declaración PRISMA 2020: una guía actualizada para la publicación de revisiones sistemáticas. Rev Esp Cardiol. 2021;74(9):790–9.

Royo M, Biblio-Guías. Revisiones sistemáticas: PRISMA 2020: guías oficiales para informar (redactar) una revisión sistemática. Universidad De Navarra. 2020. https://doi.org/10.1016/j.recesp.2021.06.016 .

Urrútia G, Bonfill X. PRISMA declaration: a proposal to improve the publication of systematic reviews and meta-analyses. Medicina Clínica. 2010;135(11):507–11. https://doi.org/10.1016/j.medcli.2010.01.015 .

Núñez A, Ponseti FX, Sesé A, Garcia-Mas A. Anxiety and perceived performance in athletes and musicians: revisiting Martens. Revista de Psicología del Deporte/Journal of Sport Psychology. 2020;29(1):21–8.

Rascado S, Rial-Boubeta A, Folgar M, Fernández D. Niveles de rendimiento y factores psicológicos en deportistas en formación. Reflexiones para entender la exigencia psicológica del alto rendimiento. Revista Iberoamericana de Psicología del Ejercicio y el Deporte. 2014;9(2):373–92.

Reche-García C, Cepero M, Rojas F. Efecto de la experiencia deportiva en las habilidades psicológicas de esgrimistas del ranking nacional español. Cuadernos de Psicología del Deporte. 2010;10(2):33–42.

Kang C, Bennett G, Welty-Peachey J. Five dimensions of brand personality traits in sport. Sport Manage Rev. 2016;19(4):441–53. https://doi.org/10.1016/j.smr.2016.01.004 .

De Vries R. The main dimensions of sport personality traits: a lexical approach. Front Psychol. 2020;11:2211. https://doi.org/10.3389/fpsyg.2020.02211.

Laborde S, Allen M, Katschak K, Mattonet K, Lachner N. Trait personality in sport and exercise psychology: a mapping review and research agenda. Int J Sport Exerc Psychol. 2020;18(6):701–16. https://doi.org/10.1080/1612197X.2019.1570536 .

Stamp E, Crust L, Swann C, Perry J, Clough P, Marchant D. Relationships between mental toughness and psychological wellbeing in undergraduate students. Pers Indiv Differ. 2015;75:170–4. https://doi.org/10.1016/j.paid.2014.11.038 .

Nicholls A, Polman R, Levy A, Backhouse S. Mental toughness, optimism, pessimism, and coping among athletes. Personality Individ Differences. 2008;44(5):1182–92. https://doi.org/10.1016/j.paid.2007.11.011 .

Weissensteiner JR, Abernethy B, Farrow D, Gross J. Distinguishing psychological characteristics of expert cricket batsmen. J Sci Med Sport. 2012;15(1):74–9. https://doi.org/10.1016/j.jsams.2011.07.003 .

García-Naveira A, Díaz-Morales J. Relationship between optimism/dispositional pessimism, performance and age in competitive soccer players. Revista Iberoamericana De Psicología Del Ejercicio Y El Deporte. 2010;5(1):45–59.

Reche C, Gómez-Díaz M, Martínez-Rodríguez A, Tutte V. Optimism as contribution to sports resilience. Revista Iberoamericana De Psicología Del Ejercicio Y El Deporte. 2018;13(1):131–6.

Lizmore MR, Dunn JGH, Causgrove Dunn J. Perfectionistic strivings, perfectionistic concerns, and reactions to poor personal performances among intercollegiate athletes. Psychol Sport Exerc. 2017;33:75–84. https://doi.org/10.1016/j.psychsport.2017.07.010 .

Mansell P. Stress mindset in athletes: investigating the relationships between beliefs, challenge and threat with psychological wellbeing. Psychol Sport Exerc. 2021;57:102020. https://doi.org/10.1016/j.psychsport.2021.102020 .

Reis N, Kowalski K, Mosewich A, Ferguson L. Exploring Self-Compassion and versions of masculinity in men athletes. J Sport Exerc Psychol. 2019;41(6):368–79. https://doi.org/10.1123/jsep.2019-0061 .

Cantón E, Checa I, Budzynska N. Coping, optimism and satisfaction with life among Spanish and Polish football players: a preliminary study. Revista de Psicología del Deporte. 2013;22(2):337–43.

Mulvenna M, Adie J, Sage L, Wilson N, Howat D. Approach-achievement goals and motivational context on psycho-physiological functioning and performance among novice basketball players. Psychol Sport Exerc. 2020;51:101714. https://doi.org/10.1016/j.psychsport.2020.101714 .

Malinauskas R, Malinauskiene V. The mediation effect of perceived social support and perceived stress on the relationship between emotional intelligence and psychological well-being in male athletes. J Hum Kinet. 2018;65(1):291–303. https://doi.org/10.2478/hukin-2018-0017.

Scanlan T, Carpenter PJ, Simons J, Schmidt G, Keeler B. An introduction to the Sport Commitment Model. J Sport Exerc Psychol. 1993;15(1):1–15. https://doi.org/10.1123/jsep.15.1.1.

Ryff CD. Eudaimonic well-being, inequality, and health: recent findings and future directions. Int Rev Econ. 2017;64(2):159–78. https://doi.org/10.1007/s12232-017-0277-4 .

Ryff CD, Singer B. The contours of positive human health. Psychol Inq. 1998;9(1):1–28. https://doi.org/10.1207/s15327965pli0901_1 .

Romero-Carrasco A, García-Mas A, Brustad RJ. Estado del arte, y perspectiva actual del concepto de bienestar psicológico en psicología del deporte. Revista Latinoam De Psicología. 2009;41(2):335–47.

James IA, Medea B, Harding M, Glover D, Carraça B. The use of self-compassion techniques in elite footballers: mistakes as opportunities to learn. Cogn Behav Therapist. 2022;15:e43. https://doi.org/10.1017/S1754470X22000411 .

Fernández-Río J, Cecchini JA, Méndez-Giménez A, Terrados N, García M. Understanding olympic champions and their achievement goal orientation, dominance and pursuit and motivational regulations: a case study. Psicothema. 2018;30(1):46–52. https://doi.org/10.7334/psicothema2017.302 .

Ortiz-Marholz P, Chirosa LJ, Martín I, Reigal R, García-Mas A. Compromiso Deportivo a través del clima motivacional creado por madre, padre y entrenador en jóvenes futbolistas. J Sport Psychol. 2016;25(2):245–52.

Ortiz-Marholz P, Gómez-López M, Martín I, Reigal R, García-Mas A, Chirosa LJ. Role played by the coach in the adolescent players' commitment. Studia Psychologica. 2016;58(3):184–98. https://doi.org/10.21909/sp.2016.03.716.

Funding

This research received no external funding.

Author information

Authors and affiliations

General Psychology Department, Valencia University, Valencia, 46010, Spain

David Peris-Delcampo & Enrique Cantón

Basic Psychology and Pedagogy Departments, Balearic Islands University, Palma de Mallorca, 07122, Spain

Antonio Núñez, Javier Ponseti & Alejandro Garcia-Mas

Education and Social Sciences Faculty, Andres Bello University, Santiago, 7550000, Chile

Paula Ortiz-Marholz

Personality, Evaluation and Psychological Treatment Deparment, Murcia University, Campus MareNostrum, Murcia, 30100, Spain

Aurelio Olmedilla


Contributions

Conceptualization: AGM, EC and ANP; planning: AO; methodology: ANP, AGM and PO; software: ANP, DP and PO; validation: ANP and PO; formal analysis: DP, PO and ANP; investigation: DP, PO and ANP; resources: DVP and JP; data curation: AO and DP; writing, original draft preparation: ANP, DP and AGM; writing, review and editing: EC and JP; visualization: ANP and PO; supervision: AGM; project administration: DP; funding acquisition: DP and JP. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Antonio Núñez.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Informed consent statement

Consent for publication

Competing interests

The authors declare no conflict of interest.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Peris-Delcampo, D., Núñez, A., Ortiz-Marholz, P. et al. The bright side of sports: a systematic review on well-being, positive emotions and performance. BMC Psychol 12 , 284 (2024). https://doi.org/10.1186/s40359-024-01769-8


Received : 04 October 2023

Accepted : 07 May 2024

Published : 21 May 2024

DOI : https://doi.org/10.1186/s40359-024-01769-8


Keywords

  • Positive emotions
  • Sports performance


    The objective of this study is to conduct a systematic review regarding the relationship between positive psychological factors, such as psychological well-being and pleasant emotions, and sports performance. This study, carried out through a systematic review using PRISMA guidelines considering the Web of Science, PsycINFO, PubMed and SPORT Discus databases, seeks to highlight the ...