
Before accepting a manuscript, we perform dozens of checks. Several of these checks are intended to ensure the accuracy and transparency of the manuscript, as well as its accessibility for a broad audience. Here, we highlight some of the key checks we perform that authors need to address before their manuscript can be accepted for publication (Box 1). Although we may have raised some of these issues earlier in the consideration process (for example, before sending a manuscript out to review or when inviting a revision), this is our final opportunity to make sure everything is in order. This means that we spend considerable time evaluating the final draft of manuscripts we are accepting in principle, and this process usually involves one or more rounds of detailed editorial checks.

Titles and abstracts

We pay close attention to titles and abstracts. Frequently, these are the only parts of a published article that readers will access, and it is essential to get them right.

Titles are occasionally too general, overstate the results, make novelty claims or unwarranted causal claims, or omit the population or populations studied (where relevant). We frequently suggest different titles that more closely reflect the scope and evidence of a paper.

Abstracts are a mini version of the paper and can be challenging to write given length constraints (150 words for most manuscript types). We may recommend modifications to the abstract or suggest a different abstract to ensure that it summarizes past research and the current work accurately, provides all key information (including sample sizes, effect sizes and confidence intervals, where relevant), and does not overstate the significance or implications of the work1.

Novelty claims and exaggerated language

In most disciplines, novelty or priority is impossible to ascertain. As a policy, we ask authors not to make novelty or priority claims in their manuscripts (except in genetics and archaeology) and to avoid qualitative characterizations of their own work2. Before accepting manuscripts for publication, we read them closely to make sure they are free of novelty claims or exaggerated claims of significance, scale or quality. A piece of research is more credible if readers are allowed to form their own opinion of the value of the work.

Causal claims

For quantitative studies, one of the most common issues we encounter is the interpretation of associations as causal evidence3. Descriptive or predictive work is important and valuable, but it should be presented as such. We ask authors to ensure that the language they use is appropriate for the type of evidence they report and to remove or revise wording that implies directionality when no causal evidence is presented.

Inclusive language

Several of the manuscripts we publish report research on human population groups based on social characteristics such as gender, race or ethnicity, religion or socioeconomic status. Inclusive language is important in these manuscripts to avoid stigmatizing the studied groups or perpetuating societal prejudice. We rely on the American Psychological Association’s inclusive language guidelines and guide to bias-free language, as well as internal resources, to recommend changes to language that may be unintentionally harmful to the groups studied.

Statistics

Statistical reporting is the area in which we encounter the greatest number of issues. In a recent editorial, we explained our requirements for statistical reporting4. We perform thorough checks on statistics to ensure compliance with our requirements.

Appropriate interpretation of the results

Occasionally, the interpretation of a study’s findings is not aligned with the statistical evidence. For example, the results of a Bayes factor test with a value close to 1 may be interpreted as providing support for the hypothesis, but would be more appropriately interpreted as largely inconclusive. In other cases, a numerical difference between two conditions may be interpreted as theoretically or practically meaningful even though the null-hypothesis significance test did not yield statistically significant results. Or a statistically significant result may be interpreted as important without any consideration of the effect size, which may be very small.
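The last point is easy to demonstrate. The following Python sketch (purely illustrative; the sample sizes and group difference are invented) simulates two groups whose true difference is negligible: with a very large sample, a t-test can return a small p value even though the standardized effect size is trivially small.

```python
# Illustrative only: with very large samples, a negligible difference can be
# 'statistically significant' while the effect size remains trivial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000                        # hypothetical sample size per group
a = rng.normal(0.00, 1.0, n)       # group 1
b = rng.normal(0.03, 1.0, n)       # group 2; true difference is tiny

t, p = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd

print(f"p = {p:.2g}")              # typically well below 0.05 at this n
print(f"Cohen's d = {cohens_d:.3f}")  # ~0.03: too small to be practically meaningful
```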

We read the results and discussion sections of our manuscripts carefully to make sure that all claims and interpretations are directly supported by the statistical analyses that the authors performed.

Limitations

All studies have limitations. Being transparent about limitations and discussing them appropriately (without minimizing their potential impact) does not detract from the value of the work. On the contrary, it enhances its credibility.

We require the inclusion of a discussion of limitations in all our manuscripts. We may also make suggestions on which limitations need to be discussed or on how they are presented. In cases in which there is a risk of misinterpretation of the work, we suggest that key limitations are also mentioned in the abstract.

Preregistration

We prioritize preregistered research for peer review and ultimate publication5. However, for preregistration to be meaningful, it needs to be followed6.

For manuscripts that report preregistered research, we read the preregistration carefully and compare it to what is reported in the manuscript. We pay close attention to the hypotheses and outcome measures in the preregistration and whether they match those reported in the manuscript. We scrutinize the analysis plan and check whether it was followed in the manuscript.

In cases of deviation from the preregistration, we ask authors to report hypotheses and outcome measures as they were preregistered. If the authors deviated from their analysis plan, we require that authors declare the deviations and report the preregistered analyses alongside the ones reported in the manuscript, unless those analyses proved to be infeasible or wrong.

Figures

Figures are a key part of empirical research manuscripts: they tell the ‘story’ of a manuscript visually and enable a closer examination of the empirical claims in the work. Although we comment on several aspects of the figures (and their legends), there are some frequent requests that we make.

We discourage the use of bar charts for continuous variables. Instead, we recommend visual representation formats that show data distribution clearly: for example, dot plots, box-and-whisker plots or violin plots. If the authors prefer to use bar charts, we ask that they overlay the corresponding data points (as dot plots) whenever possible and always for n ≤ 10.
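As a hypothetical sketch of what such a figure might look like (the group names and values below are invented), a violin plot with the raw observations overlaid shows the distribution in a way a bar chart cannot. Something along these lines, using matplotlib, would address both requests.

```python
# Illustrative sketch: show the distribution and the individual observations,
# rather than bar heights alone. Data and group names are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
groups = {"Condition A": rng.normal(5, 1, 8),
          "Condition B": rng.normal(6, 1, 8)}

fig, ax = plt.subplots()
positions = np.arange(len(groups))
ax.violinplot(list(groups.values()), positions=positions, showmedians=True)

for i, values in enumerate(groups.values()):
    # Overlay the raw data points with slight horizontal jitter (n <= 10 per group).
    jitter = rng.uniform(-0.05, 0.05, values.size)
    ax.scatter(np.full(values.size, i, dtype=float) + jitter, values,
               color="black", zorder=3)

ax.set_xticks(positions)
ax.set_xticklabels(list(groups.keys()))
ax.set_ylabel("Outcome (arbitrary units)")
plt.show()
```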

For figures that present graphs, we ask that graph axes start at 0 and are not altered in scale to exaggerate effects (a discontinuity symbol can be used if necessary, but it should be prominent to avoid misinterpretation of effect size).

Finally, we do not allow ‘data not shown’. We ask authors to either show the data or remove any mention of them.

Ethics

If a manuscript reports the results of research with human participants or nonhuman animals, we ask that the Methods section starts with a statement confirming that the research complies with all relevant ethical regulations and naming the board and institution that approved the study protocol (including the protocol number).

For research with human participants, we also ask that authors confirm that informed consent was obtained from all participants. Information on participant compensation should also be included. Manuscripts reporting research with nonhuman animals should additionally confirm that the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines were followed in reporting the research.

Much of the work that we publish involves the use of secondary, administrative, proprietary or digital data (such as from social media platforms). In most jurisdictions, this work is exempt from ethics review but poses some of the same ethical challenges as research with primary data7. In these cases, we typically have already obtained ethics or legal advice from an appropriate expert in earlier stages of processing a manuscript. However, before acceptance, we make sure that the datasets used comply with considerations of informed consent, privacy and minimization of harm. If the data were obtained on application, we ask for confirmation that the use to which the data were put complies with the content of the application.

Data and code availability

The availability of data is essential for verification and reproduction of the evidence and claims in the manuscript. For this reason, we do not publish papers for which the underlying data cannot be made available or cannot be obtained by a third party under any circumstances.

We encourage the public deposition of datasets if there are no ethical or legal constraints. When the data are only available ‘upon request’, we ask that authors explain in their data availability statement why this is so. We also do not allow the inclusion of the word ‘reasonable’ (‘upon reasonable request’) in such statements. What is considered ‘reasonable’ is subjective and the statement could be used to deny legitimate requests for data sharing. Specific restrictions (for example, the requirement for ethics approval or a data sharing agreement) should be described.

For manuscripts for which code is a central part of the work, we also require that authors provide a code availability statement. The same requirements for access and availability apply to code as to data.

Competing interests and role of the funders

A competing interest is not disqualifying for primary research. However, it is crucial that all competing interests — both financial and nonfinancial — be transparently reported in sufficient detail. We also ask authors to declare what role, if any, the funders had in the conduct of their study, the content of the manuscript or the decision to publish. Funder involvement in the work should be declared as a competing interest.

Clinical trials and systematic reviews

For clinical trials, systematic reviews and meta-analyses, we perform several additional checks. We mandate prospective registration in a clinical trial registry for all clinical trials. We also ask that authors of clinical trials include a CONSORT (Consolidated Standards of Reporting Trials) checklist with their manuscripts. We require a suitable PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist, or equivalent, for systematic reviews and meta-analyses. In these cases, we check compliance and completeness of reporting against the requisite checklist.

For editors, the pre-acceptance process is quite involved and time-consuming. The process is equally demanding for authors, who may need to make several revisions to their manuscripts before they can be accepted. However, the process invariably improves the rigour, transparency and accessibility of the work that we publish.