To the Editor — There is shared support from Riley et al.1 and Wolfe and Kanwisher2 for the principles of increasing the transparency, rigour and reproducibility of science and “[fulfilling] an inherent commitment to study participants and the public”1. These principles are motivating the expansion of the National Institutes of Health (NIH) guidelines to include study registration and outcome reporting for basic science. Prospective study registration (that is, ‘preregistration’) distinguishes confirmatory tests of predictions from discoveries resulting from exploration3. Unintentionally conflating these modes of research increases the publishability of findings at the expense of their credibility4. Furthermore, outcome reporting, whether or not the study is ultimately published, addresses publication bias and selective ignoring of null results5. Widespread preregistration and outcome reporting may address key contributing causes of the so-called reproducibility crisis6, and would increase the interpretability of most empirical research.

The differences between Riley et al. and Wolfe and Kanwisher lie primarily in the definition of ‘clinical trial’ and in the practical challenges of translating the shared principles into practice. To us, the definition of a clinical trial is confusing and is ultimately a distraction from the aims of the guidelines. Registration and outcome reporting are a means of improving the reproducibility of all research. Applying the guidelines to only a subset of NIH-funded research limits the policy’s impact.

The practical concerns about implementation are critically important. The new practices are intended to improve and accelerate knowledge accumulation, not interfere with it. Riley et al. address some implementation challenges, and Wolfe and Kanwisher identify others for which the existing NIH registration processes for clinical trials could impede efficiency in basic science.

Wolfe and Kanwisher note that basic science grant applications often contain many proposed experiments, and what research actually occurs changes because initial experiments inform the direction of subsequent experiments. As a consequence, it would be impractical to require registration of every proposed study. An effective registration process will embrace that dynamism in response to discovery and support iterative registration of multiple experiments. This can be accomplished with registration templates for designs with shared features, and a system for automated, just-in-time registration management that is active throughout the grant period.

Riley et al. and Wolfe and Kanwisher also note that the most useful content of a preregistration can differ by topic. Over-application of a format designed for large-scale clinical trials may be burdensome and fail to meet the policy objectives. An effective registration process will enable customization to maximize the value of preregistration for specific research applications while maintaining alignment of common core content across topics.

The Center for Open Science maintains the free, open-source Open Science Framework (OSF; http://osf.io/), which provides registration services for the basic science communities. The OSF already hosts more than 10,000 registrations and supports multiple registration formats and iterative registration. The OSF is adaptable to Wolfe and Kanwisher’s expressed needs for basic research and could be connected to the NIH’s systems to support efficient workflows for basic scientists and meet NIH reporting requirements without additional burden.

Whatever the implementation solution, we support the NIH’s move towards greater transparency, rigour and reproducibility, and expect that collaboration with the basic science community will yield shared benefit, particularly to the taxpaying public awaiting our solutions, cures and knowledge to advance human health.