As we discussed in last month’s Editorial, Nature Methods welcomes manuscript submissions describing new technology, tool and methodology developments from across a broad swath of basic biology research. While new methods are our bread and butter, we also publish two other types of research papers: Analyses (performance comparisons of previously published tools or methods) and Resources.

Perhaps because “Methods” is in our journal name, researchers sometimes express surprise that we’re interested in resource studies, especially data-oriented Resources. We are! Such studies can have great value for various research communities, and are an important component of researchers’ toolboxes. Resource studies can sometimes be challenging to publish in journals that focus on novel biological results; we want to help champion and provide a home for this important work. For Nature Methods, the most important editorial criteria are the depth and/or breadth of the Resource, its quality, and the potential impact that it will have on a broad community. Exciting biological findings are always a plus, but are not essential.

Our Resource format is flexible. One type of Resource paper describes a physical collection of tools, such as a set of reagents or mouse lines. Another describes a computational platform or database, often with a suite of analytical and visualization functions. A third describes a large dataset or atlas that other researchers will find broadly useful as a reference, as a source of potential new insights to mine, or as a gold standard for benchmarking studies and testing new methods.

Some examples of physical collections of tools that we have published as Resources include genetic tools for understudied marine protists, a cell-based library of displayed glycosaminoglycans and mouse lines for multicolor imaging of neurons in the brain.

A recently published example of a computational Resource announces the Spatial Omics DataBase (SODB), a web-based platform that maintains over 2,400 experiments collected using a wide variety of spatial omics technologies and made freely accessible via a unified data format. SODB also contains interactive data analysis modules and a viewer called SOView. The Outbreak.info genome surveillance Resource is a good example of a Resource reporting a platform that serves as a data aggregator; it tracks millions of combinations of SARS-CoV-2 lineages and individual mutations across 7,000 global locations.

Our data Resources may present large experimentally generated datasets or atlases, such as this MRI-based atlas of human brain development in babies from 2 weeks to 24 months of age, or this quantitative mass spectrometry-based draft of the mouse proteome and phosphoproteome. Resources containing predicted data may serve as useful hypothesis-generating tools, as in the AlphaFill databank, which displays predicted AlphaFold protein structure models with ‘transplanted’ small-molecule ligands from experimental structures.

In this issue we feature two Resource studies. Manubens-Gil et al. present the BigNeuron project, a collection of about 30,000 single-neuron images from different species generated with a variety of light microscopy techniques. The reconstructions from the Gold166 subset of these neurons can serve as a gold standard for testing automated tracing algorithms, as shown in a performance comparison of 35 such tools.

Čapek et al. describe EmbryoNet, a deep convolutional neural network trained on a very large dataset of zebrafish, medaka and stickleback embryogenesis images acquired under normal growth conditions and perturbations. In addition to sharing these rich and readily mineable time-lapse data on the critically important process of fish development, this work shows that deep learning methods trained on these data can perform phenotyping with sensitivity, speed and accuracy that exceed those of human experts.

The Resource papers we publish will sometimes also describe method development or optimization, as exemplified by EmbryoNet, or may include method performance comparisons, as exemplified by BigNeuron. So how do you know whether your paper is a Resource, an Article or an Analysis? Authors need not worry about choosing exactly the right format at the initial submission stage; if our interest is piqued enough to send the paper out for peer review, your editors will determine the most appropriate format. Typically, the choice will depend on which aspect of the paper — the method or the resource — has the greater scientific value and novelty. We are also happy to take presubmission inquiries via our submission system, or you may simply reach out to any one of us at a conference or by e-mail.

It should go almost without saying that the tools or data reported in a Resource must be made available to the broader community in a readily accessible way. Datasets should ideally be hosted in a stable and recognizable repository, along with full metadata, and the paper should describe in detail how the data were acquired. Algorithms and code underlying computational platforms must be provided, ideally in a repository such as GitHub; a DOI should be minted using a tool such as Zenodo or Code Ocean; an open source license should be provided; and full method details should be described in the paper, along with benchmarking data. For sets of experimental tools or constructs, the paper must detail how interested readers may obtain them at reasonable cost. Authors should make plasmids, cell lines and mouse strains available via established repositories when possible — for example, via Addgene, the American Type Culture Collection or the Jackson Laboratory. Transparency and accessibility are important for all research projects, but reusability is paramount when the research output of significance is the resource itself.