
  • Brief Communication

CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation

Abstract

As biomedical imaging datasets expand, deep neural networks are considered vital for image processing, yet community access is still limited by the complexity of setting up computational environments and by the availability of high-performance computing resources. We address these bottlenecks with CDeep3M, a ready-to-use image segmentation solution employing a cloud-based deep convolutional neural network. We benchmark CDeep3M on large and complex two-dimensional and three-dimensional imaging datasets from light, X-ray, and electron microscopy.

Fig. 1: Image segmentation workflow with CDeep3M.
Fig. 2: Multimodal image segmentation using CDeep3M.
Fig. 3: Synaptic vesicle counts on SBEM data using CDeep3M.

Data availability

Example data and a mitochondria pretrained model are included in the GitHub release (https://github.com/CRBS/cdeep3m) and several trained models with example data are released on the Cell Image Library (CIL, http://cellimagelibrary.org/cdeep3m). Further data will be made available from the corresponding authors upon reasonable request.

References

  1. Chen, B. C. et al. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).

  2. Bock, D. D. et al. Network anatomy and in vivo physiology of visual cortical neurons. Nature 471, 177–184 (2011).

  3. Briggman, K. L., Helmstaedter, M. & Denk, W. Wiring specificity in the direction-selectivity circuit of the retina. Nature 471, 183–190 (2011).

  4. Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. Lect. Notes Comput. Sci. 9901, 424–432 (2016).

  5. Quan, T. M., Hildebrand, D. G. C. & Jeong, W.-K. FusionNet: a deep fully residual convolutional neural network for image segmentation in connectomics. Preprint at arXiv, https://arxiv.org/abs/1612.05360v2 (2016).

  6. Badrinarayanan, V., Kendall, A. & Cipolla, R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).

  7. Zeng, T., Wu, B. & Ji, S. DeepEM3D: approaching human-level performance on 3D anisotropic EM image segmentation. Bioinformatics 33, 2555–2562 (2017).

  8. Jia, Y. et al. Caffe: convolutional architecture for fast feature embedding. Proceedings of the 22nd ACM International Conference on Multimedia, New York, NY, 675–678 (ACM) (2014). https://doi.org/10.1145/2647868.2654889

  9. Jinno, S. & Kosaka, T. Stereological estimation of numerical densities of glutamatergic principal neurons in the mouse hippocampus. Hippocampus 20, 829–840 (2010).

  10. Abusaad, I. et al. Stereological estimation of the total number of neurons in the murine hippocampus using the optical disector. J. Comp. Neurol. 408, 560–566 (1999).

  11. Sommer, C., Straehle, C., Kothe, U. & Hamprecht, F. A. Ilastik: Interactive learning and segmentation toolkit. in Proceedings of the IEEE International Symposium on Biomedical Imaging 230–233 (2011). https://doi.org/10.1109/ISBI.2011.5872394

  12. Perez, A. J. et al. A workflow for the automatic segmentation of organelles in electron microscopy image stacks. Front. Neuroanat. 8, 126 (2014).

  13. Lucchi, A., Becker, C., Márquez Neila, P. & Fua, P. Exploiting enclosing membranes and contextual cues for mitochondria segmentation. in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2014 (eds. Golland, P., Hata, N., Barillot, C., Hornegger, J. & Howe, R.) 65–72 (Springer International Publishing, 2014).

  14. Pan, S. J. & Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 1345–1359 (2010).

  15. Kasthuri, N. et al. Saturated reconstruction of a volume of neocortex. Cell 162, 648–661 (2015).

  16. Deerinck, T. et al. Enhancing serial block-face scanning electron microscopy to enable high resolution 3-D nanohistology of cells and tissues. Microsc. Microanal. 16, 1138–1139 (2010).

  17. Phan, S. et al. 3D reconstruction of biological structures: automated procedures for alignment and reconstruction of multiple tilt series in electron tomography. Adv. Struct. Chem. Imaging 2, 8 (2017).

Acknowledgements

We thank the DIVE lab for making DeepEM3D publicly available. We thank T. Zeng, A. Lucchi and P. Fua for initial discussions and S. Viana da Silva for critical feedback on the manuscript. We thank S. Yeon, N. Allaway, and C. Nava-Gonzales for help with ground truth segmentations for the membrane training data and mitochondria segmentations and C. Li, J. Shergill, I. Tang, M.M., and R.A. for synaptic vesicle annotations. M.G.H. and R.A. proof edited and performed other ground-truth segmentations. Research published in this manuscript leveraged multiple NIH grants 5R01DA038896 and 5P01NS083514 as well as 5P41GM103412, 5P41GM103426, and 5R01GM082949 supporting the National Center for Microscopy and Imaging Research (NCMIR), the National Biomedical Computation Resource (NBCR), and the Cell Image Library (CIL), respectively. M.G.H. was supported by a postdoctoral fellowship from an interdisciplinary seed program at UCSD to build multiscale 3D maps of whole cells, called the Visible Molecular Cell Consortium. This work benefitted from the use of compute cycles on the Comet cluster, a resource of the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. This research benefitted from the use of credits from the National Institutes of Health (NIH) Cloud Credits Model Pilot, a component of the NIH Big Data to Knowledge (BD2K) program.

Author information

Authors and Affiliations

Authors

Contributions

M.G.H. and M.H.E. conceived and designed the project. M.G.H., C.C., L.T., and M.M. wrote code and analyzed data. M.G.H., D.B., S.P., E.A.B., and T.J.D. performed experiments and acquired images. M.G.H., R.A., and M.M. annotated training data. M.G.H., C.C., S.T.P., and M.H.E. wrote the manuscript with feedback from all authors.

Corresponding authors

Correspondence to Matthias G. Haberl or Mark H. Ellisman.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Integrated supplementary information

Supplementary Figure 1 Cell density in the mouse hippocampus from the dentate gyrus to the molecular layer using XRM scanning.

a, Densities (number of cells/mm³) were measured using a sliding volume of 20-µm diameter (× 385 µm × 132 µm) across 394 µm, after extracting 3D connected components in CDeep3M predictions of nuclei in the hippocampal XRM scan. Representative images are shown in b–e, from the center of the suprapyramidal blade of the dentate gyrus (sDG) (b) to the molecular layer (e). Scale bar, 50 µm.
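
The density measurement in a can be reproduced in outline with standard connected-component tools. The following Python sketch is a simplified, hypothetical version of such an analysis: the file name, probability threshold, and voxel size are illustrative assumptions, and a plain 20-µm slab is slid along one axis rather than the exact counting volume used here.

```python
# Hypothetical sketch: estimating cell density from a CDeep3M nucleus
# probability map via 3D connected components and a sliding counting volume.
# File name, threshold, and voxel size are illustrative assumptions.
import numpy as np
from scipy import ndimage
import tifffile

prob = tifffile.imread("nucleus_prediction.tif")   # CDeep3M probability map (z, y, x)
voxel_um = (0.5, 0.5, 0.5)                         # assumed voxel size in µm

# Threshold and extract 3D connected components (one label per nucleus)
mask = prob > 0.5
labels, n_nuclei = ndimage.label(mask)
centroids = np.array(ndimage.center_of_mass(mask, labels, range(1, n_nuclei + 1)))

# Slide a counting slab of fixed depth along z and report cells/mm^3
window_um, step_um = 20.0, 5.0
z_um = centroids[:, 0] * voxel_um[0]
depth_um = prob.shape[0] * voxel_um[0]
slab_vol_mm3 = (window_um * prob.shape[1] * voxel_um[1] *
                prob.shape[2] * voxel_um[2]) * 1e-9   # µm^3 -> mm^3

for z0 in np.arange(0, depth_um - window_um, step_um):
    n = np.sum((z_um >= z0) & (z_um < z0 + window_um))
    print(f"{z0:6.1f}-{z0 + window_um:6.1f} µm: {n / slab_vol_mm3:.0f} cells/mm^3")
```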

Supplementary Figure 2 Comparison of membrane segmentation on ssET data using CDeep3M and three widely used machine learning algorithms.

a, Challenging segmentation tasks, such as recognition of membranes in the electron tomography dataset, cannot be solved with sufficient accuracy using widely used machine learning tools such as CHM (Front. Neuroanat. 8, 2014), Ilastik (Proc. Int. Symposium Biomed. Imaging 230–233, 2011), or the Trainable Weka Segmentation (Bioinformatics 33, 2424–2426, 2017), which leave large stretches of membrane missing or introduce widespread false positive signal; such tasks require deep learning tools, such as CDeep3M, to achieve a high level of accuracy. b, Because of the high accuracy of the CDeep3M predictions, simple postprocessing such as watershed and region-growing algorithms can accomplish dense segmentation on a small scale. In comparison, we were unable to produce meaningful results using this approach on the prediction maps of the aforementioned machine learning tools. On a larger scale, more sophisticated region agglomeration techniques should be used (Nat. Methods 14, 101–102, 2017; A deep structured learning approach towards automating connectome reconstruction from 3D electron micrographs. Preprint at arXiv, https://arxiv.org/abs/1709.02974, 2017), which will allow one to take full advantage of the membrane segmentation.
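
A minimal sketch of the simple postprocessing described in b: a seeded watershed on a CDeep3M membrane probability map, with low-probability regions as seeds. The file names and thresholds are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: dense segmentation from a CDeep3M membrane probability
# map via seeded watershed. File names and thresholds are assumptions.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed
import tifffile

membrane = tifffile.imread("membrane_prediction.tif")  # probabilities in [0, 1]

# Seeds: connected regions of low membrane probability (cell interiors)
seeds, _ = ndimage.label(membrane < 0.2)

# Flood the probability map from the seeds; high-probability membrane
# voxels become the boundaries between neighboring objects.
segmentation = watershed(membrane, markers=seeds)
tifffile.imwrite("dense_segmentation.tif", segmentation.astype(np.uint16))
```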

Supplementary Figure 3 Determination of the accuracy of CDeep3M mitochondria segmentation, based on one FIB–SEM and one SBEM dataset.

As noted in Lucchi et al., human ‘ground truth’ segmentations are typically inaccurate around the borders of an object. a, An exclusion zone of 1–2 voxels can compensate for this effect and avoid erroneously assigning those pixels. b, We determined the accuracy of CDeep3M predictions on an FIB–SEM hippocampal dataset, using the same metrics as described in Ref. 8. CDeep3M outperformed the three-class CRF in all metrics (Jaccard: CDeep3M: 0.8361 versus 3C-CRF: 0.741; two-voxel exclusion zone: CDeep3M: 0.9266, 3C-CRF: 0.85; five-voxel exclusion zone: CDeep3M: 0.9437, 3C-CRF: ~0.92). Both the Jaccard index and the F1 value (the harmonic mean of precision and recall) increase once the erroneously missing object boundaries in the human segmentation are masked by the exclusion zone. The remaining error was largely caused by a single large object in the test data, which resembled a mitochondrion and was absent from the training data. c, d, Similarly, we used the SBEM data shown in c (scale bars: left, 500 nm; right, 200 nm) to compare computer performance against repeated human performance. d, The consensus of three ‘ground truth’ segmentations by expert human annotators was used to determine the performance of CDeep3M and to compare the individual performance of each human annotator to the consensus. CDeep3M performed similarly to the human experts (exclusion zone of 1 voxel; Jaccard index: CDeep3M: 0.954, humans (mean): 0.983; F1 value: CDeep3M: 0.976, humans (mean): 0.966).
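
The exclusion-zone evaluation used here can be written compactly. The sketch below computes the Jaccard index and F1 value on binary volumes while ignoring voxels within a few voxels of the ground-truth object borders; building the zone from a dilation and erosion of the ground truth is our assumption about one reasonable implementation.

```python
# Hypothetical sketch: Jaccard and F1 with a border exclusion zone around
# ground-truth objects. The zone construction and width are assumptions.
import numpy as np
from scipy import ndimage

def scores_with_exclusion(pred, gt, exclusion_voxels=2):
    """Jaccard index and F1 value on binary volumes, ignoring a border zone."""
    dilated = ndimage.binary_dilation(gt, iterations=exclusion_voxels)
    eroded = ndimage.binary_erosion(gt, iterations=exclusion_voxels)
    keep = ~(dilated ^ eroded)              # voxels outside the uncertain border zone

    p, g = pred[keep].astype(bool), gt[keep].astype(bool)
    tp = np.sum(p & g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    jaccard = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)        # harmonic mean of precision and recall
    return jaccard, f1
```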

Supplementary Figure 4 Training and validation loss and accuracy.

Training and validation loss (left panels) and validation accuracy (right panels) are shown for training performed on the mitochondria dataset evaluated in Supplementary Fig. 2b. All three models generalize well and improve on the unseen validation dataset until the end of the training.

Supplementary Figure 5 Membrane segmentation using transfer learning on a pre-trained model.

The 1fm model was trained for 16,000 iterations before segmenting an image from a different dataset (upper left panel). An additional 2,000 training iterations were then performed using training data from the new image dataset to adapt the trained model to the new image parameters (staining intensity and new features in the image). The segmentation quality improved substantially (lower middle panel), whereas it remained faulty with continued training on the first dataset without domain adaptation (lower left panel). Similar improvements are seen for all trained models (1fm; 3fm, domain adaptation from 14,454 to 15,757 iterations; and 5fm; Fig. 2).
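
Because CDeep3M builds on the Caffe-based DeepEM3D models (refs. 7, 8), the domain-adaptation step described above amounts to resuming training from a snapshot using data from the new dataset. A hypothetical pycaffe sketch, with illustrative file names and iteration counts:

```python
# Hypothetical sketch of transfer learning / domain adaptation in pycaffe:
# load weights trained on the first dataset and continue training briefly
# on the new one. File names and iteration counts are assumptions.
import caffe

caffe.set_mode_gpu()
solver = caffe.SGDSolver("solver_new_dataset.prototxt")  # solver pointing at the new training data
solver.net.copy_from("1fm_iter_16000.caffemodel")        # weights from the pre-trained 1fm model

solver.step(2000)                                        # short additional run adapts to the new images
solver.net.save("1fm_domain_adapted.caffemodel")
```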

Supplementary information

Supplementary Text and Figures

Supplementary Figures 1–5 and Supplementary Note

Reporting Summary

About this article

Cite this article

Haberl, M.G., Churas, C., Tindall, L. et al. CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation. Nat Methods 15, 677–680 (2018). https://doi.org/10.1038/s41592-018-0106-z

