A clinically applicable approach to continuous prediction of future acute kidney injury

Abstract

The early prediction of deterioration could have an important role in supporting healthcare professionals, as an estimated 11% of deaths in hospital follow a failure to promptly recognize and treat deteriorating patients1. Achieving this goal requires predictions of patient risk that are continuously updated and accurate, and delivered at an individual level with sufficient context and enough time to act. Here we develop a deep learning approach for the continuous risk prediction of future deterioration in patients, building on recent work that models adverse events from electronic health records2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17 and using acute kidney injury—a common and potentially life-threatening condition18—as an exemplar. Our model was developed on a large, longitudinal dataset of electronic health records that cover diverse clinical environments, comprising 703,782 adult patients across 172 inpatient and 1,062 outpatient sites. Our model predicts 55.8% of all inpatient episodes of acute kidney injury, and 90.2% of all acute kidney injuries that required subsequent administration of dialysis, with a lead time of up to 48 h and a ratio of 2 false alerts for every true alert. In addition to predicting future acute kidney injury, our model provides confidence assessments and a list of the clinical features that are most salient to each prediction, alongside predicted future trajectories for clinically relevant blood tests9. Although the recognition and prompt treatment of acute kidney injury are known to be challenging, our approach may offer opportunities for identifying patients at risk within a time window that enables early treatment.
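As a quick arithmetic check of how the reported alert ratio relates to precision (our own illustration, not taken from the paper's code): two false alerts for every true alert corresponds to one true positive out of every three alerts, consistent with the 33% precision operating point discussed in the Extended Data.

```python
# Back-of-the-envelope check: 2 false alerts per true alert implies ~33% precision.
false_per_true = 2.0
precision = 1.0 / (1.0 + false_per_true)      # TP / (TP + FP)
print(f"Implied precision: {precision:.1%}")  # 33.3%
```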

Fig. 1: Illustrative example of risk prediction, uncertainty and predicted future laboratory values.
Fig. 2: Model performance illustrated by receiver operating characteristic and precision–recall curves.
Fig. 3: The time between model prediction and the actual AKI event.
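Fig. 2 reports receiver operating characteristic and precision–recall curves. The sketch below shows how such curves and their areas under the curve can be computed with scikit-learn; the labels and scores are synthetic stand-ins, not data from the study.

```python
import numpy as np
from sklearn.metrics import (average_precision_score, precision_recall_curve,
                             roc_auc_score, roc_curve)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=10_000)                      # hypothetical binary AKI labels
y_score = np.clip(0.3 * y_true + rng.random(10_000), 0, 1)    # hypothetical model risk scores

fpr, tpr, _ = roc_curve(y_true, y_score)                        # points of the ROC curve
precision, recall, _ = precision_recall_curve(y_true, y_score)  # points of the PR curve
print("ROC AUC:", roc_auc_score(y_true, y_score))
print("PR AUC (average precision):", average_precision_score(y_true, y_score))
```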

Data availability

The clinical data used for the training, validation and test sets were collected at the US Department of Veterans Affairs and transferred to a secure data centre with strict access controls in de-identified format. Data were used with both local and national permissions. The dataset is not publicly available and restrictions apply to its use. The de-identified dataset (or a test subset) may be available from the US Department of Veterans Affairs, subject to local and national ethical approvals.

Code availability

We make use of several open-source libraries to conduct our experiments: the machine learning framework TensorFlow (https://github.com/tensorflow/tensorflow) along with the TensorFlow library Sonnet (https://github.com/deepmind/sonnet), which provides implementations of individual model components58. Our experimental framework also makes use of proprietary libraries, and we are unable to publicly release this code. We describe the experiments and implementation details in the Methods and Supplementary Information to allow for independent replication.
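As an illustration of how individual model components can be assembled with these open-source libraries, the sketch below wires a Sonnet LSTM core to a linear readout in TensorFlow 2. It is a minimal example under assumed shapes and sizes, not the released model or the authors' experimental framework.

```python
import sonnet as snt
import tensorflow as tf

core = snt.LSTM(hidden_size=128)       # recurrent core (an assumption; any snt.RNNCore works)
readout = snt.Linear(output_size=1)    # linear readout producing a single risk logit

inputs = tf.random.normal([8, 4, 32])            # [time, batch, features], synthetic data
state = core.initial_state(batch_size=4)
outputs, final_state = snt.dynamic_unroll(core, inputs, state)
risk_logit = readout(final_state.hidden)         # [batch, 1]
```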

References

  1. Thomson, R., Luettel, D., Healey, F. & Scobie, S. Safer Care for the Acutely Ill Patient: Learning from Serious Incidents (National Patient Safety Agency, 2007).

  2. Henry, K. E., Hager, D. N., Pronovost, P. J. & Saria, S. A targeted real-time early warning score (TREWscore) for septic shock. Sci. Transl. Med. 7, 299ra122 (2015).

  3. Rajkomar, A. et al. Scalable and accurate deep learning with electronic health records. npj Digit. Med. 1, 18 (2018).

  4. Koyner, J. L., Adhikari, R., Edelson, D. P. & Churpek, M. M. Development of a multicenter ward-based AKI prediction model. Clin. J. Am. Soc. Nephrol. 11, 1935–1943 (2016).

  5. Cheng, P., Waitman, L. R., Hu, Y. & Liu, M. Predicting inpatient acute kidney injury over different time horizons: how early and accurate? In AMIA Annual Symposium Proceedings 565 (American Medical Informatics Association, 2017).

  6. Koyner, J. L., Carey, K. A., Edelson, D. P. & Churpek, M. M. The development of a machine learning inpatient acute kidney injury prediction model. Crit. Care Med. 46, 1070–1077 (2018).

  7. Komorowski, M., Celi, L. A., Badawi, O., Gordon, A. C. & Faisal, A. A. The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care. Nat. Med. 24, 1716–1720 (2018).

  8. Avati, A. et al. Improving palliative care with deep learning. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) 311–316 (2017).

  9. Lim, B. & van der Schaar, M. Disease-Atlas: navigating disease trajectories with deep learning. Proc. Mach. Learn. Res. 85, 137–160 (2018).

  10. Futoma, J., Hariharan, S. & Heller, K. A. Learning to detect sepsis with a multitask Gaussian process RNN classifier. In Proc. International Conference on Machine Learning (eds Precup, D. & Teh, Y. W.) 1174–1182 (2017).

  11. Miotto, R., Li, L., Kidd, B. A. & Dudley, J. T. Deep Patient: an unsupervised representation to predict the future of patients from the electronic health records. Sci. Rep. 6, 26094 (2016).

  12. Lipton, Z. C., Kale, D. C., Elkan, C. & Wetzel, R. Learning to diagnose with LSTM recurrent neural networks. Preprint at https://arxiv.org/abs/1511.03677 (2016).

  13. Cheng, Y., Wang, F., Zhang, P. & Hu, J. Risk prediction with electronic health records: a deep learning approach. In Proc. SIAM International Conference on Data Mining (eds Venkatasubramanian, S. C. & Meira, W.) 432–440 (2016).

  14. Soleimani, H., Subbaswamy, A. & Saria, S. Treatment-response models for counterfactual reasoning with continuous-time, continuous-valued interventions. In Proc. 33rd Conference on Uncertainty in Artificial Intelligence (AUAI Press Corvallis, 2017).

  15. Alaa, A. M., Yoon, J., Hu, S. & van der Schaar, M. Personalized risk scoring for critical care prognosis using mixtures of Gaussian process experts. IEEE Trans. Biomed. Eng. 65, 207–218 (2018).

  16. Perotte, A., Ranganath, R., Hirsch, J. S., Blei, D. & Elhadad, N. Risk prediction for chronic kidney disease progression using heterogeneous electronic health record data and time series analysis. J. Am. Med. Inform. Assoc. 22, 872–880 (2015).

  17. Bihorac, A. et al. MySurgeryRisk: development and validation of a machine-learning risk algorithm for major complications and death after surgery. Ann. Surg. 269, 652–662 (2019).

  18. Khwaja, A. KDIGO clinical practice guidelines for acute kidney injury. Nephron Clin. Pract. 120, c179–c184 (2012).

  19. Stenhouse, C., Coates, S., Tivey, M., Allsop, P. & Parker, T. Prospective evaluation of a modified early warning score to aid earlier detection of patients developing critical illness on a general surgical ward. Br. J. Anaesth. 84, 663P (2000).

  20. Alge, J. L. & Arthur, J. M. Biomarkers of AKI: a review of mechanistic relevance and potential therapeutic implications. Clin. J. Am. Soc. Nephrol. 10, 147–155 (2015).

  21. Wang, H. E., Muntner, P., Chertow, G. M. & Warnock, D. G. Acute kidney injury and mortality in hospitalized patients. Am. J. Nephrol. 35, 349–355 (2012).

  22. MacLeod, A. NCEPOD report on acute kidney injury—must do better. Lancet 374, 1405–1406 (2009).

  23. Lachance, P. et al. Association between e-alert implementation for detection of acute kidney injury and outcomes: a systematic review. Nephrol. Dial. Transplant. 32, 265–272 (2017).

  24. Johnson, A. E. W. et al. Machine learning and decision support in critical care. Proc. IEEE Inst. Electr. Electron Eng. 104, 444–466 (2016).

  25. Mohamadlou, H. et al. Prediction of acute kidney injury with a machine learning algorithm using electronic health record data. Can. J. Kidney Health Dis. 5, 1–9 (2018).

  26. Pan, Z. et al. A self-correcting deep learning approach to predict acute conditions in critical care. Preprint at https://arxiv.org/abs/1901.04364 (2019).

  27. Park, S. et al. Impact of electronic acute kidney injury (AKI) alerts with automated nephrologist consultation on detection and severity of AKI: a quality improvement study. Am. J. Kidney Dis. 71, 9–19 (2018).

  28. Chen, I., Johansson, F. D. & Sontag, D. Why is my classifier discriminatory? Preprint at https://arxiv.org/abs/1805.12002 (2018).

  29. Schulam, P. & Saria, S. Reliable decision support using counterfactual models. In Advances in Neural Information Processing Systems 30 (eds Guyon, I. et al.) 1697–1708 (2017).

  30. Telenti, A., Steinhubl, S. R. & Topol, E. J. Rethinking the medical record. Lancet 391, 1013 (2018).

  31. Department of Veterans Affairs. Veterans Health Administration: Providing Health Care for Veterans. https://www.va.gov/health/ (accessed 9 November 2018).

  32. Razavian, N. & Sontag, D. Temporal convolutional neural networks for diagnosis from lab tests. In 4th International Conference on Learning Representations (2016).

  33. Zadrozny, B. & Elkan, C. Transforming classifier scores into accurate multiclass probability estimates. In Proc. 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (eds Zaïane, O. R. et al.) 694–699 (ACM, 2002).

  34. Zilly, J. G., Srivastava, R. K., Koutník, J. & Schmidhuber, J. Recurrent highway networks. In Proc. International Conference on Machine Learning (vol. 70) (eds Precup, D. & Teh, Y. W.) 4189–4198 (2017).

  35. Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735–1780 (1997).

  36. Collins, J., Sohl-Dickstein, J. & Sussillo, D. Capacity and trainability in recurrent neural networks. In International Conference on Learning Representations (eds Bengio, Y. & LeCun, Y.) https://openreview.net/forum?id=BydARw9ex (2017).

  37. Bradbury, J., Merity, S., Xiong, C. & Socher, R. Quasi-recurrent neural networks. In International Conference on Learning Representations (eds Bengio, Y. & LeCun, Y.) https://openreview.net/forum?id=H1zJ-v5xl (2017).

  38. Lei, T. & Zhang, Y. Training RNNs as fast as CNNs. Preprint at https://arxiv.org/abs/1709.02755v1 (2017).

  39. Chung, J., Gulcehre, C., Cho, K. & Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modelling. Preprint at https://arxiv.org/abs/1412.3555 (2014).

  40. Graves, A., Wayne, G. & Danihelka, I. Neural Turing machines. Preprint at https://arxiv.org/abs/1410.5401 (2014).

  41. Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D. & Lillicrap, T. Meta-learning with memory-augmented neural networks. In Proc. International Conference on Machine Learning (eds Balcan, M. F. & Weinberger, K. Q.) 1842–1850 (2016).

  42. Graves, A. et al. Hybrid computing using a neural network with dynamic external memory. Nature 538, 471–476 (2016).

  43. Santoro, A. et al. Relational recurrent neural networks. In Advances in Neural Information Processing Systems 31 (eds Bengio, S. et al.) 7310–7321 (2018).

  44. Caruana, R., Baluja, S. & Mitchell, T. in Advances in Neural Information Processing Systems (eds Mozer, M. et al.) 959–965 (1996).

  45. Wiens, J., Guttag, J. & Horvitz, E. Patient risk stratification with time-varying parameters: a multitask learning approach. J. Mach. Learn. Res. 17, 1–23 (2016).

  46. Ding, D. Y. et al. The effectiveness of multitask learning for phenotyping with electronic health records data. Preprint at https://arxiv.org/abs/1808.03331v1 (2018).

  47. Glorot, X. & Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics (vol. 9) (eds Teh, Y. W. & Titterington, M.) 249–256 (2010).

  48. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In International Conference on Learning Representations (eds Bengio, Y. & LeCun, Y.) https://dblp.org/rec/bib/journals/corr/KingmaB14 (2015).

  49. Guo, C., Pleiss, G., Sun, Y. & Weinberger, K. Q. On calibration of modern neural networks. In Proc. International Conference on Machine Learning (eds Precup, D. & Teh, Y. W.) 1321–1330 (2017).

  50. Platt, J. C. in Advances in Large-Margin Classifiers (eds Smola, A. et al.) 61–74 (MIT Press, 1999).

  51. Brier, G. W. Verification of forecasts expressed in terms of probability. Mon. Weath. Rev. 78, 1–3 (1950).

  52. Niculescu-Mizil, A. & Caruana, R. Predicting good probabilities with supervised learning. In Proc. International Conference on Machine Learning (eds Raedt, L. D. & Wrobel, S.) 625–632 (ACM, 2005).

  53. Saito, T. & Rehmsmeier, M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE 10, e0118432 (2015).

  54. Efron, B. & Tibshirani, R. J. An Introduction to the Bootstrap (CRC, 1994).

  55. Mann, H. B. & Whitney, D. R. On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 18, 50–60 (1947).

  56. Lakshminarayanan, B., Pritzel, A. & Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems (eds Guyon, I. et al.) 6402–6413 (2017).

  57. De Fauw, J. et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med. 24, 1342–1350 (2018).

  58. Abadi, M. et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems. Preprint at https://arxiv.org/abs/1603.04467 (2015).

Acknowledgements

We thank the veterans and their families under the care of the US Department of Veterans Affairs. We thank A. Graves, O. Vinyals, K. Kavukcuoglu, S. Chiappa, T. Lillicrap, R. Raine, P. Keane, M. Seneviratne, A. Schlosberg, O. Ronneberger, J. De Fauw, K. Ruark, M. Jones, J. Quinn, D. Chou, C. Meaden, G. Screen, W. West, R. West, P. Sundberg and the Google AI team, J. Besley, M. Bawn, K. Ayoub and R. Ahmed. Finally, we thank the many physicians, administrators and researchers of the US Department of Veterans Affairs who worked on the data collection, and the rest of the DeepMind team for their support, ideas and encouragement. G.R. and H.M. were supported by University College London and the National Institute for Health Research (NIHR) University College London Hospitals Biomedical Research Centre. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

Author information

Contributions

M.S., T.B., J.C., J.R.L., N.T., C.N., D.H. and R.R. initiated the project. N.T., X.G., H.A., A.S., J.R.L., C.N., C.R.B. and K.P. created the dataset. N.T., X.G., A.S., H.A., J.W.R., M.Z., A.M., I.P. and S.M. contributed to software engineering. N.T., X.G., A.M., J.W.R., M.Z., A.S., C.B., S.M., J.R.L. and C.N. analysed the results. N.T., X.G., A.M., J.W.R., M.Z., A.S., H.A., J.C., C.O.H., C.R.B., T.B., C.N., S.M. and J.R.L. contributed to the overall experimental design. N.T., X.G., A.M., J.W.R., M.Z., S.R. and S.M. designed the model architectures. J.R.L., G.R., H.M., C.L., A.C., A.K., C.O.H., D.K. and C.N. contributed clinical expertise. A.M., N.T., M.Z. and J.W.R. contributed to experiments into model confidence. M.Z., N.T., A.S., A.M. and J.W.R. contributed to model calibration. N.T., M.Z., A.M., A.S., X.G. and J.R.L. contributed to false-positive analysis. N.T., X.G., A.M., J.W.R., M.Z., A.S., S.R. and S.M. contributed to comparison of different architectures. N.T., A.M., X.G., A.S., M.Z., J.R.L. and S.M. contributed to experiments on auxiliary prediction targets. A.M., N.T., X.G., M.Z., A.S., J.R.L. and S.M. contributed to experiments into model generalizability. M.Z., A.M., N.T., T.B. and J.R.L. contributed to subgroup analyses. J.W.R., N.T., A.S., M.Z. and S.M. contributed to ablation experiments. N.T., A.S. and J.R.L. contributed to experiments into how to handle renal replacement therapy in the data. J.W.R., X.G., N.T., A.M., A.C., C.N., K.P., C.R.B., M.Z., A.S. and J.R.L. contributed to analysing salient clinical features. A.M., M.Z. and N.T. contributed to experiments into the influence of data recency on model performance. C.M., S.M., H.A., C.N., J.R.L. and T.B. managed the project. N.T., J.R.L., J.W.R., M.Z., A.M., H.M., C.R.B., S.M. and G.R. wrote the paper.

Corresponding authors

Correspondence to Nenad Tomašev or Joseph R. Ledsam.

Ethics declarations

Competing interests

G.R., H.M. and C.L. are paid contractors of DeepMind. The authors have no other competing interests to disclose.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Peer review information: Nature thanks Lui G. Forni, Suchi Saria and Eric Topol for their contribution to the peer review of this work.

Extended data figures and tables

Extended Data Fig. 1 Sequential representation of electronic health record data.

All electronic health record data available for each patient were structured into a sequential history of both inpatient and outpatient events in six-hourly blocks, shown here as circles. In each 24-h period, events without a recorded time were included in a fifth block. In addition to the data present at the current time step, the models optionally receive an embedding of the previous 48 h and of the longer history of 6 months or 5 years.
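A minimal sketch of this bucketing scheme is shown below; the helper name and the example events are hypothetical, and the real preprocessing pipeline is described in the Methods.

```python
from collections import defaultdict
from datetime import datetime

def block_index(timestamp=None):
    """Return 0-3 for timed events within a day, or 4 for events with no recorded time."""
    if timestamp is None:
        return 4                    # fifth block: date known, time of day unknown
    return timestamp.hour // 6      # 00-05h -> 0, 06-11h -> 1, 12-17h -> 2, 18-23h -> 3

# The sequential history maps (date, block) to the events observed in that 6-h block.
history = defaultdict(list)
history[(datetime(2018, 3, 1).date(), block_index(datetime(2018, 3, 1, 7, 30)))].append(
    ("serum_creatinine", 1.4))                     # timed laboratory result
history[(datetime(2018, 3, 1).date(), block_index(None))].append(
    ("diagnosis_code", "N18.2"))                   # entry with no recorded time
```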

Extended Data Fig. 2 Architecture of the proposed model.

The best performance was achieved by a multi-task deep recurrent highway network architecture on top of an L1-regularized deep residual embedding component that learns the best data representation end-to-end without pre-training.
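The sketch below outlines this kind of architecture with standard Keras layers: an L1-regularized residual embedding feeding a recurrent core with multi-task output heads. A stacked LSTM stands in for the deep recurrent highway cell, and every size and loss choice is an assumption rather than the published configuration.

```python
import tensorflow as tf

def build_model(n_features=256, embed_size=400, rnn_units=200, n_aux_targets=6):
    inputs = tf.keras.Input(shape=(None, n_features))   # [batch, time steps, features]

    # L1-regularised deep residual embedding, learned end-to-end (no pre-training).
    l1 = tf.keras.regularizers.l1(1e-5)
    h = tf.keras.layers.Dense(embed_size, activation="relu", kernel_regularizer=l1)(inputs)
    r = tf.keras.layers.Dense(embed_size, activation="relu", kernel_regularizer=l1)(h)
    embedded = tf.keras.layers.Add()([h, r])             # residual connection

    # Recurrent core over the 6-h steps (LSTM used here in place of the highway cell).
    rnn = tf.keras.layers.LSTM(rnn_units, return_sequences=True)(embedded)

    # Multi-task heads: main AKI risk plus auxiliary future laboratory-value predictions.
    aki_risk = tf.keras.layers.Dense(1, activation="sigmoid", name="aki_within_48h")(rnn)
    aux_labs = tf.keras.layers.Dense(n_aux_targets, name="future_lab_values")(rnn)
    return tf.keras.Model(inputs, [aki_risk, aux_labs])

model = build_model()
model.compile(optimizer="adam",
              loss={"aki_within_48h": "binary_crossentropy", "future_lab_values": "mse"})
```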

Extended Data Fig. 3 Calibration.

a, b, Calibration curves before (a) and after (b) recalibration of the model predictions using isotonic regression. Model predictions were grouped into 20 buckets, and the mean predicted risk in each bucket is plotted against the percentage of positive labels in that bucket. The diagonal line indicates ideal calibration.
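A minimal sketch of this recalibration and of the 20-bucket reliability curve, using scikit-learn on synthetic scores (not the study data or code):

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=5_000)                         # hypothetical validation labels
raw_scores = np.clip(0.4 * y_val + rng.random(5_000), 0, 1)    # hypothetical uncalibrated risks

# Fit isotonic regression on held-out data, then use it to recalibrate model outputs.
iso = IsotonicRegression(out_of_bounds="clip")
calibrated = iso.fit_transform(raw_scores, y_val)

# 20-bucket reliability curve: observed positive rate versus mean predicted risk per bucket.
frac_positives, mean_predicted = calibration_curve(y_val, calibrated, n_bins=20)
```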

Extended Data Fig. 4 Analysis of false-positive predictions.

a, For prediction of any AKI within 48 h at 33% precision, nearly half of all false-positive predictions are either trailing, made after the AKI has already occurred (orange bars), or early, made more than 48 h before (blue bars). The histogram shows the distribution of these trailing and early false positives. Incorrect predictions are mapped to the closest preceding or following episode of AKI (whichever is closer), provided that episode occurs within an admission. For ±1 day, 15.2% of false positives correspond to observed AKI events within 1 day after the prediction (the model reacted too early) and 2.9% correspond to observed AKI events within 1 day before the prediction (the model reacted too late). b, Subgroup analysis of all false-positive alerts. In addition to the 49% of false-positive alerts that were made in admissions during which there was at least one episode of AKI, many of the remaining false-positive alerts were made in patients who had evidence of clinical risk factors in their available electronic health record data. These risk factors are shown here for the proposed model that predicts any stage of AKI occurring within the next 48 h.
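The sketch below illustrates the kind of mapping described for panel a: each false-positive alert is matched to the nearest AKI episode in the same admission and labelled as early or trailing relative to the 48-h window. The function and data structures are hypothetical illustrations, not the analysis code.

```python
from datetime import datetime, timedelta

def classify_false_positive(alert_time, aki_episode_times, window=timedelta(hours=48)):
    """Label a false-positive alert relative to the nearest AKI episode in the admission."""
    if not aki_episode_times:
        return "no_aki_in_admission", None
    nearest = min(aki_episode_times, key=lambda t: abs(t - alert_time))
    offset = nearest - alert_time                 # positive: AKI occurs after the alert
    if offset < timedelta(0):
        return "trailing", offset                 # alert made after the AKI had occurred
    if offset > window:
        return "early", offset                    # AKI occurs more than 48 h after the alert
    return "within_window", offset                # would count as a correct prediction

print(classify_false_positive(datetime(2018, 3, 1, 6), [datetime(2018, 3, 4, 12)]))
# -> ('early', datetime.timedelta(days=3, seconds=21600))
```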

Extended Data Table 1 Model performance for predicting AKI within the full range of possible prediction windows from 6 to 72 h
Extended Data Table 2 Daily frequency of true- and false-positive alerts when predicting different stages of AKI
Extended Data Table 3 Model performance on patients who required subsequent dialysis
Extended Data Table 4 Operating points for predicting AKI up to 48 h ahead of time
Extended Data Table 5 Future and cross-site generalizability experiments
Extended Data Table 6 Summary statistics for the data

Supplementary information

Supplementary Information

Supplementary Sections A-K, including Supplementary Figures 1-12 and Supplementary Tables 1-12.
Supplementary Section A: Supplementary figures showing visual examples of five systematically selected success cases and five systematically selected failure cases of the predictive model.
Supplementary Section B: Supplementary analysis of the auxiliary numerical prediction tasks.
Supplementary Section C: Additional analysis from an experiment into the significance of individual features in our trained models, based on occlusion analysis (see the sketch after this list).
Supplementary Section D: Supplementary results and methods from a broad comparison of available models on the AKI prediction task.
Supplementary Section E: Comparison of our model's performance to baseline models trained on features chosen by clinicians as relevant for modelling kidney function.
Supplementary Section F: The results of literature reviews into risk prediction of AKI and machine learning on electronic health records.
Supplementary Section G: Supplementary analyses and results for individual subgroups of the patient population studied.
Supplementary Section H: Supplementary analysis of the influence of data recency on model performance.
Supplementary Section I: Analysis of the contribution of individual aspects of our model's design to its overall performance through an ablation study that removes specific components of the model, trains it fully and then compares the simplified model's PR AUC on the validation set.
Supplementary Section J: Supplementary methods and results from the hyperparameter sweeps described in the Methods section.
Supplementary Section K: Additional analysis from an experiment into the relationship between model confidence and prediction accuracy.
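Supplementary Section C describes an occlusion analysis of feature significance. A minimal sketch of the general technique is given below: each feature is masked in turn and the resulting change in predicted risk is recorded. The function, the baseline value and `predict_risk` are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def occlusion_saliency(predict_risk, x, baseline=0.0):
    """x: array of shape [time, features]; predict_risk maps such an array to a scalar risk."""
    reference = predict_risk(x)                   # risk with all features present
    saliency = np.zeros(x.shape[-1])
    for j in range(x.shape[-1]):
        occluded = x.copy()
        occluded[:, j] = baseline                 # occlude (mask out) feature j everywhere
        saliency[j] = reference - predict_risk(occluded)
    return saliency                               # larger drop in risk => more salient feature
```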

Reporting Summary

Supplementary Data

This file contains Source Data for Supplementary Figure 1.

About this article

Cite this article

Tomašev, N., Glorot, X., Rae, J.W. et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature 572, 116–119 (2019). https://doi.org/10.1038/s41586-019-1390-1
