
Comment

Should artificial intelligence be interpretable to humans?

As artificial intelligence (AI) makes ever more impressive contributions to science, scientists increasingly want to understand how it reaches its conclusions. Matthew D. Schwartz discusses what it means to understand AI and whether such a goal is achievable — or even needed.

This is a preview of subscription content; the full article is available via institutional access.


Fig. 1: The evolution of biological and artificial intelligence takes place on dramatically different timescales.


Author information


Corresponding author

Correspondence to Matthew D. Schwartz.

Ethics declarations

Competing interests

The author declares no competing interests.


About this article


Cite this article

Schwartz, M. D. Should artificial intelligence be interpretable to humans? Nat. Rev. Phys. 4, 741–742 (2022). https://doi.org/10.1038/s42254-022-00538-z

