Generalization – the ability of AI systems to apply or extrapolate their knowledge to new data that may differ from the original training data – is a major challenge for the effective and responsible implementation of human-centric AI applications. Current debate in bioethics proposes selective prediction as a solution. Here we explore data-based reasons for generalization challenges and examine how selective prediction might be implemented technically, focusing on clinical AI applications in real-world healthcare settings.
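To make the idea of selective prediction concrete, the following is a minimal illustrative sketch: a classifier abstains and defers to a human expert when its confidence falls below a threshold. The function names, data structures, and the threshold value of 0.9 are hypothetical choices for illustration, not details taken from the article.

```python
# Minimal sketch of selective prediction: the model returns a label only
# when its confidence exceeds a threshold; otherwise it abstains, deferring
# the case to a clinician. All names and thresholds here are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SelectivePrediction:
    label: Optional[str]  # None signals abstention (defer to a human)
    confidence: float


def selective_predict(probs: dict, threshold: float = 0.9) -> SelectivePrediction:
    """Return the top-class prediction, or abstain if confidence is too low."""
    label, conf = max(probs.items(), key=lambda kv: kv[1])
    if conf < threshold:
        return SelectivePrediction(label=None, confidence=conf)
    return SelectivePrediction(label=label, confidence=conf)


# A confident prediction is returned; an uncertain one is deferred.
confident = selective_predict({"disease": 0.95, "healthy": 0.05})
uncertain = selective_predict({"disease": 0.55, "healthy": 0.45})
print(confident)  # label="disease"
print(uncertain)  # label=None (abstained)
```

In practice, calibrated probabilities (rather than raw softmax scores) and a threshold chosen to meet a target error rate on held-out data would be needed; the trade-off between coverage (how often the model answers) and selective accuracy is the central design question.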
AI holds the potential to transform healthcare, promising improvements in patient care. Yet realizing this potential is hampered by over-reliance on limited datasets and a lack of transparency in validation processes. To overcome these obstacles, we advocate the creation of a detailed registry for AI algorithms. This registry would document the development, training, and validation of AI models, ensuring scientific integrity and transparency, and would also serve as a platform for peer review and ethical oversight. By bridging the gap between scientific validation and regulatory approval, such as approval by the FDA, we aim to enhance the integrity and trustworthiness of AI applications in healthcare.