

Making artificial intelligence more transparent

A more transparent artificial-intelligence system for self-driving, developed by Aisin, identifies cues that correlate with what a human driver would pick up on. © darekm101/RooM/Getty Images

As artificial intelligence (AI) increasingly impacts everyday life, there is a push to establish a regulatory framework. Leading tech CEOs have called for regulations to be put in place before the technology matures, while the European Union recently proposed the first legal framework for AI, with the aim of setting a gold standard for the world.

Key to such efforts is ensuring that the technology is trustworthy. “To make AI more ethically sound and practical, it’s crucial to enhance explainability in deep neural networks,” says Yoshihide Sawada, principal investigator at Aisin.

Deep neural networks are known for their capacity to process complex and unstructured data. But they do this by forming generalizations within hidden layers, leaving their inner workings a black box. For Aisin, this opacity has led clients to question AI outputs for factory automation and self-driving systems. It can also give rise to unintended social consequences; in 2019, for instance, a study found that algorithms widely used in medical insurance reproduced systemic racial biases from the real world.

Trust through transparency

Based in Aisin’s Tokyo Research Center, Sawada and his team are undertaking research on ‘white-box’ neural networks, which make decision-making criteria apparent.

The team is building on the concept bottleneck model, in which an AI model first predicts crucial decision-making characteristics before generating the final output. Such attributes could include wing colour or beak length when classifying species of birds, for example. “The accuracy of concept bottleneck models depends greatly on the number of such criteria,” explains Sawada. “In addition to improving accuracy with input from experts, we’re working to have the model predict tacit knowledge, or the unrecognized concepts that improve the accuracy of the results.”
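The core idea can be illustrated with a small amount of code: the network is split into a stage that predicts named, human-interpretable concepts and a stage that maps only those concepts to the final output, so the intermediate predictions can be inspected. The sketch below is a minimal illustration in PyTorch; the framework, layer sizes, concept names and training loop are assumptions for illustration and are not drawn from Aisin's implementation.

import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Predict interpretable concepts first, then the final label from
    those concepts alone, so the decision criteria can be inspected."""

    def __init__(self, num_features: int, num_concepts: int, num_classes: int):
        super().__init__()
        # x -> c: map raw input features to concept scores
        # (e.g. "traffic light present", "pedestrian present", "car ahead")
        self.concept_predictor = nn.Sequential(
            nn.Linear(num_features, 128),
            nn.ReLU(),
            nn.Linear(128, num_concepts),
        )
        # c -> y: the final decision sees only the concepts,
        # which is what makes the model's reasoning auditable
        self.label_predictor = nn.Linear(num_concepts, num_classes)

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_predictor(x)
        concepts = torch.sigmoid(concept_logits)  # each concept scored in [0, 1]
        label_logits = self.label_predictor(concepts)
        return concept_logits, label_logits

# Joint training step: supervise both the concepts and the final label.
model = ConceptBottleneckModel(num_features=64, num_concepts=8, num_classes=3)
x = torch.randn(16, 64)                            # dummy batch of 16 inputs
true_concepts = torch.randint(0, 2, (16, 8)).float()  # concept annotations
true_labels = torch.randint(0, 3, (16,))               # final labels

concept_logits, label_logits = model(x)
loss = (nn.functional.binary_cross_entropy_with_logits(concept_logits, true_concepts)
        + nn.functional.cross_entropy(label_logits, true_labels))
loss.backward()

Because accuracy hinges on how many concepts are supervised, extending such a model to predict additional, initially unlabelled 'tacit' concepts, as the team describes, would mean adding bottleneck units that are learned without direct annotation.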

The preliminary results have been promising. “In the context of driving, our model is able to identify and focus on cues that align with human judgement, such as traffic lights, pedestrians or cars,” says Sawada.

Emphasis on basic research

Sawada, who previously developed AI for healthcare, had always felt the need for basic research on white-box neural networks. “Physicians asked us for reasons when there were mismatches between their diagnoses and AI results. We didn’t have a good explanation, but doctors need to understand because they have to explain to patients,” he recalls. “Without progress in basic research, we would just come up against dead ends.”

Basic research in promising new areas, such as spiking neural networks, which reduce computing power requirements, and knowledge transfer, which mimics how humans draw on previous experience, is also part of Aisin's project repertoire. “The Tokyo Research Center welcomes researchers with the initiative to make academic contributions to the field,” says Sawada.
