Jim Davies's suggestion that we programme ethics into artificial intelligence meta-systems as a safeguard could well backfire — by compromising our ability to judge ethical implications (Nature 538, 291; 2016).
In an earlier version of the future, robot lawnmowers and kitchen appliances promised us more leisure time. We now face the spectre of mass human displacement from a consumption-based economy by equipment that can do things much more efficiently than people can.
The 'age of information' promised global connectivity, but this has wrought distraction to a point at which only lurid excesses can focus our undivided attention on the society to which we all belong.
And as computer-generated imagery colonizes our imaginations, many are barely swayed by real violence (the wanton destruction of Syrian cities comes to mind). There is even evidence that video gaming driven by computer-generated imagery can alter a player's perception of acceleration and gravity (see, for example, A. B. Ortiz de Gortari and M. D. Griffiths Int. J. Hum. Comput. Interact. 30, 95–105; 2014) — compromising their decision-making skills in a world where real physics is the law. Such trends don't bode well for 'ethical' computers.
Stocker, M. Be wary of 'ethical' artificial intelligence. Nature 540, 525 (2016). https://doi.org/10.1038/540525b