Jim Davies's suggestion that we programme ethics into artificial intelligence meta-systems as a safeguard could well backfire by compromising our own ability to judge ethical implications (Nature 538, 291; 2016).

In an earlier version of the future, robot lawnmowers and kitchen appliances promised us more leisure time. We now face the spectre of mass human displacement from a consumption-based economy by machines that can do the same work far more efficiently than people can.

The 'age of information' promised global connectivity, but it has instead wrought distraction to the point at which only lurid excesses can focus our undivided attention on the society to which we all belong.

And as computer-generated imagery colonizes our imaginations, many of us are barely moved by real violence (the wanton destruction of Syrian cities comes to mind). There is even evidence that video games driven by computer-generated imagery can alter players' perception of acceleration and gravity (see, for example, A. B. Ortiz de Gortari and M. D. Griffiths Int. J. Hum.-Comput. Interact. 30, 95–105; 2014), compromising their decision-making skills in a world where real physics is the law. Such trends don't bode well for 'ethical' computers.