In our haste to measure everything in order to wring out evidence that non-specialists can understand and to secure funding, we forget that predicting the impact of research is akin to quantifying dreams (Nature 502, 271; 2013). There are no shortcuts to proper research assessment.

The impact of research on society is a composite of many strands of work, usually by different scientists and engineers, which — often serendipitously — culminate years later in changing some aspect of our lives. Attempting to disentangle those strands is a hopeless task.

There is probably little prognostic value in counting research-paper downloads, for example. Such metrics are merely surrogates for real research impact, and they can become goals in themselves. They encourage 'gaming', the manipulation of data to improve the numbers artificially.

When used over time within institutions, metrics can be useful guides — we all need external measures of some sort. It is when they are used as a form of currency in their own right that we get into trouble.