There is, in the human mind, a desire to simplify complex problems — to replace the messy weighing up of many elements with the calculation of just one. Witness the modern aim to capture the value of a research journal with an impact factor, or to assess a scientific career with a single index. Multi-dimensional thinking is hard — can't we replace it with simple counting?

Or consider cost–benefit reasoning as it is pervasively applied to decisions of all kinds. The standard recipe in the economics literature — widely taken to be the correct framework for thinking about issues such as climate change — begins with a mathematical expression for the total expected value of the infinite future, calculated by summing over all of it. It then asks how this value would be altered if costly steps were taken now to avoid climate change. Would our future wealth be higher or lower?
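
In schematic form (the notation here is mine, a standard textbook rendering rather than anything quoted from that literature), the calculation assigns the future a welfare value

    W = \sum_{t=0}^{\infty} \frac{\mathbb{E}[U(c_t)]}{(1 + \rho)^{t}},

where c_t is consumption at time t, U is a utility function and \rho is a discount rate; a proposed policy is then judged by whether it raises or lowers W.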

Accept the premise that reducing hugely complex matters to a problem in one-dimensional calculus is legitimate and you're largely stuck. Trapped by the larger framework, you can only argue over details. But what if the entire set-up is faulty? Could there be problems for which the cost–benefit approach makes no sense?

I suspect there are many. A study by psychologists a few years ago found that people facing complex decisions, such as choosing a new house, did significantly worse when encouraged to decide by explicitly tallying costs and benefits. The process of calculation made them focus too heavily on one dimension to the exclusion of others. They did better when they trusted their gut instincts, which allowed them to integrate multiple factors.

There is, however, an even worse problem for the cost–benefit approach. When we face risks from rare but exceptionally costly events — runaway climate change, for example, or any possibly irreversible catastrophe — the cost–benefit framework is a recipe for dangerous confusion.

A team of scientists from New York University recently framed the issue quite clearly (N. N. Taleb et al., preprint at http://arxiv.org/abs/1410.5787; 2014). The problem is irreversibility. If the expected benefits of some action outweigh the costs, and the potential losses aren't too large, then going after the positive expected gain makes sense: the law of large numbers means you can expect to recoup losses from temporary bad luck with future winnings. Not so if there are rare catastrophic outcomes of game-ending severity: a single such outcome wipes you out, no stream of future winnings can recoup it, and, faced repeatedly, even a tiny chance of catastrophe compounds until ruin becomes all but certain. Given sufficient time, the game has a guaranteed, unhappy ending. Mathematicians refer to such problems as 'ruin' problems, and the expected gain from any one try, even if hugely positive, is a poor guide to judgement.
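
A minimal simulation sketch makes the point concrete (the numbers here are illustrative and my own, not drawn from the preprint): each round of the gamble below has a comfortably positive expected payoff, yet because it carries a small chance of losing everything, essentially every long run of play ends at zero.

    import random

    def play_until_ruin(p_ruin=0.01, gain=1.0, rounds=10_000, seed=None):
        # Each round pays `gain` with probability 1 - p_ruin; with probability
        # p_ruin everything accumulated is lost and the game ends for good.
        # The parameters are illustrative, not taken from Taleb et al.
        rng = random.Random(seed)
        wealth = 0.0
        for _ in range(rounds):
            if rng.random() < p_ruin:
                return 0.0, True    # ruin: no future winnings can recoup this
            wealth += gain
        return wealth, False

    trials = [play_until_ruin(seed=s) for s in range(1000)]
    ruined = sum(1 for _, was_ruined in trials if was_ruined)
    print(f"ruined in {ruined} of {len(trials)} trials")
    # The chance of surviving n rounds is (1 - p_ruin)**n, which vanishes as n grows:
    print("survival probability over 10,000 rounds:", (1 - 0.01) ** 10_000)

With these numbers the survival probability over 10,000 rounds is about 10^-44, which is the sense in which the positive expectation of any single try says almost nothing about how the repeated game ends.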

As Taleb et al. point out, recognition of this problem is the legitimate motivation for the so-called precautionary principle — the idea that, as the authors put it, “if an action or policy has a suspected risk of causing severe harm to the public domain (affecting general health or the environment globally), the action should not be taken in the absence of scientific near-certainty about its safety. The burden of proof falls on those proposing an action, not those opposing it. The precautionary principle is intended to deal with uncertainty and risk in cases where the absence of evidence and the incompleteness of scientific knowledge carries profound implications.”

The precautionary principle is controversial. Many people seem to think that it's somehow unscientific and almost crazy. Taleb and his co-authors make the case that it's most certainly not, if you think clearly about the mathematics of extreme risks. And they suggest that consideration of the nature of extreme risks ought to give the precautionary principle more influence over public decision making.

The most important step is to identify the category into which a given problem belongs. If there are clearly no possible catastrophic risks, then ordinary cost–benefit thinking makes sense, and this may usually be the case. But it takes careful thought when facing trickier issues such as nuclear energy, genetic engineering or near-Earth asteroids. Taleb and colleagues suggest that science has to be the best guide to identifying potentially global risks: do the consequences in question have the potential to spread, or are they instead naturally localized? This way of thinking, they show, has some surprising implications.

For example, nuclear energy has certainly raised global concerns about safety. But the nature of nuclear reactors is such that the risks they pose really do seem to be capped: severe, yes, but not global in reach. The sudden meltdown of a large reactor might do an awful lot of damage locally, yet catastrophic spreading on a global scale seems exceedingly unlikely. We know a huge amount about the mechanisms of nuclear physics, and they don't support such an outcome.

In contrast, Taleb et al. argue that our uncertainty is far more profound in other areas. In particular, they point to the wide use of genetically modified organisms in food production, which many people are convinced is scientifically 'safe'. It's hard to see why. Despite all precautions, genes from modified organisms inevitably spread into natural populations, and from there they have the potential to propagate uncontrollably through the genetic ecosystem. There is little reason to expect any damage to remain localized.

Moreover, as they point out, the most common counter-argument — that conventional breeding is itself a form of genetic engineering and hasn't proven dangerous — doesn't really make sense. Breeding between plant species, for example, involves mild genetic tinkering in organisms that go on living in conditions similar to those in which they evolved, and new organisms get tested locally before spreading more widely. Genetic engineering — the insertion of a gene from a bacterium into a plant, for example — reaches in and changes the genome in a much more radical way, altering, as the authors put it, “large sets of interdependent factors at the same time, with dramatic risks of unintended consequences.” And modern agribusiness strives to spread these organisms far and wide as quickly as possible.

I'm happy to see this analysis, and not only because it concurs with my own gut feelings about the possible risks of genetically modified foods (especially to ecosystems). The important thing is getting the logic straight and understanding what good science really implies, which means taking the consequences of extreme risks and 'ruin scenarios' seriously. Ignoring such possibilities is anything but a rational response.