For four years, officials in the United States have been working on modified rules to regulate research with human participants. At the heart of the issue — and one likely reason why the discussions are taking so long — is the question of risk, and how to assess it. In fact, the goal of the new rules is to more effectively link how risky a project is perceived to be for the people taking part with how it should be regulated. Hence the name: risk-based regulation.

How to assess risk in research with humans has been a persistent problem. These decisions are typically taken by institutional review boards (IRBs), but studies have shown that different boards, and indeed different members of the same board, judge the risk of identical procedures in very different ways.

This variation in risk assessment contributes to multiple problems, as reflected in the title of the 2011 US advance notice of proposed rule-making: Enhancing Protections for Research Subjects and Reducing Burden, Delay, and Ambiguity for Investigators (see go.nature.com/x8rfth).

The current US regulatory framework, known as the Common Rule, calls for the evaluation of ‘probability and magnitude of harm or discomfort’. This language implies that IRBs can make objective assessments of risk and wrongly assumes that accurate predictions about experimental outcomes will be possible. Moreover, verdicts such as ‘minimal risk’ and ‘minor increase over minimal risk’ do not distinguish whether the increase is because of the perceived extent of added risk or the uncertainty of that added risk. Two IRB members, in other words, can reach the same decision for very different reasons. This can make it harder for groups to reach a consensus.


How should we think differently about risk? A good starting point would be to ensure that the risk to humans is always considered in context and not in an abstract sense. For instance, chairs of IRBs have been shown to grade the risk of lumbar puncture without sedation in 11-year-old children differently depending on whether they are told that the children are healthy, or that they are ill and have undergone the procedure previously (S. Shah et al. JAMA 291, 476–482; 2004).

In my view, a better way to assess and discuss risk is by using a method of inquiry called post-normal science (PNS). Introduced in the 1990s, PNS was put forth as a heuristic risk-assessment framework to assist decision-making at the interface between environmental science and public policy (S. O. Funtowicz and J. R. Ravetz Futures 25, 739–755; 1993). Designed to address specific scenarios, it always focuses on contextual rather than abstract risk.

Importantly, PNS makes no attempt to merge the magnitude and probability of risk, which it refers to as ‘decision stakes’ and ‘system uncertainties’ respectively, terms that reflect the emphasis on the value judgements, rather than the quantitative ones, required of regulatory groups. As an example of PNS applied to human research, the decision stakes (added risk) for a highly toxic intervention being tested in people for the first time would be higher for healthy people than for patients who have the condition under investigation. In this example, the system uncertainties (uncertainty about the occurrence of added risk) are similar for both groups.

However, the system uncertainties would be higher if, say, a procedure such as cardiac catheterization in healthy people were to be performed by a physician with little experience rather than by one with a highly successful track record. (The decision stakes, the added risk, would be similar.)

Under the PNS approach, research administrators and IRB members would assess both criteria to allocate proposed research to one of three separate risk domains, which demand different levels of scrutiny and regulation.

PNS designates the lowest of these domains as ‘applied science’. Research that is currently categorized as exempt or expedited typically has relatively low decision stakes and low system uncertainty, and would fall into this domain, in which the principal investigator is given the responsibility to carry out the work with minimal oversight.

As levels of decision stakes and system uncertainty rise to intermediate, proposals are placed in the second category, which PNS calls ‘professional consultancy’. This is the domain of ongoing IRB involvement, and one in which participants’ understanding of the research, so that they can give informed consent, becomes increasingly important. The use of separate criteria in PNS would lead to clearer analysis of what makes a project more or less risky, and would help IRBs to reach decisions.

The final category, the domain of ‘post-normal science’, is reserved for projects whose system uncertainties are so high that they extend to the adequacy of the current ethical principles used to assess risk. Values, in other words, become as important as facts. Managing risk in this zone requires extended consultation with a wider community, one that weighs social values as well as scientific facts and expertise. A recent example of science that falls into this top tier is the question of whether or not to genetically engineer human embryos.
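To make the two-criterion triage concrete, the sketch below (a hypothetical illustration in Python, not part of the PNS framework or of any regulation) shows how a review office might encode the allocation of proposals to the three domains. The coarse rating scale, the thresholds and the wording of the outputs are assumptions made for the example.

```python
# Hypothetical sketch: allocate a proposal to one of the three PNS domains
# from coarse qualitative ratings of the two separate criteria described above.
# The scale and thresholds are illustrative; the proposal does not prescribe
# a rule-based or numerical procedure.

LEVELS = {"low": 0, "intermediate": 1, "high": 2}

def pns_domain(decision_stakes: str, system_uncertainties: str) -> str:
    """Map ratings of the two PNS criteria to one of the three review domains."""
    stakes = LEVELS[decision_stakes]            # perceived magnitude of added risk
    uncertainty = LEVELS[system_uncertainties]  # uncertainty that the added risk occurs

    # In the original PNS diagram, a high rating on either axis places work in
    # the post-normal domain; the article stresses system uncertainties so high
    # that they call into question the adequacy of the ethical principles themselves.
    if stakes == 2 or uncertainty == 2:
        return "post-normal science: extended consultation with a wider community"
    # Both criteria low: roughly today's exempt or expedited research, left to
    # the principal investigator with minimal oversight.
    if stakes == 0 and uncertainty == 0:
        return "applied science: minimal oversight"
    # Intermediate ratings: ongoing IRB involvement and emphasis on informed consent.
    return "professional consultancy: ongoing IRB review"

# Example: first-in-human testing of a highly toxic intervention in healthy
# volunteers (high decision stakes) would be escalated to the top tier.
print(pns_domain("high", "intermediate"))
```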

Using PNS to assess risk in human research has received little attention so far. But it offers a more coherent approach and permits a more nuanced analysis than the current regulatory framework. It would promote the goal of risk-based regulation of human research.
