Clinical trials provide critical evidence about whether new treatments and procedures are effective, and they are generally recognised as central to the advancement of medical knowledge. However, many important questions raised in the wake of the publication of clinical trial findings cannot be answered by the trial alone. For example, if a trial fails to find a treatment effect, there is always the question of why. If it demonstrates a treatment effect, the next obvious question is whether the intervention could realistically be rolled out into clinical practice and, if so, in what context. There are also interesting issues to explore around the experiences of the participants and those involved in the trial. In short, some of the questions that might be asked are: How did participants perceive the intervention? Were the possible benefits worth participants’ time, money and inconvenience? What did the participants think worked well and/or could have been done better? All of these types of questions are important, but they need to be answered using a systematic approach as part of a formal process evaluation.1, 2

A process evaluation, like all research, needs to be carefully planned and should have a protocol in which all important decisions are articulated prior to data analysis. Some journals now publish the protocols of process evaluations for very large trials involving complex interventions.3 Process evaluations that aim to explain the results of a trial should start with a logic model: a theoretical framework that explains how an intervention is expected to work and the factors that may determine its success or failure. There are many good guidelines on the theories, principles and methodologies involved in different kinds of process evaluations.1, 2

Process evaluations generally involve mixed methods research, which combines qualitative and quantitative methods.4 For example, a process evaluation might aim in part to explore how participants perceived an intervention. Addressing this question could involve a statistical analysis of trial data on uptake and adherence. In addition, a qualitative component could entail in-depth interviews or focus groups that explore these issues directly with study participants, and possibly also with family members and health care providers. Another question that might be addressed in a process evaluation is why an intervention was not found to have an effect. Perhaps the intervention failed because of contamination of the control group, or perhaps the ‘usual care’ that patients were receiving was of a higher standard than originally anticipated. To investigate these possibilities, data could be collected on the type and extent of treatment received by control participants. Such issues can only be properly investigated with a combination of qualitative and quantitative methodologies.

There is a nice example of a process evaluation in this month’s edition of Spinal Cord.5 The authors conducted a clinical trial to determine whether people with spinal cord injury could effectively coach their peers to self-manage their health. While the trial focused on whether this type of peer coaching was effective, the process evaluation explored how the coaches interacted with their peers, the roles they filled and the coaching strategies they adopted. It used a combination of qualitative and quantitative methodologies to analyse 504 telephone calls conducted over a 6-month period. This paper is just one process evaluation conducted in parallel with the main trial; the authors could have (and may have) conducted many other types of process evaluations to explore all sorts of issues.

Spinal Cord encourages submissions of process evaluations where they provide in-depth analyses of the nuances of large clinical trials and help explain trial results.