Main

“Antidepressant drugs don't work — official study” (The Independent); “Depression drugs don't work, finds data review” (The Times); “Prozac, used by 40m people, does not work say scientists” (The Guardian) — a selection of headlines from some of the UK's national newspapers on 26 February 2008. The basis for these news stories: a meta-analysis of clinical trials of several widely used antidepressants published the same day by Kirsch and colleagues1.

These headlines are a cause for concern, as the meta-analysis does not show that antidepressants do not work. Before turning to the flaws in how the study was reported, however, it is worth considering the study's merits, as they are relevant to the potential future impact of such studies in general.

One such merit is the study's use of unpublished data, which has so far been relatively rare. With the aid of the Freedom of Information Act, the authors obtained data on all clinical trials submitted to the US FDA for the licensing of four antidepressants. In this way, they avoided 'publication bias', which can inflate estimates of a drug's effectiveness because clinical trials with positive findings are more likely to be published than those with inconclusive or negative findings.
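The mechanism is easy to demonstrate in the abstract. The following minimal Python sketch uses made-up numbers, not the trial data at issue: it simulates many hypothetical trials of a drug with a fixed true effect, then shows how an average taken over 'published' trials alone, under a crude model in which positive results are preferentially published, overstates that effect.

```python
import random

random.seed(1)

# Hypothetical illustration (not the Kirsch et al. data): simulate many
# small trials of a drug whose true benefit over placebo is 2 points on
# some symptom scale, then compare the estimate from all trials with the
# estimate from 'published' trials only.
TRUE_EFFECT = 2.0   # assumed true drug-placebo difference
NOISE_SD = 4.0      # assumed trial-to-trial sampling noise
N_TRIALS = 10_000

all_results = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_TRIALS)]

# Crude publication model: clearly positive trials are always published;
# the rest appear in print only 20% of the time.
published = [r for r in all_results if r > 3.0 or random.random() < 0.2]

print(f"mean effect over all trials:       {sum(all_results) / len(all_results):.2f}")
print(f"mean effect over published trials: {sum(published) / len(published):.2f}")
```

Averaging only the published trials in this toy model yields a noticeably larger effect than the true value, which is precisely the distortion that access to the complete FDA data set removes.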

Indeed, the need to address the problem of publication bias has been a driving force behind recent efforts to ensure that clinical trials are appropriately registered, and therefore identifiable — for example, by those conducting systematic reviews. Such efforts have had considerable success so far. Given the importance of transparency in clinical-trial reporting, and the growing emphasis on achieving it, meta-analyses such as that by Kirsch et al. seem likely to become increasingly feasible and more common in the future.

With this in mind, it is worth considering why this study generated the reaction it did. Returning to its methodology, the authors used meta-analytical techniques to assess the relationship between initial disease severity and the improvement in scores for drug and placebo on a widely used scale for measuring the severity of depression, the Hamilton Rating Scale for Depression1. Overall, the analysis indicated a significant difference in efficacy between drug and placebo, but the authors found that the size of this difference reached the criterion for clinical significance set by the UK's National Institute for Health and Clinical Excellence (NICE) only in the most severely depressed patients. The authors go on to state: “Given these data, there seems little evidence to support the prescription of antidepressant medication to any but the most severely depressed patients, unless alternative treatments have failed to provide benefit”1.
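The shape of that analysis can be made concrete with a small sketch. The Python below uses entirely hypothetical trial summaries, not the Kirsch et al. data, and takes a difference of roughly three Hamilton-scale points as a stand-in for the NICE clinical-significance criterion: it fits an ordinary least-squares line of the drug-placebo difference against baseline severity and reports where that line crosses the threshold.

```python
import statistics

# Hypothetical, made-up trial summaries: (baseline Hamilton severity,
# drug-placebo difference in improvement). Not the Kirsch et al. data.
trials = [
    (24.0, 1.1), (25.5, 1.6), (26.0, 1.9),
    (27.5, 2.4), (28.0, 2.8), (29.5, 3.4),
]

xs = [baseline for baseline, _ in trials]
ys = [diff for _, diff in trials]

# Ordinary least-squares fit of the difference against baseline severity.
mx, my = statistics.fmean(xs), statistics.fmean(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# A drug-placebo difference of about 3 Hamilton points is taken here as
# the clinical-significance threshold; find where the fitted line crosses it.
THRESHOLD = 3.0
crossing = (THRESHOLD - intercept) / slope
print(f"difference = {intercept:.2f} + {slope:.2f} * baseline severity")
print(f"threshold of {THRESHOLD} reached at baseline severity of about {crossing:.1f}")
```

With these invented numbers, the crossing point lands near the top of the severity range, which mirrors the qualitative pattern Kirsch et al. report: a drug-placebo difference that is statistically significant throughout but clinically significant only among the most severely depressed groups.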

It seems possible that such statements, which are echoed in the “Editor's Summary” for the paper, played a significant part in how the study was reported in the mainstream media. Moreover, relatively little attention is drawn to the limitations of the analysis, either in the paper itself or in the Summary.

To mention a few briefly: the analysis includes only clinical trials conducted before the drugs were approved, omitting the considerable body of data on their effects from trials conducted after approval. It is also well recognized that antidepressant trials carried out for the purpose of regulatory approval do not necessarily reflect the real-world settings in which the drugs are used. All but one of the trials analysed involved groups with mean initial depression scores in the 'very severe' range, limiting the strength of any extrapolation to less severely depressed patients. Furthermore, the placebo effect in antidepressant trials is large, and the study compared antidepressants with placebo, not directly with other treatments such as cognitive behavioural therapy. On the basis of this study alone, then, there is not a strong case for sweeping statements about the relative merits of different types of treatment for less severely depressed patients.

Such limitations need not detract from the value of meta-analytical studies such as this one, but their significance is unlikely to be fully appreciated by readers who are not well versed in the statistical analysis of clinical trials. Given this, the onus to help avoid the harm that flawed media reporting of such studies can cause should arguably fall most strongly on the authors, reviewers, editors and publishers.