The Editors' Summary explains:
The researchers obtained data on all the clinical trials submitted to the FDA ... They then used meta-analytic techniques to investigate whether the initial severity of depression affected the HRSD [Hamilton Rating Scale for Depression] improvement scores for the drug and placebo groups in these trials. They confirmed first that the overall effect of these new generation of antidepressants was below the recommended criteria for clinical significance. Then they showed that there was virtually no difference in the improvement scores for drug and placebo in patients with moderate depression and only a small and clinically insignificant difference among patients with very severe depression. The difference in improvement between the antidepressant and placebo reached clinical significance, however, in patients with initial HRSD scores of more than 28—that is, in the most severely depressed patients. Additional analyses indicated that the apparent clinical effectiveness of the antidepressants among these most severely depressed patients reflected a decreased responsiveness to placebo rather than an increased responsiveness to antidepressants.

The press simplified it further. The MSNBC headline was "Antidepressants may not help many patients". The Guardian announced: "Prozac, used by 40m people, does not work say scientists".
Reactions, adverse and otherwise
There were reactions to the effect that "we've known all along antidepressants don't work" and, at the other extreme, "nothing could ever convince me that antidepressants don't work."
A lot of reaction came from people who believe they have benefited from antidepressants. See, for example, the comments following a summary of the study at depression.about.com.
The blogosphere had plenty of reactions: FuturePundit, Action Potential (the Nature Neuroscience blog), The MindFields College Blog, and on and on.
And the journal itself, PLoS Medicine, had an enormous number of responses to the paper.
Betta check the meta
The heart of the findings in this paper is the meta-analysis itself, and when I examined it, two things jumped out immediately. The figure below shows them both.
There's a lot to look at in the figure. The red triangles represent the results of the patients who received the antidepressant. The bigger the triangles, the more weight they receive in the analysis. Similarly, the circles represent the placebo results. The solid red curve is a model fit to the antidepressant results. The dashed blue curve is a model fit to the placebo results. The green region shows where there is a clinically important difference between the curves.
First, look at the vertical axis, labeled "Improvement (d)" and ranging from 0 to 2. This is the mean improvement in the Hamilton Rating Scale for Depression (HRSD), but it has been divided by the standard deviation. Why divide by the standard deviation? This is what you might do if each study used a different rating scale, in order to standardize things. But here it's not necessary: every study used the HRSD, so it would be better to keep the results in raw HRSD points rather than standardize.
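To make the distinction concrete, here is a minimal sketch of the two quantities for a single trial arm. The HRSD scores below are invented for illustration; the point is only that the raw improvement is in HRSD points, while the standardized version divides by the standard deviation and so loses the original units.

```python
import statistics

# Hypothetical HRSD scores for one trial arm (invented numbers).
baseline = [24, 26, 22, 28, 25, 23]   # scores at entry
endpoint = [14, 18, 15, 20, 16, 13]   # scores at end of trial

changes = [b - e for b, e in zip(baseline, endpoint)]

raw_improvement = statistics.mean(changes)   # in HRSD points
sd = statistics.stdev(changes)
standardized = raw_improvement / sd          # unitless, "d"-style effect

print(raw_improvement, standardized)
```

Standardizing is harmless when scales differ across studies, but when every study reports the same scale it throws away interpretable units and injects each study's standard deviation into the effect estimate.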
Second, if triangles represent antidepressant results and circles represent placebo results from the studies, how do they pair up? Each study has two "arms": an antidepressant arm and a placebo arm, but on the figure you can't tell which triangle belongs with which circle. This points to an important problem: the authors meta-analyzed the antidepressant arms separately from the placebo arms. But the studies were randomized controlled trials, which means that within each study the two arms are comparable. Ignoring this can introduce bias. The standard approach in meta-analysis is to compute a contrast between the two arms within each study, and then meta-analyze these contrasts.
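The standard approach described above can be sketched in a few lines. The summary statistics below are invented for illustration, and this is a plain fixed-effect, inverse-variance pooling, not a reconstruction of any particular published analysis: each trial contributes a drug-minus-placebo contrast, and the contrasts are pooled with weights proportional to their precision.

```python
# (mean change drug, SE drug, mean change placebo, SE placebo) per trial
# -- hypothetical numbers for illustration only.
trials = [
    (10.2, 0.8, 8.1, 0.9),
    (9.5, 0.6, 7.9, 0.7),
    (11.0, 1.1, 8.8, 1.0),
]

contrasts = []
weights = []
for mean_drug, se_drug, mean_plac, se_plac in trials:
    diff = mean_drug - mean_plac        # within-trial contrast
    var = se_drug**2 + se_plac**2       # variance of that contrast
    contrasts.append(diff)
    weights.append(1.0 / var)           # inverse-variance weight

# Fixed-effect pooled estimate of the drug-placebo difference.
pooled = sum(w * c for w, c in zip(weights, contrasts)) / sum(weights)
print(pooled)
```

Because each contrast is formed within a randomized trial, the comparison is protected by the randomization; pooling the drug arms and placebo arms separately, as the paper did, gives up that protection.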
But does either of these points make much of a difference? It turns out that they do. PJ Leonard took the trouble of rerunning the analyses using raw HRSD scores and the standard within-study-contrast approach rather than the separate-arms analysis of Kirsch and co-authors, and obtained an effect about 50% larger than theirs, with stronger evidence of clinical importance. Leonard also performed a regression analysis corresponding to the figure above.
Robert Waldmann has also done some interesting work on this.
Overcoming depression: there's no silver bullet
The evidence doesn't seem to support the notion that antidepressants "don't work". The overheated media response to this article was unfortunate. And that's a topic in itself.
Nonetheless, it seems that on average the effect of antidepressants is hardly overwhelming. So far there's no silver bullet for depression. Drugs can help, but so can other interventions. Including kindness and understanding.
Update: 11Apr2008 Thanks to a post on The Home for Wayward Statisticians, I found a couple more interesting links. One is by Mark Liberman on Language Log. The other is an editorial in Nature.