Thursday, September 29, 2011

Too much of nothing

Is more placebo better?

A friend of mine pointed me to the above TED talk, by Ben Goldacre. It's an entertaining presentation with lots of interesting content, although Goldacre's discussion of the placebo effect—"one of the most fascinating things in the whole of medicine" (6:32)—is a little weak. At 6:47, he says:

We know for example that two sugar pills a day are a more effective treatment for getting rid of gastric ulcers than one sugar pill a day. Two sugar pills a day beats one pill a day. And that's an outrageous and ridiculous finding, but it's true.
Notice that the claim is not about pain, but about actually healing the ulcers.

The source of this claim is apparently a 1999 study by de Craen and co-authors titled "Placebo effect in the treatment of duodenal ulcer" [free full text/pdf]. It's a systematic review based on 79 randomized trials comparing various drugs to placebo, taken either four times a day or twice a day depending on the study. (Note that Goldacre refers to twice a day versus once a day; I'm uncertain of the reason for the difference.) From each trial, the authors extracted the results in the placebo group only, obtaining the following results:

The pooled 4 week healing rate of the 51 trials with a four times a day regimen was 44.2% (805 of 1821 patients) compared with 36.2% (545 of 1504 patients) in the 28 trials with a twice a day regimen
This 8% difference was statistically significant, and remained so even when several different statistical models were used.
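The authors' significance claim can be checked, at least roughly, from the pooled counts alone. Here is a minimal sketch using a standard two-proportion z-test with only the Python standard library; it is an approximation for illustration, not the statistical models the authors actually fit:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal tail: 2 * (1 - Phi(|z|))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1 - p2, z, p_value

# Pooled 4-week healing rates from de Craen et al.:
# four times a day: 805/1821; twice a day: 545/1504
diff, z, p = two_proportion_z(805, 1821, 545, 1504)
print(f"difference = {diff:.3f}, z = {z:.2f}, p = {p:.1e}")
```

The roughly 8-percentage-point difference comes out highly significant by this crude test, consistent with the paper's report.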

However, the authors are up-front about a key limitation of the study: "We realize that the comparison was based on nonrandomized data." Even though the data were obtained from randomized trials, none of the trials individually compared a four-times-a-day placebo regimen to a twice-a-day placebo regimen, so the analysis is a nonrandomized comparison. What if there were important differences between the patients, the study procedures, or the overall medical care provided in the four-times-a-day trials and the twice-a-day trials? The authors discuss various attempts to adjust for gender, age, smoking, and type of comparator drug, and report that this made little difference. Still, they acknowledge that:

Although we adjusted for a number of possible confounders, we can not rule out that in this nonrandomized comparison the observed difference was caused by some unrecognized confounding factor or factors.
The strength of a randomized comparison is that important differences between groups are unlikely—even when it comes to unrecognized factors. Although the authors go on to consider other possible biases, their bottom line is:
... we speculate that the difference between regimens was induced by the difference in frequency of placebo administration.
The results of this study are intriguing, but they're hardly definitive.



Sunday, September 25, 2011

The placebo defect

Suppose a clinical trial randomizes 100 patients to receive an experimental drug in the form of pills and an equal number of patients to receive identical pills except that they contain no active ingredient, that is, placebo. The results of the trial are as follows: 60 of the patients who received the experimental drug improved, compared to 30 of the patients who received the placebo. The drug clearly works better than the placebo.[1] But 30% of the patients who received the placebo did get better. There seems to be a placebo effect, right?
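As a check on the claim in footnote [1], the two-sided p-value from Fisher's exact test for this hypothetical 60/100-versus-30/100 table can be computed from the hypergeometric distribution using nothing but the Python standard library. This is an illustrative sketch, not how one would normally run the test in practice:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c          # fixed margins
    denom = comb(n, col1)

    def prob(k):                       # P(top-left cell = k)
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    p_obs = prob(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# 60 of 100 improved on the drug vs. 30 of 100 on placebo
p = fisher_exact_two_sided(60, 40, 30, 70)
print(f"p = {p:.2e}")
```

The resulting p-value is indeed far below 0.001, so chance is safely ruled out for the drug-versus-placebo difference.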

Wrong. The results from this trial provide no information about whether or not there is a placebo effect. To determine whether there is a placebo effect, you would need to compare the outcomes of patients who received placebo with the outcomes of patients who received no treatment. And not surprisingly, trials with a no-treatment arm are quite rare.

But there are some. In a landmark paper published in the New England Journal of Medicine in 2001 (free full text), Asbjørn Hróbjartsson and Peter Gøtzsche identified 130 trials in which patients were randomly assigned to either placebo or no treatment. Their conclusions?
We found little evidence in general that placebos had powerful clinical effects. Although placebos had no significant effects on objective or binary outcomes, they had possible small benefits in studies with continuous subjective outcomes and for the treatment of pain.
How could that be? Returning to our hypothetical trial, recall that among the patients who received placebo, 30% improved. The question is, how many would have improved had they not received placebo? If the answer is 10%, then there is a 20% placebo effect. But if the answer is 30%, then there is no placebo effect at all. What Hróbjartsson and Gøtzsche found was that in most cases there was no significant placebo effect. The exception—and it is an interesting one—was in studies with continuous subjective outcomes and for the treatment of pain. It is not hard to imagine how a placebo effect could operate in such cases. The expectation of an effect can strongly influence an individual's subjective experience and assessment of pain, satisfaction, and so forth.

A study published this summer provides a nice illustration. Wechsler and colleagues randomized patients with asthma to receive an inhaler containing a bronchodilator medication (albuterol), a placebo inhaler, sham acupuncture, or no intervention. When patients were asked to rate their improvement, the results were as follows:

Self-rated improvement was similar between the active-medication, placebo, and sham-acupuncture groups, and significantly greater than in the no-intervention group.

When an objective measure of respiratory function (maximum forced expiratory volume in 1 second, FEV1) was made, the results were as follows:

The objective measure of improvement was similar between the placebo, sham-acupuncture, and no-intervention groups, and significantly less than in the active-medication group.

At least in this study, it appears that a placebo effect can operate when the outcome of interest is self-rated improvement, but not when an objective outcome is used. This finding is in accordance with what Hróbjartsson and Gøtzsche originally reported, as well as with an update of their review published in 2004 (free pdf).

Indeed the notion of a placebo effect in the case of objectively-measured outcomes has always seemed a little shaky, and the putative mechanisms rather speculative. So why has the placebo effect commanded so much attention?

Fascination with the placebo effect

Although placebos had probably been used clinically long before[2], it was a 1955 paper by Henry Beecher, published in the Journal of the American Medical Association and titled "The Powerful Placebo," that brought widespread attention to the placebo effect. Beecher's analysis of 15 placebo-controlled trials for a variety of conditions showed that 35% of the patients who received placebo improved, and he referred to this as "real therapeutic effects" of placebo. As discussed above, this mistakenly attributes clinical improvement among patients who received placebo to an effect of the placebo itself, without considering other possible causes such as the natural course of the illness. Unfortunately Beecher's error was not widely understood, and the mystique of the placebo was cemented.

Over the years, the placebo effect has received a tremendous amount of attention in both the academic and popular press. A search of PubMed, a publicly-accessible database of citations of biomedical publications, reveals 527 publications with the words "placebo effect" in the title, going back to 1953. This number is particularly impressive given that not all articles on the topic—for instance, Beecher's paper itself—include the words "placebo effect" in their title. A Google search of "placebo effect" reports "about 5,220,000 results". Why has so much attention been given to such a dubious notion?

One reason may be our fascination with mind-body interactions. Conventional medicine, perhaps influenced by the philosophy of René Descartes, has tended to treat the mind and body as entirely separate. It is clear that this is not so, perhaps most obviously with regard to mental health. Perhaps in reaction, some fuzzy thinking has developed around the idea of mind-body interactions. New-age and alternative-medicine movements have often entailed beliefs about how positive attitudes can heal the body, and conversely how negative ones can lead to illness. While this may contain elements of truth, at its worst it fosters dogmatic thinking and pseudoscience.

Curiously, however, in more scientific circles recent developments in neurobiology have also encouraged interest in the placebo effect. Advances in understanding of how the brain works have led to research efforts to understand the mechanism of action of the placebo effect. This is more than a little odd, given the fairly sparse evidence for such an effect! An article in Wired Magazine asserts that "The fact that taking a faux drug can powerfully improve some people's health—the so-called placebo effect—has long been considered an embarrassment to the serious practice of pharmacology." Note that the article takes for granted "the fact" that the placebo effect works.

Indeed, the term "the placebo effect" itself is part of the problem. By labeling it as an effect, we lend it credence. Arguing against the placebo effect seems to put one at an immediate disadvantage. Hasn't everyone heard of the placebo effect? How could anyone deny such an established fact?

1. ^Relative to the sample size, the difference is large enough that we can safely rule out chance as an explanation. In statistical terms, a test of the hypothesis that the improvement rates in the two groups are equal using Fisher's exact test gives a p-value < 0.001.
2. ^For some historical background, see The Problematic Placebo, by Stanley Scheindlin [pdf].

