Sunday, September 25, 2011

The placebo defect

Suppose a clinical trial randomizes 100 patients to receive an experimental drug in the form of pills and an equal number of patients to receive identical pills that contain no active ingredient, that is, a placebo. The results of the trial are as follows: 60 of the patients who received the experimental drug improved, compared to 30 of the patients who received the placebo. The drug clearly works better than the placebo.[1] But 30% of the patients who received the placebo did get better. There seems to be a placebo effect, right?
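Footnote 1's claim can be checked directly. Here is a minimal sketch of a two-sided Fisher's exact test for this hypothetical 2×2 table, built from the hypergeometric distribution using only the Python standard library (the function name and the floating-point tolerance are my own choices):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    rows are treatment groups, columns are improved / not improved."""
    n = a + b + c + d
    row1 = a + b   # size of the drug group
    col1 = a + c   # total number who improved

    # Hypergeometric probability of k improvements in the drug group,
    # given the fixed margins
    def prob(k):
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # Sum the probabilities of all tables at least as extreme
    # (i.e., at most as probable) as the one observed
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Hypothetical trial: 60/100 improved on drug, 30/100 on placebo
p = fisher_exact_two_sided(60, 40, 30, 70)
print(f"two-sided p = {p:.2e}")  # well below 0.001
```

With equal improvement rates in the two arms the same function returns a p-value of 1, as it should.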

Wrong. The results from this trial provide no information about whether or not there is a placebo effect. To determine whether there is a placebo effect, you would need to compare the outcomes of patients who received placebo with the outcomes of patients who received no treatment. And not surprisingly, trials with a no-treatment arm are quite rare.

But there are some. In a landmark paper published in the New England Journal of Medicine in 2001 (free full text), Asbjørn Hróbjartsson and Peter Gøtzsche identified 130 trials in which patients were randomly assigned to either placebo or no treatment. Their conclusions?
We found little evidence in general that placebos had powerful clinical effects. Although placebos had no significant effects on objective or binary outcomes, they had possible small benefits in studies with continuous subjective outcomes and for the treatment of pain.
How could that be? Returning to our hypothetical trial, recall that among the patients who received placebo, 30% improved. The question is, how many would have improved had they not received placebo? If the answer is 10%, then there is a placebo effect of 20 percentage points. But if the answer is 30%, then there is no placebo effect at all. What Hróbjartsson and Gøtzsche found was that in most cases there was no significant placebo effect. The exception—and it is an interesting one—was in studies with continuous subjective outcomes and for the treatment of pain. It is not hard to imagine how a placebo effect could operate in such cases. The expectation of an effect can strongly influence an individual's subjective experience and assessment of pain, satisfaction, and so forth.
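The arithmetic here can be spelled out in a couple of lines (a trivial sketch; the improvement rates are the hypothetical values from the text):

```python
def placebo_effect(placebo_rate, no_treatment_rate):
    """The placebo effect is the improvement rate in the placebo arm
    minus the rate that would have occurred with no treatment at all
    (the unobserved counterfactual)."""
    return placebo_rate - no_treatment_rate

# 30% improved on placebo; the counterfactual determines the effect:
print(f"{placebo_effect(0.30, 0.10):.0%}")  # 20-percentage-point effect
print(f"{placebo_effect(0.30, 0.30):.0%}")  # no placebo effect at all
```

The point is that a two-arm drug-versus-placebo trial never observes the no-treatment rate, so the subtraction cannot be performed from its data alone.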

A study published this summer provides a nice illustration. Wechsler and colleagues randomized patients with asthma to receive an inhaler containing a bronchodilator medication (albuterol), a placebo inhaler, sham acupuncture, or no intervention. When patients were asked to rate their improvement, the results were as follows:

Self-rated improvement was similar between the active-medication, placebo, and sham-acupuncture groups, and significantly greater than in the no-intervention group.

When an objective measure of respiratory function (maximum forced expiratory volume in one second, FEV1) was used, the results were as follows:

The objective measure of improvement was similar between the placebo, sham-acupuncture, and no-intervention groups, and significantly less than in the active-medication group.

At least in this study, it appears that a placebo effect can operate when the outcome of interest is self-rated improvement, but not when an objective outcome is used. This finding is in accordance with what Hróbjartsson and Gøtzsche originally reported, as well as with an update of their review published in 2004 (free pdf).

Indeed the notion of a placebo effect in the case of objectively-measured outcomes has always seemed a little shaky, and the putative mechanisms rather speculative. So why has the placebo effect commanded so much attention?

Fascination with the placebo effect

Although placebos had probably been used clinically long before[2], it was a 1955 paper published in the Journal of the American Medical Association by Henry Beecher, titled The Powerful Placebo, that brought widespread attention to the placebo effect. Beecher's analysis of 15 placebo-controlled trials for a variety of conditions showed that 35% of the patients who received placebo improved, and he referred to this as "real therapeutic effects" of placebo. As discussed above, this mistakenly attributes clinical improvement among patients who received placebo to an effect of the placebo itself, without considering other possible causes such as the natural course of the illness. Unfortunately, Beecher's error was not widely understood, and the mystique of the placebo was cemented.

Over the years, the placebo effect has received a tremendous amount of attention in both the academic and popular press. A search of PubMed, a publicly-accessible database of citations of biomedical publications, reveals 527 publications with the words "placebo effect" in the title, going back to 1953. This number is particularly impressive given that not all articles on the topic—for instance, Beecher's paper itself—include the words "placebo effect" in their title. A Google search of "placebo effect" reports "about 5,220,000 results". Why has so much attention been given to such a dubious notion?

One reason may be our fascination with mind-body interactions. Conventional medicine, perhaps influenced by the philosophy of René Descartes, has tended to treat the mind and body as entirely separate. It is clear that this is not so, perhaps most obviously with regard to mental health. Perhaps in reaction, some fuzzy thinking has developed around the idea of mind-body interactions. New-age and alternative-medicine movements have often entailed beliefs about how positive attitudes can heal the body, and conversely how negative ones can lead to illness. While this may contain elements of truth, at its worst it fosters dogmatic thinking and pseudoscience.

Curiously, however, in more scientific circles recent developments in neurobiology have also encouraged interest in the placebo effect. Advances in understanding of how the brain works have led to research efforts to understand the mechanism of action of the placebo effect. This is more than a little odd, given the fairly sparse evidence for such an effect! An article in Wired Magazine asserts that "The fact that taking a faux drug can powerfully improve some people's health—the so-called placebo effect—has long been considered an embarrassment to the serious practice of pharmacology." Note that the article takes for granted "the fact" that the placebo effect works.

Indeed, the term "the placebo effect" itself is part of the problem. By labeling it as an effect, we lend it credence. Arguing against the placebo effect seems to put one at an immediate disadvantage. Hasn't everyone heard of the placebo effect? How could anyone deny such an established fact?

______________________________
1. ^Relative to the sample size, the difference is large enough that we can safely rule out chance as an explanation. In statistical terms, a test of the hypothesis that the improvement rates in the two groups are equal using Fisher's exact test gives a p-value < 0.001.
2. ^For some historical background, see The Problematic Placebo, by Stanley Scheindlin [pdf].



Comments:

Blogger raz said...

Wow, thanks for that beautiful explanation. This has always been a huge pet peeve of mine when studies report the effect of placebos, particularly when outcomes are self-reported as with pain, depression, etc.

4:56 PM, September 25, 2011  
Blogger Random John said...

After working with a few of these trials, I learned that the placebo control was really controlling for a variety of effects, which are hard to disentangle and really depend on the disease or condition studied. Among these effects are the honest-to-goodness placebo effect (as mentioned above), the natural course of the disease, the body's ability to recover from disease, environmental influences on symptoms (dietary or otherwise), concomitant medications and drug-drug interactions, and just about anything else that can influence someone's response.

I think there are a few studies where they look at placebo vs. no treatment, but the inability to blind them brings in other effects as well, mostly related to bias of recording symptoms and/or severity. You'd really have to have a hard endpoint and control everything carefully.

5:36 PM, September 25, 2011  
Blogger Nichol Brummer said...

Having chronic bowel disease, I've learnt from myself how easy it is to see correlations between how I feel today and what I ate yesterday ... building theories that are ultimately nonsense. The effects of wishful thinking while interpreting data that can often fluctuate rather randomly ... they might easily explain a large part of the 'placebo effect'.

7:14 PM, September 25, 2011  
Blogger Nick Barrowman said...

raz: Thanks, it's been a pet peeve for me too. In particular, media reports tend to present a very uncritical view of the placebo effect, which drives me crazy!

John: That's a good list of effects that may be behind "placebo response". I agree that even when a study has a no-treatment arm, attempting to measure the placebo effect could be difficult.

Nichol: I think it's really difficult to apply rigorous methodology in trying to assess one's own symptoms in relation to diet or treatments. A randomized n-of-1 trial can sometimes be used for a stable chronic condition, but you need a healthcare provider who is willing and able to carry it out.

9:07 PM, September 25, 2011  
Anonymous Bryan said...

Good stuff, but I assume in most cases the researcher isn't really interested in what the placebo effect is over a no-intervention group (the interest is squarely treatment versus placebo).

So, what's the point of including a no-treatment group unless one is indeed studying (versus controlling) placebo effects?

Bryan

9:22 PM, September 25, 2011  
Blogger Nick Barrowman said...

Bryan, I think you're right: the researcher is generally interested in treatment versus placebo. As you suggest, the only reason I can think of for adding a no-intervention group is when there is particular interest in a possible placebo effect.

9:40 PM, September 25, 2011  
Anonymous Sarah said...

"At least in this study, it appears that a placebo effect can operate when the outcome of interest is self-rated improvement, but not when an objective outcome is used."

So a placebo doesn't actually make people better, but it makes them *feel* better. But isn't that sometimes a useful outcome in itself?

7:18 AM, September 26, 2011  
Blogger Nick Barrowman said...

Yes, there is some evidence that a placebo can make people feel a bit better (though not as much as people often seem to think when they just look at the response rate in the placebo group of a two-arm trial). This has led some people to believe that it is ethically acceptable to deceptively prescribe placebo to a patient! (There is, however, some recent literature suggesting that placebos can help people feel better even when they know it's just a placebo: "This pill doesn't contain any active medication, but it seems to make people feel better.") I tend to believe, however, that we should be focusing on things that help with people's conditions, not just how they feel about them. With psychological conditions, though, the lines are definitely blurred.

What bothers me most, however, is the not-uncommon belief that placebos can work wonders (through some vague mind-body interactions) to improve all manner of conditions, from heart disease to cancer. That seems at best doubtful, and at worst cruel.

7:45 AM, September 26, 2011  
Anonymous Steve Silberman said...

Nick, thanks for this post, but as the author of the Wired article, I should point out that there are many significant problems with the "landmark" Hróbjartsson and Gøtzsche meta-analysis.

Case in point. From the paper itself:

"The trials investigated 40 clinical conditions: hypertension, asthma, anemia, hyperglycemia, hypercholesterolemia, seasickness, Raynaud's disease, alcohol abuse, smoking, obesity, poor oral hygiene, herpes simplex infection, bacterial infection, common cold, pain, nausea, ileus, infertility, cervical dilatation, labor, menopause, prostatism, depression, schizophrenia, insomnia, anxiety, phobia, compulsive nail biting, mental handicap, marital discord, stress related to dental treatment, orgasmic difficulties, fecal soiling, enuresis, epilepsy, Parkinson's disease, Alzheimer's disease, attention-deficit–hyperactivity disorder, carpal tunnel syndrome, and undiagnosed ailments."

Does anyone truly believe or suggest that there is a placebo response in play in mental handicap, fecal soiling, poor oral hygiene, marital discord, bacterial infection, and infertility? Not that I've seen, and I've read hundreds of papers and a dozen books on the subject. If you want to "prove" that a subtle phenomenon doesn't exist, drown it in noise.

While Hróbjartsson and Gøtzsche made many valuable points, they cast their net too wide.

Steve

11:42 AM, September 26, 2011  
Blogger Nick Barrowman said...

Steve, I agree that the inclusion of such a great variety of trials may be problematic. But some have claimed that the placebo effect is a very general phenomenon, spanning many different clinical conditions, and that has been the perception of much of the general public. The Hróbjartsson and Gøtzsche meta-analysis has its shortcomings, but it does suggest that where there is a placebo effect it probably is quite subtle.

1:37 PM, September 26, 2011  
Anonymous Steve Silberman said...

It certainly is problematic, if what you're trying to disprove is the existence of an effect that is only relevant to certain kinds of disorders (such as inflammation, pain, and the other conditions for which significant placebo effects have been replicated in numerous studies). Otherwise, you risk having the end result of your meta-analysis being equivalent to the statement: "Placebo is not a panacea." I've never actually heard anyone claim that it is.

2:03 PM, September 26, 2011  
Blogger Nick Barrowman said...

I think it's really hard to do these studies. A paper by Hróbjartsson, Kaptchuk, and Miller, titled Placebo effect studies are susceptible to response bias and to other types of biases is currently in press in the Journal of Clinical Epidemiology. Quoting from the abstract: "The inherent nonblinded comparison between placebo and no-treatment is the best research design we have in estimating effects of placebo, both in a clinical and in an experimental setting, but the difference between placebo and no-treatment remains an approximate and fairly crude reflection of the true effect of placebo interventions. A main problem is response bias in trials with outcomes that are based on patients' reports. Other biases involve differential co-intervention and patient dropouts, publication bias, and outcome reporting bias. Furthermore, extrapolation of results to a clinical settings are challenging because of a lack of clear identification of the causal factors in many clinical trials, and the nonclinical setting and short duration of most laboratory experiments."

4:55 PM, September 26, 2011  
Anonymous Hebert - diseño de paginas web said...

Well, the example is pretty good :) and yes, I think there is a placebo effect :)

8:41 PM, September 27, 2011  
Blogger Disgruntled PhD said...

Hi, apologies for being a little late to the party, I tried to reply last week but my phone ate the response.

In any case, you need to look at the following Meissner et al paper http://www.biomedcentral.com/1741-7015/5/3

Essentially, they looked at placebo effects in clinical trials across the same kinds of studies Hróbjartsson and Gøtzsche used, and found that there was a medium-sized placebo effect when the outcome measure was a physical parameter, but none when the outcome was a biochemical parameter. They re-analysed the H&G data, and reported the same finding.

In addition, it is well known that placebo analgesia can (in many cases) be reversed by naloxone, suggesting that the endogenous opioid system is involved in these placebo effects, and ruling out the response-bias explanation.

4:48 AM, October 01, 2011  
Blogger Nick Barrowman said...

Thanks, Disgruntled,

I had a look at the Meissner et al paper. They attempted to measure the placebo effect in randomized trials that did not include a no-treatment group. To do this they did two things. First, they used trials involving stable diseases or conditions:

"... trials were excluded when the disease was expected to either improve or deteriorate during the study period, irrespective of experimental treatment, e.g., due to the natural course of the disease, or to co-interventions. For example, mild-to-moderate hypertension would be considered rather stable over a 4-week period in otherwise healthy patients but unstable in women developing hypertension during pregnancy."

Second:

"... we searched for trials in which the baseline data provided an appropriate reference to interpret the changes observed within the placebo groups as the effect of the placebo intervention itself and not of other, placebo-unrelated factors."

This is an interesting idea, but the examination of change within the placebo group is not a randomized comparison (for a different example of this, see my post about whether more placebo is better). Randomized comparisons, like the ones used by Hróbjartsson and Gøtzsche, are much stronger.

I haven't had a chance yet to look at the studies of naloxone.

12:11 PM, October 01, 2011  
Blogger Disgruntled PhD said...

Nick,

OK, I definitely take your point (and the issues with H&G are well known). There has been some interesting work on whether placebo effects are larger in experimental studies than in clinical trials (Vase 2002, 2005). Probably the best paper on the biochemical substrate of placebo is Benedetti et al 2003, Conscious and Unconscious Placebo Responses, as it seems to delineate some systems where expectancy and conditioning can produce meaningful placebo effects. I personally would argue that the experimental studies are far closer to clinical practice than are randomised controlled trials, and that the biggest flaw in H&G was the inclusion of all possible trials in the review.

In essence, you appear to be saying that comparisons made through meta-analysis are in general not valid, unless they are within trial comparisons. Am I correct? I see the point, but there's a lot of meta-analysis in less controversial areas that run foul of this standard.

3:58 AM, October 02, 2011  
Blogger Nick Barrowman said...

Thanks again, Disgruntled,

I just had a look at the Benedetti paper [free text/pdf].

It's a remarkable study, involving three different experiments (one on pain involving 60 healthy participants, one on motor performance involving just 10 participants with Parkinson's disease, and one on hormone levels involving 95 healthy participants). One reason it strikes me as remarkable is that it was approved by a research ethics board, given that it involved self-inflicted pain until it became "unbearable", surgical implantation of electrodes (in the Parkinson's patients, although perhaps that was part of their clinical treatment), placement of indwelling intravenous catheters, and the taking of several blood samples along with injection of a drug (sumatriptan) that stimulates growth hormone secretion and inhibits cortisol secretion.

A large part of the authors' focus is on conditioning effects, which, while related, are somewhat distinct from the placebo effect. Their conclusion:

"This suggests that placebo responses are mediated by conditioning when unconscious physiological functions such as hormonal secretion are involved, whereas they are mediated by expectation when conscious physiological processes such as pain and motor performance come into play, even though a conditioning procedure is performed."

I'm not surprised by a placebo effect mediated by expectation in the case of pain and motor performance. (Although motor performance is an objective measurement, I can see how expectancy could have an effect.) With regard to conditioning, I'm reminded of Pavlov's dog salivating at the ring of a bell. Presumably salivation is the result of a neurobiological cascade. So conditioned hormonal responses seem plausible. But that's not the usual context with placebos. One might argue that there is pervasive cultural conditioning about the effectiveness of medical treatments, and that could have a physiological effect, but that strikes me as speculative.

As an aside, some of the experimental design in the Benedetti study did surprise me. For example:

"All of the experiments were performed according to a randomized double-blind design in which neither the subject nor the experimenter knew what drug was being administered. To do this, either sumatriptan or saline solution was given. To avoid using a large number of subjects, when the saline injection had to be performed in groups 2, 3, 4, 5, 6, 7, 8, and 9, two or three subjects per group received sumatriptan and were interspersed among those who received the saline injection. Those subjects who received sumatriptan in place of saline were not included in the study because they were used only to allow the double-blind design."

In other words, they didn't use a standard randomized design because it would have required too large a sample size.

10:32 AM, October 02, 2011  
Blogger Nick Barrowman said...

Disgruntled,

You also wrote: "In essence, you appear to be saying that comparisons made through meta-analysis are in general not valid, unless they are within trial comparisons. Am I correct? I see the point, but there's a lot of meta-analysis in less controversial areas that run foul of this standard."

All things being equal, the strongest type of meta-analysis is a meta-analysis of randomized trials, where within each randomized trial a comparison is made, and then information from those comparisons is pooled across all the trials. I wouldn't say that other approaches are invalid, but (again, all things being equal) they are more prone to bias.

Meta-analysis of observational studies is sometimes performed simply because no randomized trials have been done, or perhaps because randomized trials would not be ethical, or even possible. Meta-analysis of single-group (uncontrolled) studies is sometimes done, and this is even more problematic. Having been involved in each of these types of studies, I wouldn't say they are invalid. Just that we should try to use the strongest methods available, and the weaker our methods, the more cautious should be our interpretation.

10:58 AM, October 02, 2011  
Anonymous Anonymous said...

My understanding is that the use of trials with placebo is to determine whether the treatment arm is superior to the placebo arm, regardless of whether the placebo arm had or did not have a true "placebo effect." So what is the complaint about really? We still want a placebo arm and not a "no treatment arm."

12:21 AM, October 03, 2011  
Blogger Nick Barrowman said...

That's right, placebos are used in trials to determine whether the treatment arm is superior to the placebo arm, regardless of whether or not the placebo arm had a true placebo effect. But there has also been a great deal of discussion about the "amazing" placebo effect, and there is a widespread belief that it is a very general and powerful phenomenon. As I noted in a previous comment, there have even been arguments that it would be ethically acceptable to deceptively prescribe placebo to patients. Furthermore, I think that focusing on a "placebo effect" may sometimes promote quackery, and tends to distract us from therapies that actually do work and where we understand the mechanism.

7:29 AM, October 03, 2011  
Blogger Nathanael Johnson said...

Hey Nick - I appreciate your clear explanation of the placebo v. no treatment problem.
In response to Sarah you wrote:
"I tend to believe, however, that we should be focusing on things that help with people's conditions, not just how they feel about them."
Why the pejorative "just"? For a limited set of conditions, the case could be made in precisely the other direction, that in the long run what matters is the way people feel about things, not their conditions. If a treatment for any chronic disease makes me feel better and more able to live a productive life, isn't that more important than my white blood cell count? Obviously there are some conditions that will kill you quick, no matter how happy you are, but there are tons of chronic conditions that mainly make people feel crappy.
But look, arguing that either feeling or mechanics is more important recapitulates the old Cartesian divide. We know that they are both important and intertwined in complex ways - ways that quacks and enthusiasts tend to get wrong.
I understand your point: this is a tough thing to study, and there are a hundred ways to show an effect when there is none. But we shouldn't fight bias with bias.

12:12 PM, October 04, 2011  
Blogger Nick Barrowman said...

Nathanael, thanks for your comment. I do think that it is important to distinguish between treatments that objectively improve a patient's condition and treatments that result in patient-reported improvement. However, patient-reported improvement may be transient if the underlying disease process is not modified. That said, in some cases, such as depression and chronic pain, there may not be any available objective measures. As you note, objective and subjective characteristics are "both important and intertwined in complex ways - ways that quacks and enthusiasts tend to get wrong".

8:01 PM, October 04, 2011  