Tuesday, September 29, 2009

Why do we overinterpret study findings?

MSNBC recently reported that a new study suggests "U.S. states whose residents have more conservative religious beliefs on average tend to have higher rates of teenagers giving birth". (I learned of this on Rationally Speaking.) The study itself is Open Access, so all the details are freely available. The scatterplot illustrates the strong association the authors found. Now, the authors were reasonably cautious in how they interpreted their findings. The trouble is, the general public may not be.

A common error is to conclude the study shows that religiosity causes higher teen birth rates. But correlation does not imply causation. It could be that higher teen birth rates cause religiosity. Or perhaps a third, unidentified factor causes both.

But isn't the strength of association still impressive? It is. But what if, as I just suggested, there are other variables involved? Such confounding variables (or confounders, as they are commonly known) can wreak havoc on this sort of analysis. Indeed, the authors of the study did adjust for median household income and abortion rate (both at the state level). But it is possible that other confounders are lurking. And unfortunately, we tend to forget entirely about the possibility of confounders when we hear about study findings.
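To see how a confounder can manufacture a correlation out of thin air, here is a small simulation (invented data, nothing to do with the study itself): an unobserved variable Z drives both X and Y, and even though X and Y have no effect on each other, they end up strongly correlated.

```python
# Hypothetical simulation: a confounder Z influences both X and Y,
# producing a correlation between them despite no direct causal link.
import random

random.seed(0)
n = 1000
pairs = []
for _ in range(n):
    z = random.gauss(0, 1)       # unobserved confounder
    x = z + random.gauss(0, 1)   # X depends only on Z (plus noise)
    y = z + random.gauss(0, 1)   # Y depends only on Z (plus noise)
    pairs.append((x, y))

def corr(pairs):
    # Pearson correlation, computed from scratch
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

# Theory says the correlation should be about 0.5 here:
# association without any causation between X and Y.
print(round(corr(pairs), 2))
```

Adjusting for Z (if you had measured it) would make the X–Y association vanish; the danger is the Z you never thought to measure.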

Another error is to conclude that the findings directly apply to individuals. Here I will quote the authors directly:
We would like to emphasize that we are not attempting to use associations between teen birth rate and religiosity, using data aggregated at the state level, to make inferences at the individual level. It would be a statistical and logical error to infer from our results, “Religious teens get pregnant more often.” Such an inference would be an example of the ecological fallacy ... The associations we report could still be obtained if, hypothetically, religiosity in communities had an effect of discouraging contraceptive use in the whole community, including the nonreligious teens there, and only the nonreligious teens became pregnant. Or, to create a different imaginary scenario, the results could be obtained if religious parents discouraged contraceptive use in their children, but only nonreligious offspring of such religious parents got pregnant. We create these scenarios simply to illustrate that our ecological correlations do not permit statements about individuals.
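The authors' hypothetical scenarios can be made concrete with a toy calculation (the numbers below are invented for illustration, not taken from the study): the state-level correlation comes out positive even though, within every single state, the religious individuals have the lower rate.

```python
# Toy illustration of the ecological fallacy with made-up numbers.
# Each tuple: (share of residents who are religious,
#              rate among religious, rate among nonreligious).
# Within EVERY state, religious < nonreligious.
states = [
    (0.2, 10, 20),
    (0.4, 20, 30),
    (0.6, 30, 40),
    (0.8, 40, 50),
]

for share, rel_rate, nonrel_rate in states:
    # The state's overall rate is a weighted average of the two groups.
    overall = share * rel_rate + (1 - share) * nonrel_rate
    print(f"religiosity={share:.1f}  overall rate={overall:.1f}")
```

The overall rates climb steadily as state-level religiosity climbs, so a state-level scatterplot would show a clean positive association — yet inferring "religious individuals have higher rates" from it would get the individual-level relationship exactly backwards.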
To err is human ...

My goal here has not been to criticize the authors of this study, nor the media. Rather, what I find remarkable is how such a simple statement—"states whose residents have more conservative religious beliefs on average tend to have higher rates of teenagers giving birth"—can be so easily misinterpreted, and in so many different ways! Does anyone know of any research about our tendency to overinterpret scientific findings? Of course, we'd probably overinterpret it.



Anonymous Jon Peltier said...

Cause and effect in this case is pretty clear, isn't it? Religiosity leads to not teaching important life skills like birth control. Abstinence? Come on; teenagers are going to fool around, fact of life. Teenagers without knowledge (and availability) of birth control get pregnant.

8:26 AM, September 30, 2009  
Blogger Nick Barrowman said...


I agree that your explanation is plausible, and indeed may be largely correct. But the study itself doesn't support any inferences about cause and effect. It is also plausible that in states with high teen birth rates there are consequent social problems that push people (perhaps different people) toward religious solutions, i.e. a kind of conservative backlash. More investigation is needed—ideally prospective studies of individuals—to get a better understanding of cause and effect.

11:10 AM, September 30, 2009  
Anonymous Anonymous said...

Nick -- I used to know a guy-- a guy who was sober as a judge, gainfully employed, an effective parent and citizen. A guy you'd want on a jury -- or as a neighbor. Then he explained to me that he must have some special electrical force field that surrounded him. His evidence? He said that streetlights would often flicker or shut off when he was near one of them. He was convinced that his body emitted some kind of interference that made this happen. I tried to tell him that streetlights flicker and shut off when he's not around... he was just not around to see it! Bulbs burn out, or bright lightning or headlights can temporarily trip the light sensor and shut them off. He would not believe me. Streetlights flickered when he was around. Period. He caused it in some way. It's so frustrating to try to reason with this "logic." When I hear supposedly intelligent journalists or elected officials spout off about some correlation-causation fallacy, I want to scream. Keep up the great blog... Facebook Nancy in Minneapolis

8:22 PM, September 30, 2009  
Blogger Nick Barrowman said...

Hi Nancy,

Great story! You know, in a way I'm less concerned about idiosyncratic irrationality, since it's unlikely to have much impact. But when lots of people make the very same mistake it can be really damaging. Studies are often taken to be much more definitive than they really are. When later studies seem to contradict them, the public can become jaded and confused. (And sometimes people say unkind things about statistics!)

10:12 PM, September 30, 2009  
