Saturday, April 29, 2006

Probably incorrect probability

A recent post on the blog Blackademic about the alleged rape of a black woman by white Duke university lacrosse players generated a flurry of comments. One of them anonymously argued:
"Unfortunately, statistically a black women is significantly more likely to make a false accusation of rape than to have been raped by a white man. According to the National Crime Victimization Survey ( http://www.ojp.usdoj.gov/bjs/pub/pdf/cvus/current/cv0342.pdf ), less than .0004% of black rape victims were raped by whites. (The NCVS reports the percentage as 0% because there were less than 10 reported cases. I assumed 9 cases, to come up with an actual percentage) Even with the most conservative figure of 2% of rape allegations being false, this means in the case of the Duke Rape Case, the victim is 5000 times more likely to have made a false accusation than to have actually been raped."
There were some perplexed responses to this dramatic claim:
  • "yeah, cuz stats and figures are ALWAYS correct--whatever. it depends on who did the survery and for whom."
  • "How did you come up with the 5000 times more likely figure? That makes no sense at all. Using the figures you cited, the victim regrdless of race is likely to be lying only 2% of the time."
  • "Even if this study were accurate, and even if it were ethical to invoke the laws of probability to determine whether someone is believable-- two outsized ifs-- one coin, landing heads up, doesn't determine the likelihood of the next coin landing heads up. Neither does one woman, 30 years ago, have any bearing on the likelihood that another woman is telling the truth."
  • "the statistics and logic are just that - excercises in probability that tell us nothing about the case in question, because they are not equal to evidence."
But I think these responses miss the point: as far as I can see, the claim is simply incorrect. I think my reasoning is correct, but if I've slipped up please leave a comment.

First, Anonymous claimed that "less than .0004% of black rape victims were raped by whites." I followed the link to the National Crime Victimization Survey to check on this. The total number of rapes or sexual assaults of blacks listed was 24,010, and based on "about 10 or fewer sample cases" the perceived offender was white 0.0% of the time. I'm not entirely sure what this means, but Anonymous reasoned that as many as 9 of the offenders might be white. Now 9 out of 24,010 is about 0.04%, not 0.0004%. Anonymous then introduced "the most conservative figure of 2% of rape allegations being false". Dividing 2% by the incorrect figure of 0.0004%, Anonymous claimed that "the victim is 5000 times more likely to have made a false accusation than to have actually been raped". If we divide 2% by the correct figure of 0.04%, we get 50, not 5000!
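The arithmetic is easy to check. Here's a quick sketch in Python, using only the figures quoted above (the assumed 9 white-offender cases, the 24,010 total victims, and the 2% false-allegation figure):

```python
# Figures from the NCVS table and the comment quoted above
white_offender_cases = 9   # Anonymous's assumed maximum ("about 10 or fewer sample cases")
total_victims = 24_010     # total rapes or sexual assaults of blacks listed

fraction_white = white_offender_cases / total_victims
print(f"{fraction_white:.2%}")  # 0.04% -- not 0.0004%

false_allegation_rate = 0.02    # the "most conservative figure" of 2%
print(round(false_allegation_rate / fraction_white))  # about 53 -- nowhere near 5000
```

So even on Anonymous's own terms, the ratio is roughly 50, not 5000.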

But apart from this arithmetic error, the interpretation of the ratio is wrong. For it to be right, we would need to know the probability that the woman was raped. But that's not what the 0.04% represents. Instead, it's a conditional probability: an estimate of the probability that, if a black person is raped, the offender is white. Those are entirely different quantities, and dividing a false-allegation rate by the second tells us nothing about the first.
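To make the distinction concrete, here's a toy sketch. Only the 24,010 and 9 come from the post above; the population size is invented purely for illustration. The point is just that the conditional probability the NCVS table estimates is a different quantity from the probability that a given person was raped:

```python
# Hypothetical toy population (the 1,000,000 is invented for illustration;
# only the 24,010 victims and 9 white-offender cases come from the post)
population = 1_000_000
raped = 24_010          # victims in this toy population
raped_by_white = 9      # of those victims, offender perceived as white

# What the NCVS table estimates: P(offender white | victim was raped)
p_white_given_raped = raped_by_white / raped

# A different quantity altogether: P(a given person was raped)
p_raped = raped / population

print(f"P(offender white | raped) = {p_white_given_raped:.4%}")  # ~0.0375%
print(f"P(raped)                  = {p_raped:.4%}")              # 2.4010% in this toy population
```

Conflating a conditional probability with the unconditional probability it's conditioned on is exactly the kind of slip that makes the "5000 times" claim fall apart.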

So the whole thing is invalid. The problem isn't with probabilistic reasoning per se, it's with faulty probabilistic reasoning. And that's a shame when something so important is at stake.

Monday, April 24, 2006

The thrust and parry of the evidence-based-medicine debate

The debate around evidence-based medicine (EBM) makes for fascinating reading, not least because of the prevalence of hyperbole. In a 2004 paper (it's not open access, but here is the reference), Massimo Porta writes:
"Common sense should build upon a body of evidence and experience accrued over the centuries and shared by the medical community. That some members of the community have made it their task to define which parts of the collective experience constitute evidence and which have less title to reach above water has contributed to the current state of affairs. EBM acolytes now perceive practitioners as grubby underlings, hopeless at applying the latest (evidence-based) literature. Clinicians, resentfully, feel watched by nerds who spend their time sipping coffee while talking to computers instead of patients."
Ow! He continues:
"When it began, it all sounded rather sensible: treatments should be tested for efficacy and trials should be controlled, randomized, double-masked and sufficiently powered. Procedures that do not pass muster should not be recommended for use in clinical practice and self-respecting, commonsensical doctors should refrain from adopting them anyway. But then epidemiologists, statisticians and librarians saw power befalling them as they trotted unexplored avenues towards number crunching."
Given my recent post, I was rather amused by his references to power-hungry number crunchers! In part he was responding to a tongue-in-cheek article in the 2002 holiday issue of BMJ, which purports to reveal the 10 commandments of evidence based medicine:
  • Thou shalt treat all patients according to the EBM cookbook, without concern for local circumstances, patients' preferences, or clinical judgment
  • Thou shalt honour thy computerised evidence based decision support software, humbly entering the information that it requires and faithfully adhering to its commands
  • Thou shalt put heathen basic scientists to the rack until they repent and promise henceforth to randomise all mice, materials, and molecules in their experiments
  • Thou shalt neither publish nor read any case reports, and punish those who blaspheme by uttering personal experiences
  • Thou shalt banish the unbelievers who partake in qualitative research, and force them to live among basic scientists and other heathens
  • Thou shalt defrock any clinician found treating a patient without reference to all research published more than 45 minutes before a consultation
  • Thou shalt reward with a bounty any medical student who denounces specialists who use expressions such as "in my experience"
  • Thou shalt ensure that all patients are seen by research librarians, and that physicians are assigned to handsearching ancient medical journals
  • Thou shalt force to take mandatory retirement all clinical experts within a maximum of 10 days of their being declared experts
  • Thou shalt outlaw contraception to ensure that there are adequate numbers of patients to randomise.
The humour and inflated language aside, there are some big issues here. For example, is it appropriate to hold up the randomized controlled trial (RCT) as the "gold standard of evidence" and relegate basic science to an inferior position? The authors of a philosophical analysis of the evidence-based medicine debate argue that:
"Statistical information from an RCT is virtually uninterpretable and meaningless if stripped away from the backdrop of our basic understanding of physiology and biochemistry."
Compare this with what one of the originators of evidence-based medicine has to say:
"In many [cases], empirical solutions, tested by applied research methods, are "holding the fort" until basic understanding—of mechanisms and interventions—is forthcoming."
This is just a small sampling. The debate about evidence-based medicine and more generally the evidence-based movement is huge. And with good reason: there's an awful lot at stake.

Sunday, April 16, 2006

Pyramid power?


In my second-last post, I discussed the recent controversy over the term "evidence-based". It was popularized through evidence-based medicine, an enormously influential movement spearheaded in the early 1990's by epidemiologists at McMaster University (see accompanying picture of main campus). It certainly sounds reasonable to suggest that medicine (or healthcare more broadly, or education, or policy ...) should be evidence-based, but what does it mean? Here, repeating from my last post, is probably the best known definition of evidence-based medicine:
"the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients."
Clearly the first step in making sense of this definition is to sort out what evidence is.

But is it really important to define evidence? Isn't it just semantics? Well, I think it does matter—for two reasons. First, there's been a widespread push for evidence-based practice and policy. Funding bodies and organizations are giving priority to "evidence-based" approaches and initiatives, and that can have a substantial impact on what research gets done and what practices and policies get implemented. Second, if evidence is not clearly defined, how can we define what "current best evidence" is?

The attempt to delineate different "levels of evidence" has been an ongoing preoccupation of evidence-based medicine. The notion is that some study designs provide more valid or reliable evidence than others, and that there is a "hierarchy of evidence", often depicted as a pyramid such as this one:
(source)
It's not hard to see why this engenders so much heated debate. For example in the figure above, "in vitro ('test tube') research" ranks below "ideas, editorials, opinions"! But this is only one of several such evidence hierarchies, which have notable differences.

For example, the figure below makes no mention at all of benchtop science, and puts "evidence guidelines" higher than randomized controlled trials (RCTs):
(source)
As with the previous pyramid, meta-analyses and systematic reviews appear, but here Cochrane systematic reviews are judged best of all ("the 'Gold Standard' for high-quality systematic reviews").

Here's one more pyramid, which doesn't include systematic reviews, but does include anecdotal evidence:
(source)
There are lots of other evidence hierarchies, for example the Oxford Centre for Evidence-based Medicine Levels of Evidence, which makes distinctions according to what the evidence is about (e.g. therapy versus diagnosis).

Distilling the different types of "evidence" from these hierarchies suggests that, according to the authors, evidence may include various types of: (1) empirical studies, (2) summaries of empirical studies, and (3) opinions (hmmm ...). But it's certainly clear that there isn't complete consensus on exactly what qualifies in each of these categories, nor on the rankings.

Perhaps all these pyramids haven't been built on the strongest foundations?

Saturday, April 08, 2006

A defense of blogs - part 2



In my first post of this three-part series, I considered the possible origins of negative attitudes about blogs.

In this post, I'm going to examine the significance of blogging as a form of communication. And what better starting point than Marshall McLuhan? I can't claim to understand much of what he wrote, but his epigrams are wonderfully insightful. Perhaps most famous is his assertion that "the medium is the message." And when it comes to blogs, what is the medium? Well, it's global personal publishing that's easy, interactive, and effectively free. McLuhan is suggesting that we should focus on the medium rather than the content per se. Critics of blogging miss this point, choosing instead to decry the quality of much of the content. Here is how journalist Ron Steinman saw things, writing in June 2004:
"Reputedly, there are more than a million blogs and still counting. It is scary. Truly, who has the time to read, digest, and make sense of all the words spewed forth? I do not. I do not want to try."
Methinks he doth protest too much. Is Steinman perhaps suppressing an obsessive-compulsive urge to clean the filthy stables of the blogosphere? Given that the number of blogs today is estimated to be upwards of 30 million, Hercules himself would be daunted.

Fortunately, nobody need take on such a task. The wonderful thing about blogs is that if you don't like them, you don't need to read them! Unlike spam, which is an irritation we could all do without, you can just ignore blogs if that's your preference. You can also ignore books, magazines, television, and movies if you like. Goodness knows, there's lots of trash there too! But most of us reckon that it's possible to separate some of the wheat from the chaff. I don't imagine anyone is entirely successful, but there's lots of good stuff out there, and some good strategies for finding it.

Arguably, the challenge is much greater when it comes to blogs. One solution is to stick to the "A-list" blogs. But I think that's a real mistake, because the message of the blog medium is this: for the first time in human history, an ordinary person can share his or her perspectives, as he or she sees fit, with the rest of the world. A fabulous flowering of creativity and self-expression is taking place; why miss out on it?

It might be argued that blogs are not unique in this respect. Newsgroups, electronic mailing lists and internet forums have many similarities with blogs, and predate blogs by many years. However, a key distinguishing feature of blogs is their ownership. Fundamentally, newsgroups, mailing lists, and forums are communities, with all the associated strengths and weaknesses. The invitation is: "come and share as we discuss X". While a community can grow around a blog through the commenting feature, the blog belongs to the owner not the community, and the central focus remains the owner's posts. The invitation is: "check out my posts, and leave comments if you like". In no way is this meant to denigrate the value of the comments. Indeed I find the comments on my blog to be a wonderful source of insight and humour—and at least half the fun. Similarly, when I read other blogs I often check the comments. Among other things, they give a great sense of who's reading (although of course, there may be many readers who remain silent).

Ironically, despite predictions (by McLuhan among others?) that the written word was doomed by the dominance of electronic media like radio, television, and the internet, blogs are heralding a renaissance in writing. The linearity of the printed page was widely dismissed as old-fashioned and boring, allegedly incompatible with the infinitesimal attention span we've all developed. This was always a weak argument, premised on an oversimplified analysis of patterns of media consumption. What is true is that we read blogs differently from how we read a newspaper, or a magazine, or a book. This is partly due to the "post-centric" nature of blogs (an observation attributed to Meg Hourihan). It is also partly a function of hyperlinks. Incidentally, there has been extensive comment on the journalistic value (or lack thereof) of blogs. I don't intend to weigh in on this, except to point out that the use of links in blog posts allows for the attribution of sources and justification of claims—something the print media could sometimes benefit from. For more on the relationship between blogs and journalism, see this article by Steven Johnson.

The internet is widely seen as the realization of McLuhan's "global village". But unlike many villages of old, blogs are making this one profoundly democratic: now it is not only the chief and the high priest whose voices can be heard—we've always been forced to listen to them—instead we can tune in to whomever we like. A. J. Liebling pointed out that "freedom of the press is guaranteed only to those who own one"—well now everyone can own a press. A soapbox for all! (Without the noise pollution.)

By opening up communication, I believe that blogs are helping to bring about a huge increase in intellectual efficiency for humanity. Ideas previously isolated by geography (even on a local scale) and stifled by dominant cultural and political assumptions can now flow freely. Earlier technologies have only hinted at this kind of exchange.

One frequently-heard criticism is that blogs are largely driven by vanity and ego. On the one hand, this is simply a tautology. A person's blog is, after all, a projection of themselves (their ego) onto the internet. On the other hand, this is a psychological claim: bloggers derive personal gratification from expressing themselves. But then this too has a tautological flavour, for why else would they do so? Presumably then, the claim is that there is too much ego and pride (a rather more neutral term than vanity) involved. In the case of a blog that is transparently self-glorifying, the claim is plausible. But regardless, there is always the choice to ignore any given blog, particularly if it offers nothing to the reader. On balance, blog narcissism would seem to be a harmless release. And thanks to professionally-designed blog templates, we don't have to deal with so many hideously ugly vanity pages (actually that one is a parody).

I leave you with some interesting links. As so often, the Wikipedia entry on blogs is excellent, with some interesting history and a list of 20 (!) different types of blogs. Seth Godin has a neat e-book about blogs. Finally, this one is more about newsgroups than blogs, but it's too much fun not to include.

Thursday, April 06, 2006

Evidence-based ambiguity


In the last 15 years, evidence-based medicine has taken the world by storm. According to a famous definition, evidence-based medicine is
"the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients."
In this spirit, the evidence supporting many time-honoured practices in medicine has been examined, and in a number of cases found wanting. (For an informative and entertaining look at this, see this slide presentation by former British Medical Journal editor Richard Smith.) The exalted status that expert opinion once enjoyed is waning. Today the cry is "Show me the evidence!" (cf. Jerry Maguire.)

If you'll pardon the pun, the success of evidence-based medicine has been infectious. The prefix "evidence-based" is popping up not just in connection with healthcare: today there is evidence-based education, evidence-based software engineering, evidence-based librarianship, and the list goes on. The "evidence-based movement" seems victorious.

But there are stirrings of discontent. From the beginning, evidence-based medicine has had its critics. (See this editorial for a balanced account of their objections.) A key issue relates to the ambiguity in the word "evidence". If it means empirical evidence, it would seem that clinical experience, pathophysiological theory, patient values, and expert opinion have no role to play. Alternatively, evidence can be defined broadly: "evidence is anything that establishes a fact or gives reason for believing something" (The Oxford American Dictionary, via this report). But this "colloquial" definition opens the doors so wide as to be useless here. For example, a religious argument might be compelling for the believer, but surely would not constitute "evidence" for the present purposes. For some other interesting perspectives on the definition of evidence in evidence-based medicine, see this essay by Amanda Fullan (an undergraduate student at the time).

In recent years the evidence-based movement has expanded to areas such as public health and policy. In a 2004 essay titled What is Evidence and What is the Problem?, the Acting Executive Director of the American Psychological Association writes
"These days, you can hear the terms “good science”, “evidence”, and “data” a lot in Washington. One of the catch phrases around policy-making circles is “evidence-based”, applied to a host of contents including education, policy, practice, medicine, even architecture. You would think that this would make us all quite happy – at least those who advocate that decisions about policy, social interventions, and future directions be based on data. But, ironically, the new emphasis on evidence-based this and that has been simultaneously welcomed and greeted with raised anxiety levels and red flags of concern."
And the ambiguity of the word "evidence" is even more problematic in this context:
"It is clear that discussions of definitions of evidence, distinctions among kinds of evidence (including scientific data, expert judgment, observation, and theory), and consensus on when to use what, will occupy us for some time."
The Canadian Health Services Research Foundation (CHSRF) has recently grappled with these issues, issuing a report, and holding a workshop. One of the "key messages" from the workshop was that
"Although the literature shows that decision makers work with a colloquial understanding of evidence (often alongside a scientific understanding), some participants felt strongly that the information classified as colloquial evidence should not be called evidence. They acknowledged the importance of this information but suggested finding a substitute term, such as “colloquial knowledge” or “colloquial factors.”"
Finally, the CHSRF adopted the following (rather extended) definition:
"Evidence is information that comes closest to the facts of a matter. The form it takes depends on context. The findings of high-quality, methodologically appropriate research are the most accurate evidence. Because research is often incomplete and sometimes contradictory or unavailable, other kinds of information are necessary supplements to or stand-ins for research. The evidence base for a decision is the multiple forms of evidence combined to balance rigour with expedience—while privileging the former over the latter."
Hmmm ... not entirely convincing, but I see what they're getting at. But where did they get that stuff about coming "closest to the facts of the matter"? I'd say it's either begging the question or using a circular argument.

Epilogue: In their latest newsletter, the CHSRF announce that they've decided to abandon the term "evidence-based":
"Following feedback and discussions at the “Weighing Up the Evidence” workshop in September 2005, the mission of the foundation has been changed to better reflect the emerging realization that research is justifiably only one, albeit very important, input to decision-making."
The new term? Evidence-informed.