Sunday, April 16, 2006

Pyramid power?


In my second-last post, I discussed the recent controversy over the term "evidence-based". It was popularized through evidence-based medicine, an enormously influential movement spearheaded in the early 1990s by epidemiologists at McMaster University (see accompanying picture of main campus). It certainly sounds reasonable to suggest that medicine (or healthcare more broadly, or education, or policy ...) should be evidence-based, but what does it mean? Here, repeated from my last post, is probably the best-known definition of evidence-based medicine:
"the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients."
Clearly the first step in making sense of this definition is to sort out what evidence is.

But is it really important to define evidence? Isn't it just semantics? Well, I think it does matter, for two reasons. First, there's been a widespread push for evidence-based practice and policy. Funding bodies and organizations are giving priority to "evidence-based" approaches and initiatives, and that can have a substantial impact on what research gets done and what practices and policies get implemented. Second, if evidence is not clearly defined, how can we determine what "current best evidence" is?

The attempt to delineate different "levels of evidence" has been an ongoing preoccupation of evidence-based medicine. The notion is that some study designs provide more valid or reliable evidence than others, and that there is a "hierarchy of evidence", often depicted as a pyramid such as this one:
(source)
It's not hard to see why this engenders so much heated debate. For example, in the figure above, "in vitro ('test tube') research" ranks below "ideas, editorials, opinions"! But this is only one of several such evidence hierarchies, and they have notable differences.

For example, the figure below makes no mention at all of benchtop science, and puts "evidence guidelines" higher than randomized controlled trials (RCTs):
(source)
As with the previous pyramid, meta-analyses and systematic reviews appear, but here Cochrane systematic reviews are judged best of all ("the 'Gold Standard' for high-quality systematic reviews").

Here's one more pyramid, which doesn't include systematic reviews, but does include anecdotal evidence:
(source)
There are lots of other evidence hierarchies, for example the Oxford Centre for Evidence-based Medicine Levels of Evidence, which makes distinctions according to what the evidence is about (e.g. therapy versus diagnosis).

Distilling the different types of "evidence" from these hierarchies suggests that, according to the authors, evidence may include various types of: (1) empirical studies, (2) summaries of empirical studies, and (3) opinions (hmmm ...). But it's certainly clear that there isn't complete consensus on exactly what qualifies in each of these categories, nor on the rankings.

Perhaps all these pyramids haven't been built on the strongest foundations?

4 Comments:

Anonymous Mohammed-TA said...

Later, I might share some more thoughts once they mature.

For now, I ask:

Is it enough to just spell out "systematic review" (SR) when giving them the top position, or should one specify further and write "Systematic reviews of randomised controlled trials"?

A good RCT appears, to me, to top an SR with meta-analysis based on quasi-experimental studies.

Further, a good RCT with a harder endpoint appears to be more meaningful (and of interest) than an SR with a meta-analysis of surrogate endpoints.

Just a prelude to what I might share later:

An SR or any other research design has a face value (its value at birth, or publication) and a "near real value" after it has been challenged with critical appraisal and the criticism of others.

Purposely, I am keeping away from mention of newer evidence that requires updating a meta-analysis.

Perhaps (?) an SR or any other design should never have the top slot -- that should always remain empty, testifying to the limitations of evidence and the room for improvement and updating.

10:05 PM, April 19, 2006  
Anonymous Mohammed-TA said...

Further and perhaps,

From an individual's perspective -- the most important perspective -- a randomised, double-blind, placebo- or alternate-treatment-controlled N-of-1 crossover trial is the most important evidence-generating study design.

For the individual, as opposed to organisations formulating guidelines and making policies, can an N-of-1 trial hold more water than an SR of good RCTs?

9:34 PM, April 24, 2006  
Anonymous Mohammed-TA said...

Not without reason have I come to the conclusion that the Truth, unlike our perception of it, is irrepressible.

Based upon this understanding, I respect views that challenge popular beliefs -- controversies or "conspiracy" theories.

However, that does not necessarily mean that the unpopular view is the correct view.

Between the two, within my limited abilities of course, I look for traits that are correlated with the Truth. Wisdom (reason?), lack of secondary gains, honesty, language of expression (the caution and precision of the language employed), and attestation by a community of the good character of the people involved (if this information is available) are some such traits.

Before being content with the evidence coming out of a particular research design, one question that I think must be asked is:

what are the existing criticisms of it, and what are their truth values, judged at least on reason and conflict of interest, if nothing else?

If none are to be found, can we generate some of our own (assuming the evidence has never been through a critical appraisal)?

9:11 PM, April 25, 2006  
Blogger Nick Barrowman said...

As usual, Mohammed, your comments are insightful. I especially like the idea of leaving the top of the pyramid empty!

The pyramids represent idealized hierarchies of well-done studies. It can be argued (convincingly, I believe) that a well-done RCT is less susceptible to bias than a well-done case-control study. But in real life, studies are inevitably and idiosyncratically flawed. What happens to the hierarchy then? It gets even more complex when you also consider systematic reviews.

N-of-1 trials are appealing, but they are limited to certain types of chronic diseases and treatments.

Ultimately, I think no hierarchy of study designs can replace the kind of careful consideration that you're advocating.

7:02 PM, April 26, 2006  
