Evidence Soup
How to find, use, and explain evidence.

Monday, 19 August 2013

How can we speed up the adoption of new medical evidence?

Why does some medical evidence gain acceptance quickly, while other findings do not? That is the eternal question. Three recent contributions address it in very different ways: a paper in the Journal of Comparative Effectiveness Research, the Trip database, and a New Yorker piece by a surgeon and public-health researcher.

But first, I'll pose some questions of my own.

What's "acceptance"? How do you pinpoint acceptance of evidence? Inclusion in a clinical practice guideline? In a formulary? By documenting actual treatments over a sustained time period?

Also, how can you truly know what evidence was used in making a decision? Supporting evidence is sometimes formally ranked, referenced, etc. - but often not. The unfortunate result is a lack of transparency and consistency. (During my PhD research, I attempted to pinpoint the basis of regulatory health decision-making: Bottom line, it ain't easy.)

How much do people consider the source? Sometimes an audience is skeptical about the provider of a given piece of evidence; peer-review systems help in this regard (though our current science journal process is plagued with problems). Maybe the findings come from clinical trials, when evidence from real-world patient outcomes is what's preferred. The list goes on.

Our feelings about a particular source can overshadow our discussions of the evidence. Just ask Matt Ridley, whose views on climate change have provoked strong responses, especially when he writes in the Wall Street Journal. I do admire his effort to remain objective: "it is the evidence that persuades me whether a theory is right or wrong, and no, I could not care less what the 'consensus' says."

How can we speed adoption of evidence? "Diffusion of innovation" is the phrase often applied when people investigate the spread of new findings. A number of factors influence speed of diffusion.

1. Understanding the process. The key influences are considered in When is evidence sufficient for decision-making? A framework for understanding the pace of evidence adoption (Journal of Comparative Effectiveness Research, July 2013).

The authors (Robert W. Dubois, Michael Lauer, and Eleanor Perfetto) looked at three diverse case studies - statins, drug-eluting stents, and bone marrow transplantation for breast cancer - to establish a proposed framework. Five factors stood out: 1) validity, reliability, and maturity of the science available before widespread adoption; 2) communication of the science; 3) economic drivers; 4) patients' and physicians' ability to apply published scientific findings to their specific clinical needs; and 5) incorporation into practice guidelines.

This report thoroughly evaluates the case studies and what happened with the associated supporting evidence. I'd like to see a coding scheme to support these qualitative assessments -- though formal codification of such subject matter can get pretty artificial. And the authors directly acknowledge that an "objective application of the framework to a broader and randomly selected set of situations is needed to further validate the findings from the three case studies" [p. 389]. So it's all good.
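
To make that concrete, here's a rough sketch of what such a coding scheme might look like, assuming each factor is rated on a simple ordinal scale. This is entirely my own illustration (in Python), not anything from the paper, and the scores shown are made up:

```python
# Hypothetical coding scheme for the five adoption factors described by
# Dubois, Lauer, and Perfetto. The factor names are paraphrased from the
# paper; the ordinal scale and the example scores are purely illustrative.
FACTORS = [
    "validity_reliability_maturity",    # quality of the science before adoption
    "communication_of_science",         # how well the findings were disseminated
    "economic_drivers",                 # reimbursement and cost incentives
    "applicability_to_clinical_needs",  # can patients/physicians apply the findings?
    "guideline_incorporation",          # uptake into practice guidelines
]

SCALE = {0: "absent", 1: "weak", 2: "moderate", 3: "strong"}

def code_case(name, scores):
    """Attach an ordinal score (0-3) to each factor for one case study."""
    assert set(scores) == set(FACTORS), "score every factor exactly once"
    return {"case": name, "scores": scores}

# Made-up scores for illustration -- not the authors' assessments.
statins = code_case("statins", {
    "validity_reliability_maturity": 3,
    "communication_of_science": 3,
    "economic_drivers": 2,
    "applicability_to_clinical_needs": 3,
    "guideline_incorporation": 3,
})

for factor in FACTORS:
    print(f"{factor}: {SCALE[statins['scores'][factor]]}")
```

Even something this crude would make it easier to compare case studies side by side - which is also exactly where the artificiality creeps in.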

2. Synthesizing evidence faster. Jon Brassey, developer of the Trip database, has lamented the painfully slow Cochrane systematic review process. He claims that "On average a Cochrane systematic review takes 23 months from protocol to publication" and that "In an analysis of 358 dermatology questions only three could be answered by a single systematic review, so less than 1%."

So Jon's trying out a provocative idea: replacing people with machines. "One thing I've been working on recently has been an ultra-rapid review system, based on machine learning and some basic statistics. In a nutshell can we take multiple abstracts, 'read' what they're about and combine the results to give a 'score' for the intervention? More importantly, will any score actually be meaningful?" The first test results were released August 2.
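
I haven't seen inside Jon's system, but the basic shape of the idea - classify each abstract as favorable or unfavorable to the intervention, then pool the votes into a single score with some measure of uncertainty - might look roughly like the toy Python sketch below. The keyword "classifier" and the proportion-plus-interval pooling are my assumptions; the real system is machine-learning based and surely far more sophisticated.

```python
# Toy "ultra-rapid review" scorer: classify each abstract as favoring the
# intervention or not, then pool the votes into one score. Everything here
# (keyword lists, pooling rule) is illustrative, not how Trip actually works.
import math

POSITIVE = {"effective", "improved", "significant benefit", "reduced mortality"}
NEGATIVE = {"no difference", "not effective", "no significant", "harmful"}

def classify(abstract: str) -> int:
    """Return +1 if the abstract reads as favorable, -1 if unfavorable, 0 if unclear."""
    text = abstract.lower()
    pos = sum(kw in text for kw in POSITIVE)
    neg = sum(kw in text for kw in NEGATIVE)
    return (pos > neg) - (neg > pos)

def intervention_score(abstracts):
    """Proportion of clearly favorable abstracts, with a rough 95% interval."""
    votes = [classify(a) for a in abstracts]
    decided = [v for v in votes if v != 0]
    if not decided:
        return None
    p = sum(v == 1 for v in decided) / len(decided)
    se = math.sqrt(p * (1 - p) / len(decided))
    return p, (max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se))

abstracts = [
    "The intervention was effective and reduced mortality in the trial arm.",
    "We found no difference between treatment and placebo.",
    "Treatment significantly improved outcomes at 12 months.",
]
print(intervention_score(abstracts))  # roughly (0.67, (0.13, 1.0))
```

Whether a score like that is actually meaningful is, of course, exactly the question Jon is asking.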

3. Keeping people talking. In Slow Ideas, Atul Gawande looks at the different adoption patterns for 19th-century surgical anesthesia and antiseptics (New Yorker Annals of Medicine: Some innovations spread fast. How do you speed the ones that don’t?).

Gawande, a surgeon / public-health researcher, observes that "We yearn for frictionless, technological solutions. But people talking to people is still the way that norms and standards change." Amen to that.

What can we do next? [Disclaimer: Shameless self-promotion.] Speed is not always our friend. But simple visualization and evidence synthesis certainly are. That's what I'm working toward at PepperSlice.com.

Comments

Rob, well said. While "the evidence" (and not consensus) is what persuades Ridley about the validity of a given theory, that same evidence might fail to persuade many others. I'm likely to be persuaded by someone who makes a strong argument - applying deep knowledge to analyze empirical evidence according to accepted scientific principles. And if there seems to be a consensus around the theory, based on investigations by others, I see no harm in weighing that.

I have to say that Ridley's statement that "it is the evidence that persuades me whether a theory is right or wrong, and no, I could not care less what the 'consensus' says" is all well and good for those topic areas where he has subject matter expertise sufficient to evaluate that evidence. The evidence will be in the primary literature (simply reading blogs - especially those with which one agrees - is completely insufficient). Without that subject matter expertise, one is forced to decide whom to credit.

I've written a couple of blog posts on this, one is at: http://hamiltonianfunction.blogspot.com/2011/07/hierarchy-of-evaluating-research-like.html
