Evidence Soup
How to find, use, and explain evidence.

15 posts categorized "interview wednesday"

Wednesday, 13 August 2014

Interview Wednesday: James Taylor on decision management, analytics, and evidence.

For Interview Wednesday, today we hear from James Taylor, CEO of Decision Management Solutions in Palo Alto, California. Email him at james@decisionmanagementsolutions.com, or follow him on Twitter @jamet123. James' work epitomizes the mature use of evidence: developing decision processes, figuring out ahead of time what evidence is required for a particular type of decision, then continually refining that process to improve outcomes. I'm fond of saying "create a decision culture, not a data culture," and decision management is the fundamental step toward that. One of the interesting things he does is show people how to apply decision modeling. Of course we can't always do this, because our decisions aren't routine or repeatable enough, and we lack evidence - although I believe we could achieve something more meaningful in the middle ground, somewhere between establishing hard business rules and handling every strategic decision as a one-off process. But enough about me; let's hear from James.

#1. How did you get into the decision-making field, and what types of decisions do you help people make?
I started off in software as a product manager. Having worked on several products that needed to embed decision-making using business rules, I decided to join a company whose product was a business rules management system or BRMS. While there are many things you can do with business rules and with a BRMS, automating decision-making is where they really shine. That got me started but then we were acquired by HNC and then FICO – companies with a long history of using advanced predictive analytics as well as business rules to automate and manage credit risk decisions.

That brought me squarely into the analytics space and led me to the realization that Decision Management – the identification, automation, and management of high volume operational decisions – was a specific and very high-value way to apply analytics. That was about 11 or 12 years ago and I have been working in Decision Management and helping people build Decision Management Systems ever since. The specific kinds of decisions that are my primary focus are repeatable, operational decisions made at the front line of an organization in very high volume. These decisions, often about a single customer or a single transaction, are generally wholly or partly automated as you would expect. They range from decisions about credit or delivery risk to fraud detection, from approvals and eligibility decisions to next best action and pricing decisions. Often our focus is not so much on helping a person MAKE these decisions as helping them manage the SYSTEM that makes them.

#2. There's no shortage of barriers to better decision-making (problems with data, process, technology, critical thinking, feedback, and so on). Where does your work contribute most to breaking down these barriers?
I think there are really three areas – the focus on operational decisions, the use of business rules and predictive analytics as a pair, and the use of decision modeling. The first is simply identifying that these decisions matter and that analytics and other technology can be applied to automate, manage, and improve them. Many organizations think that only executives or managers make decisions and so neglect to improve the decision-making of their call center staff, their retail staff, their website, their mobile application etc.

The ROI on improving these decisions is high: although each decision is small, the cumulative effect is large because these decisions are made so often. The second is in recognizing business rules and predictive analytics as a pair of technologies. Business rules allow policies, regulations, and best practices to be applied rigorously while maintaining the agility to change them when necessary. They also act as a great platform for using predictive analytics, allowing you to define what to DO based on the prediction. Decision Management focuses on using them together to be analytically prescriptive. The third is in using decision modeling as a way to specify decision requirements. This helps identify the analytics, business rules, and data required to make and improve the decision. It allows organizations to be explicit about the decision making they need and to create a framework for continuous improvement and innovation.
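To make that pairing concrete, here is a minimal sketch of how a decision service might combine a predictive score with business rules to determine what to DO. This is my illustration, not James' system: every field, threshold, and rule below is hypothetical.

```python
# Illustrative sketch only: a toy "decision service" that pairs a predictive
# score with business rules. All fields, thresholds, and rules are hypothetical.

from dataclasses import dataclass


@dataclass
class CreditApplication:
    requested_amount: float
    applicant_age: int
    predicted_default_risk: float  # output of a predictive model, 0.0-1.0


def decide(app: CreditApplication) -> str:
    """Apply business rules (policy, regulation, best practice) on top of the analytic score."""
    # Hard policy constraint, independent of any prediction.
    if app.applicant_age < 18:
        return "decline: applicant under minimum age"

    # Exposure limit the business can change without touching the model.
    if app.requested_amount > 50_000:
        return "refer: amount exceeds auto-approval limit"

    # The prescriptive step: what to DO given the prediction.
    if app.predicted_default_risk < 0.05:
        return "approve"
    if app.predicted_default_risk < 0.20:
        return "approve with reduced limit"
    return "decline: risk score too high"


if __name__ == "__main__":
    print(decide(CreditApplication(12_000, 34, 0.03)))  # approve
    print(decide(CreditApplication(12_000, 34, 0.12)))  # approve with reduced limit
    print(decide(CreditApplication(80_000, 34, 0.03)))  # refer
```

The point of the pattern is that the rules stay editable by the business while the model supplies the prediction; a decision model is what tells you, ahead of time, which rules, analytics, and data a given decision actually needs.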

#3. One challenge with delivering findings is that people don't always see how or where they might be applied. In your experience, what are some effective ways to encourage the use of data in decision making?
Two things really seem to help. The first is mapping decisions to metrics, showing executives and managers that these low-level decisions contribute to hitting key metrics and performance indicators. If, for instance, I care about my level of customer retention, then all sorts of decisions made at the front line (what retention offer to make, what renewal price to offer, service configuration recommendations) make a difference. If I don't manage those decisions then I am not managing that metric. Once they make this connection they are very keen to improve the quality of these decisions and that leads naturally to using analytics to improve them.

Unlike executive or management decisions, there is a sense that the "decision makers" for these decisions don't have much experience, and it is therefore easier to present analytics as a critical tool for better decisions. The second is to model out the decision. Working with teams to define how the decision is made, and how they would like it to be made, often makes them realize how poorly defined it has been historically and how little control they have over it. Understanding the decision and then asking "if only" questions – "we could make a better decision if only we knew…" – makes the role of analytics clear, and the value of data flows naturally from this. Metrics are influenced by decisions, decisions are improved by analytics, and those analytics require data. This creates a chain of business value.

#4. On a scale of 1 to 10, how would you rate the "state of the evidence" in your field? Where 1 = weak (data aren't available to show what works and what doesn't), and 10 = mature (people have comprehensive evidence to inform their decision-making, in the right place at the right time).
This is a tricky question to answer. In some areas, like credit risk, the approach is very mature. Few decisions about approving a loan or extending a credit card limit are made without analytics and evidence. Elsewhere it is pretty nascent. I would say 10 in credit risk and fraud, 5-6 in marketing and customer treatment, and mostly 1-2 elsewhere.

#5. What do you want your legacy to be?
From a work perspective I would like people to take their operational decisions as seriously as they take their strategic, tactical, executive and managerial decisions.

Thanks, James. Great explanation of creating value with evidence by stepping back and designing a mature decision process.

Wednesday, 20 April 2011

Interview Wednesday: Leart Shaka, crusader for good evidence about vaccination.

For today's Interview Wednesday, we talk with Leart Shaka, Editor of Vaccine Times. I first wrote about him last year after discovering his blog. As a self-described layman, he's created an impressive repository of information about vaccines - and is an example of what one passionately curious person can do. (His journey also illustrates the pathetic state of reliable, user-friendly evidence: In a perfect world, his efforts wouldn't be needed because the appropriate resources would have already been available.)

Leart tweets as @skepdude and also @VaccineTimes. Since I first wrote about his project, he's put together a team and launched Vaccine Times. He explains why he has put so much effort into the project: "The need became apparent to me as I was verifying a claim made by a Generation Rescue report. The claim was straightforward: the US mandates 36 vaccines by the age of 5, while the average of another 30 countries was only 18. The claim itself does not sound like much: count the doses and compare. Nevertheless, I soon found that a couple of hours of work - finding information from reliable sources, compiling it, verifying the calculations, and so on - just weren’t enough. If it was that time-consuming to verify a simple claim like that, I wondered, how much more time-consuming is it to verify the other claims?" Amen to that!
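As a rough illustration of the kind of check Leart describes, the counting itself is trivial - the hours go into sourcing and verifying the inputs. The sketch below is mine, not his, and the schedule data is invented; a real verification would pull each country's recommended schedule from its health authority.

```python
# Minimal sketch of the "count the doses and compare" check.
# The schedules below are made up for illustration only.

schedules = {
    "US":        {"DTaP": 5, "IPV": 4, "MMR": 2, "HepB": 3, "Hib": 4, "PCV": 4, "Varicella": 2},
    "Country A": {"DTaP": 4, "IPV": 3, "MMR": 1, "HepB": 3},
    "Country B": {"DTaP": 4, "IPV": 4, "MMR": 2, "HepB": 3, "Hib": 3},
}

# Total recommended doses per country.
dose_totals = {country: sum(doses.values()) for country, doses in schedules.items()}

# Compare the US total to the average of the other countries.
others = [total for country, total in dose_totals.items() if country != "US"]
average_elsewhere = sum(others) / len(others)

print(f"US doses: {dose_totals['US']}")
print(f"Average elsewhere: {average_elsewhere:.1f}")
```

The code is the easy part; deciding which sources count as reliable, and whether each schedule was transcribed correctly, is where the real work (and the need for something like Vaccine Times) comes in.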

#1. What got you interested in evidence?
I don’t think there was any one thing, or any one event, that got me interested in evidence. I think all people are interested in evidence, and I am no different from the rest of humanity in that regard. The differences between people, however, have to do with the kind of evidence they require, and how consistent they are in applying those requirements.

As a skeptic I subscribe to the “extraordinary claims require extraordinary evidence” line of thinking. Believers in woo-woo, on the other hand, set a very low bar for their side, while at the same time demanding an almost impossibly high bar from the other side. To them the only criterion for evidence is “does it support my belief?” instead of “is this evidence reliable?” That is why an anti-vaccine proponent will accept an anecdote that supports his belief that vaccines are dangerous, while at the same time dismissing a mountain of scientific evidence that goes against that belief.

What types of evidence do you work with most often (medical, business research, statistics, social science, etc.)?
Since I decided to focus my skeptical activities on countering anti-vaccine misinformation, I have been dealing mostly with medical evidence, in the form of scientific studies about vaccines. However, the science behind vaccines is not the only focus of The Vaccine Times; we also deal with the suffering caused by vaccine-preventable diseases, and with raising the public’s awareness about the dangers of not vaccinating. As such, we also rely on information from government sources such as the CDC, the World Health Organization website, articles from news websites, other pro-health websites, blogs, etc.

What is your involvement with evidence: Applying it, advocating its use, researching/developing it, synthesizing/explaining/translating it, communicating it?
I am not a scientist, so my involvement with evidence falls more on the communication side, limited of course by my non-expert understanding of it. The main goal of The Vaccine Times is to publicize - to advertise, so to speak - the science-based evidence about vaccines to the lay public. However, we try to do that in a way that is easily accessible. That is why we need to rely on other kinds of evidence besides the scientific, such as the ones I mentioned in the previous question (news items, other websites, and blogs).

Where do you go looking for evidence, and what types of sources do you prefer? (formally published stuff such as journals, or something less formalized?)
That depends on what I am trying to communicate to the public. If, for example, I am summarizing a new study about vaccines, I go straight to the source, the actual study. In those cases I prefer not to rely on a news item describing the study, or a press release; I want to go straight to the primary source. In general, questions about the science should be settled by looking at the scientific studies.

If, on the other hand, I simply want to communicate a recent development, say an outbreak of an infectious disease, then I rely on news stories from various sources on the web. If information about the outbreak is available on a government website such as the CDC's, or a state’s Health Department website, I prefer to get it there. If that is not available, I rely on news items from the web, which I read carefully to find out what their primary source is, and then I follow that primary source to ensure that the reporting in the news item was correct. For example, if the article says “According to the Minnesota Health Department”, that is a clue for me to go to the Minnesota Health Department web page and do a search for the topic in question.

Same goes for blog entries. I follow their links to try to get to the primary source. Many times a blog entry will link to a news article, which in turn I have to peruse for its source of information and so on. Determining if a source of information is reliable, if it is “good enough”, is as much of an art as it is a science. You get better the more you do it. The basic idea is that you always want to go to the primary source of the information, if you can. If it is not possible for whatever reason, then secondary sources will have to do, but our reliance on such sources has to decrease accordingly.

#2. On a scale of 1 to 10, where 10= ‘It’s crystal clear.’ and 1=’We have no idea why things are happening.’, how would you describe the overall “state of the evidence” in your primary field?
I have chosen to spend my time on the vaccination issue. The rating I guess depends on the kind of question being asked. If the question is “Do vaccines reduce incidence of the infectious diseases they are meant to protect against?”, I would say the rating would be a 9.999999. In that case the science is as crystal clear as it can be that yes, vaccines reduce incidence considerably. However, in science there’s always a margin of error; there is always a bit of uncertainty for every result; I don’t think you can ever get a 10 in matters of science.

On the other hand, I may not be the best person to answer this question. I am not an expert in vaccines, so any answer I give is limited to my layman’s understanding of the evidence.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality? (data incompatibility, lack of research skills, information overload, lack of standard ways of presenting evidence, lack of motivation to follow evidence-based practices, ...)
Information overload is the main problem I see. Scientific knowledge has exploded over the past couple hundred years, and it is almost impossible for a layperson to properly understand the science behind any issue, be it vaccines, global warming or what have you. There are literally thousands of studies done on vaccines. Some of them are good, some bad. First of all, one needs to be able to access all of them, read them all. Then, one has to be able to distinguish the reliable, from the less reliable, from the unreliable studies. Then one has to somehow summarize the information on the first two and figure out where that totality of evidence is pointing. This is a tremendous task.

This process requires large amounts of time and expertise. Most laypeople, including myself, don’t have either, so at best we get an educated “feel” for the verdict of all this evidence, but we have to be clear: our “feel” does not trump expertise. A layperson who has spent 20 years looking at an issue is still not on par with an expert who has spent 20 years looking at that issue, and that is where most anti-vaccine proponents stumble. They think their Google University education is just as reliable as, if not more reliable than, an expert’s education and experience.

Where do you see technology making things better?
I can see how technology can make information - scientific evidence - more accessible. In fact, the Internet has already done so. With a few clicks I can find the abstract of any study I want, if not the full article. On the other hand, we don’t yet have a technology that can distinguish between the reliable, less reliable, and unreliable sources of evidence. That is still up to the individual, and that is where we have a problem. There is nothing technology can do to make someone a better researcher. It’s a tool, and like all other tools, if used properly it can be extremely useful, but in the hands of someone who does not understand how to use it, it becomes useless, and sometimes even dangerous.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow?
That depends on who you’re sharing with, what kind of evidence you’re sharing, and what your goal is. In general, I think, one has to tailor the format to the audience. There is no simple answer, no one format that applies to all. You must communicate differently to a parent than you do to a rabid anti-vaccine proponent, or to a science-minded individual, because all three care about different things. They are interested in different things, and if your goal is to make a difference, to persuade one person one way or another, one speech is not going to apply to all three scenarios.

In general, the idea is to understand the point the other person is making. If they are correct, you want to reinforce that point by adding your evidence; if they are wrong, you have to figure out how and where they went wrong, then figure out which piece of evidence will best address that error, and then figure out the best way to communicate that piece of evidence.

Again, it’s an art and one gets better with practice.

What mistakes do you see people making when they explain evidence?
The mistakes are multiple. You get things such as cherry-picking, i.e. choosing only the evidence that supports your point of view while disregarding contrary evidence; drawing conclusions beyond what the evidence supports; and not applying the same criteria to your own side as to the other side - in a few words, all the Logic 101 fallacies and errors.

#5. What do you want your legacy to be?
I’d like to be remembered for being The Vaccine Times guy; the guy who figured out how to get the pro-health message to the parents in a way that made it easy for them to digest. I would like to be able to dilute the anti-vaccine propaganda as much as possible, to help parents find their way to science-based evidence about vaccines.

Vaccines are extremely important: millions of lives have been saved since their advent, and yet there are some today who are pushing people away from these life-saving vaccines. We owe it to the next generation to fight this kind of misinformation to the best of our abilities. I guess if I had to put it in a few words, I’d like to be remembered as the guy who tried to do the right thing for our children.

Thanks for sharing these insights, Leart. You are truly an inspiration.


____________________________

Chime in. Would you like to be interviewed, or do you have someone to recommend? Drop me a note at tracy AT evidencesoup DOT com.

Wednesday, 30 March 2011

Interview Wednesday: Dr. Elizabeth Oyer, evaluator of evidence.

For today's Interview Wednesday, we talk with Elizabeth Oyer, Director of Research at Eval Solutions in Carmel, Indiana, USA. You can read Elizabeth's blog here. Her Twitter account is @elizabethoyer.

#1. What got you interested in evaluation and evidence?
I believe my work in evaluation has been my destiny for a while. I began my undergraduate studies in Psychology with a minor in French (because I loved studying French!). By my sophomore year, the practicality of a French minor set in and I decided to double major in Communications. To satisfy my passion for French, I began teaching in a weekend program for gifted students (3rd and 4th graders). In my senior year of teaching this class, I was paired with an Education major. We decided to divide up the teaching duties, alternating throughout each class. Well, instantly I could see how much better she was at teaching!

I was fascinated by the impact of our different abilities on the students. Staring down the barrel of graduate school, I was elated to discover the field of Educational Psychology and my path to evaluation began at Indiana University's program with an Inquiry emphasis.

What types of evidence do you work with most often (medical, business research, statistics, social science, etc.)?
For most of my projects, my work incorporates quantitative data (e.g., achievement data and survey data) as well as qualitative data (e.g., observations, interviews) and extant records (e.g., lesson plans) in education. The majority of my clients are in the field of education, although I also work with businesses (e.g., needs assessments, strategic planning) and non-profit organizations.

What is your involvement with evidence and evaluation: applying it, advocating its use, researching/developing it, synthesizing/explaining/translating it, communicating it?
I would say there is an element of all of these in my work! Certainly most of my projects incorporate reviewing literature and work in the field, modifying instruments to meet the clients' needs, and reporting the results for accountability requirements, but also translating trends in the data to feed back and answer local questions about progress, as well as disseminating the results (generally at conferences).

Where do you go looking for evidence, and what types of sources do you prefer? (formally published stuff such as journals, or something less formalized?)
When I am looking for evidence to help me process and contextualize results, I rely mostly on formal evidence (like journals). However, the informal conversations I have with my colleagues are priceless! When we are together we are always thinking, analyzing, evaluating our work to improve our impact on the field.

#2. On a scale of 1 to 10, where 10= ‘It’s crystal clear.’ and 1=’We have no idea why things are happening.’, how would you describe the overall “state of the evidence” in your primary field?
I would say that with the advances in brain research and the contributions from research on the effectiveness of comprehensive reform, our understanding of what we're supposed to do is an '8' - but how to get systems to their optimal states of effectiveness, consistently and across contexts, is about a '4'.

Work on implementation fidelity and systems thinking is getting us there, but it certainly appears we have a long road ahead of us.

Which of these situations is most common in your field?
a) Much of the evidence we need doesn’t yet exist.
b) People don't know about the evidence that is available.
c) People don't understand the evidence well enough to apply it.
d) People don’t follow the evidence because it's not the expectation.

I would say a healthy level of 'c' and 'd' are common. As expectations and accountability standards at the federal level increase, the field responds. When there are vague policies, there are too many opportunities for excuses for why standards of evidence are too difficult to implement.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality?
I believe there are a couple of sizeable barriers to our knowledge in the field. The first relates to access to information about the work of our peers. For a large number of workers in the field (evaluators who are not in higher education), access to journals of published work is difficult. There are no models for opening access to this work in the way there is easy access to other resources (like music and videos). Paying $20 to read what may be 1 out of 50 articles you need for a project is simply not feasible.

Secondly, I believe access to individual-level student data (in accordance with FERPA standards, of course) continues to be more difficult than necessary. In this digital age, access is more often a matter of the will of the community stakeholders (administrators, teachers, school boards, project leadership) than a practical issue. It is vital that our understanding of progress in education target observable impact on individual students; we must continue to implement evaluation frameworks that test our theory of change from system-level factors (e.g., school policies) to classroom-level factors (e.g., teachers and resources) on clearly articulated student outcomes.

Where do you see technology making things better?
Technology has the potential to improve access to work universally once we have a business model that allows the work to be reasonably profitable. As we move toward instruction that is grounded in data, technology can provide a vehicle for information flow, especially with automated tools like web services and data clouds.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow?
Well, many times there are reporting requirements set by the program officers funding the project, so those are always followed. However, the standards for evaluation have guidelines for communicating results and I try to incorporate these into my reports. I also try to create additional formats (like press releases, Executive Summaries) to provide access to information. Finally, I meet with all of my clients to discuss what the evaluation is saying about the project's progress (or how the evaluation needs to be improved to better inform the progress!).

I also have begun sharing my work in my blog and I try to present at conferences annually. Professionally, I would like to do more published work, so that will be my next target for myself.

What mistakes do you see people making when they explain evidence?
I would say poor evaluation frameworks or poor implementation of sound frameworks is at the root of a report that doesn't give the guidance that is needed. Spending time clearly articulating the theory of change down to the observable, measurable implementation goals for the core program elements is more often the problem than the data itself. Often folks go way beyond the evidence at hand to make generalizations that don't hold water.

#5. What do you want your legacy to be?
I'd like to be remembered for pursuing excellence in the field with integrity and respect for stakeholders while delivering high quality, useful information that changes the way things work. A very lofty goal that keeps me hopping!

#6. What question do you wish we'd asked?
"What is your most important resource?" My colleagues are my most important resource. I couldn't do my work without the hard work of my subcontractors, project partners, and clients. My colleagues provide formal support (through partnerships on grants), serve as an informal resource (to bounce ideas off, to confirm interpretations of evidence in tight spots), and generally feed my evolution as a professional. My clients, with their passion for their projects, keep me motivated to provide the best evaluation services and seek out the best resources to meet the needs of each project.

Thanks for sharing these insights, Elizabeth.


____________________________

Chime in. Would you like to be interviewed, or do you have someone to recommend? Drop me a note at tracy AT evidencesoup DOT com.

Wednesday, 09 February 2011

Interview Wednesday: Belinda Ireland, The Evidence Doc.

I'm delighted to interview Belinda Ireland, MD, MS Epidemiology, and President of TheEvidenceDoc (St. Louis, Missouri, USA). She demonstrates a deep understanding of how research findings can be synthesized into useful medical guidelines. Belinda writes a nice blog about evidence-based innovations in health care quality improvement. She also has a LinkedIn profile and a Twitter account (@TheEvidenceDoc).

The Five Questions.
#1. What got you interested in evidence?
My interest began 35 years ago in medical school, when the answer to my routine question of, “Why do we always treat disease X with therapy Y?” was consistently “Because it works.” But when I pressed my teachers to provide specifics on the effectiveness, they just referred me to our textbooks. When I consulted the texts, I found references citing other textbooks. In short, I was unable to find answers to my questions without tracking down the original study, or more often the case report or expert opinion that generated the belief in treatment success.

This experience made me suspicious of the scientific foundation of many of our medical decisions. Years later I learned of Archie Cochrane and Kerr White, both outspoken advocates of evidence-based medicine before there was even such a name for it. They are often credited with the 1976 estimate that only 10-20% of medical practice was based on evidence, which was certainly consistent with my educational experience at that time.

As a result, I decided to pursue graduate work in epidemiology so I could help contribute to the evidence base of medicine. I’ve been a Medical Epidemiologist ever since.

What types of evidence do you work with most often (medical, business research, statistics, social science, etc.)?
I mostly work with medical evidence of effectiveness and safety of interventions. Both are important in any evidence review. A fundamental precept of medicine is to “first, do no harm”, so the standards for evidence of safety differ from standards for effectiveness to protect people from unintentional, medically induced harm. This is not to say that we don’t often make conscious decisions to treat people with risky interventions, but those decisions need to be carefully considered to determine that the potential for overall benefit to that person exceeds the risks.

What is your involvement with evidence: applying it, advocating its use, researching/developing it, synthesizing/explaining/translating it, communicating it?
I have worked with all aspects of clinical evidence in the:

  • Design and conduct of original research that becomes part of the body of evidence.
  • Critical evaluation of a body of evidence through meta-analyses and systematic reviews.
  • Explanation of evidence reviews to broad audiences.
  • Advocacy for its use in guidelines and policy development.
  • Creation of standards for its application.

I’ve participated both locally and nationally in these activities.

Where do you go looking for evidence, and what types of sources do you prefer? (formally published stuff such as journals, or something less formalized?)
I always go to the original source of data. In my field, it’s common to restrict that to randomized controlled trials (RCTs). Since more than 25,000 RCTs are published each year, this is still a huge body of data to review. However, depending on the clinical question, limiting to only RCTs will miss important data. For questions where similar interventions or behavioral practices have been in place for years, we have the opportunity to look at observational data.

Observational data is often completely ignored because of the potential for bias in its collection. However, well-designed observational studies can reduce this risk and we also have analytic methods to reduce the bias, allowing us to evaluate and harvest some very useful information that can add context to the RCTs or may be the only source of data. I may also use case reports, which are potentially very biased sources of data, but are often our first indicators of side effects from newly adopted interventions and serve as an early warning system for the need for better evidence before widespread implementation.
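Belinda doesn't name a specific method here, so as one illustrative example of the bias-reducing analytics she alludes to, the sketch below shows inverse probability of treatment weighting: fit a propensity model for who gets treated, then reweight the observational sample so treated and untreated groups become comparable on the measured confounder. The data is simulated and the numbers are arbitrary.

```python
# Illustrative sketch of one bias-reduction method for observational data:
# inverse probability of treatment weighting (IPTW), on simulated data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Simulated observational study: sicker patients are more likely to be treated.
severity = rng.normal(size=n)
treated = rng.binomial(1, 1 / (1 + np.exp(-severity)))
outcome = 2.0 * treated - 1.5 * severity + rng.normal(size=n)  # true treatment effect = 2.0

# Naive comparison is confounded by severity.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Propensity model: probability of treatment given the measured confounder.
propensity = (
    LogisticRegression()
    .fit(severity.reshape(-1, 1), treated)
    .predict_proba(severity.reshape(-1, 1))[:, 1]
)

# Weight each patient by the inverse probability of the treatment they received.
weights = np.where(treated == 1, 1 / propensity, 1 / (1 - propensity))

weighted_effect = (
    np.average(outcome[treated == 1], weights=weights[treated == 1])
    - np.average(outcome[treated == 0], weights=weights[treated == 0])
)

print(f"Naive estimate:    {naive:.2f}")            # pulled away from 2.0 by confounding
print(f"Weighted estimate: {weighted_effect:.2f}")  # closer to the true effect of 2.0
```

This only adjusts for confounders that were actually measured, which is why, as Belinda says, study design still matters and observational results add context to RCTs rather than replace them.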

Having said that, while I always look to the original data and do my own systematic evaluations, I do want to acknowledge that there are some organizations that conduct or sponsor systematic reviews of evidence and do a very good job of it, like the Cochrane Collaboration, the Agency for Healthcare Research and Quality (AHRQ), and the National Institute for Health and Clinical Excellence (NICE). Any day now, the Institute of Medicine (IOM) is scheduled to release a new report that will recommend a set of standards for developing and reporting systematic reviews of clinical research. These standards, if widely adopted, could help ensure the greater availability of scientifically valid evidence summaries.

#2. On a scale of 1 to 10, where 10= ‘It’s crystal clear.’ and 1=’We have no idea why things are happening.’, how would you describe the overall “state of the evidence” in your primary field?
If I were consistent with the estimates by Archie Cochrane and Kerr White, I would say a 3. However, we do have some evidence that the number varies by condition. For some very common conditions, like depression, the evidence is sparse and complicated by the social stigma that interferes with accurate records of persons with the condition and with open investigation of treatment. For other common conditions like diabetes, we now have so much evidence that David Eddy (a pioneer in evidence-based medicine) and his colleagues have been able to create models of such detail (Archimedes) that they can be used to predict the effectiveness of interventions, possibly reducing the need to enroll people in clinical trials.

Which of these situations is most common in your field?
a) Much of the evidence we need doesn’t yet exist.
b) People don't know about the evidence that is available.
c) People don't understand the evidence well enough to apply it.
d) People don’t follow the evidence because it's not the expectation.

Well, all of these are common, but in medicine right now, I think that b) and c) are under-recognized, leading to a huge waste of knowledge and resources. Not only are clinicians unaware of available evidence (after all, what busy clinician can keep up with >25,000 new RCTs each year?) but we also have evidence that researchers who generate new evidence may not be aware of, or don’t know how to use, the body of evidence preceding their work.

Just last month, the Annals of Internal Medicine published a very clever systematic examination of how frequently existing prior research is cited in publications of randomized controlled trials. They found a very discouraging amount - less than a quarter of the pre-existing research was cited. So are researchers not examining data from previous studies to use in the design of new ones? Or are they aware of the prior research and choosing only to report on that which supports their work and beliefs? Or are they aware of the prior work, but don’t understand its relevance or don’t realize the bias they introduce from cherry-picking the evidence they like?

I do believe that if we can better train researchers to find and use all the preceding evidence in the design, conduct, and analysis of new studies, we could more quickly advance the knowledge on which we make clinical decisions and also have more resources available to create new evidence. Perhaps we need to encourage those who fund research to require it! Along with this, we would also need a big change in access to the results of medical research. Currently, much of the medical research literature, even studies funded through government sources, is not freely available because it is published in journals that restrict access to subscribers and that preferentially publish studies with positive results over those with negative findings.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality?
Three barriers in particular:

  1. Lack of recognition of the need for evidence for their setting or application. Often those who are in the trenches of delivering health care believe that evidence is what scientists need in order to do research, but don’t realize the importance of using evidence to design and implement practices to improve how the interventions are delivered to the patients.
  2. Misinterpretation of what evidence is, especially the mistaken belief that evidence can be based on only one, usually the latest, study. Instead, evidence is always an unbiased assessment of all relevant data and requires critical examination of all the studies.
  3. Differences in underlying values and beliefs of those who examine and summarize the evidence, which becomes more important when the evidence is weak or conflicting. Mammography screening is a good recent example. We’re at the point now where improvements in breast cancer treatments at more advanced stages of disease have coincided with less certainty about the benefit of early detection. Recent rigorous evaluation of the existing evidence on screening by the United States Preventive Services Task Force (USPSTF) led to development of guidance that was widely accepted by public health and primary care docs, but passionately rejected by specialty physicians.

    Specialists (particularly those in radiology and oncology) set a goal for defining mammography initiation and frequency that maximizes early detection in every individual because they believe this provides better opportunity for early intervention and cure. In doing so, their definitions of what constitute ‘clinically important lesions’ are more aggressive and include the detection of lesions that may never progress to impacting the health of the woman and may regress spontaneously. They also discount or minimize risks associated with early detection, deciding these risks are always worth the benefit. In effect, their process assumes all women will agree with these values. However, the goal that public health and primary care providers chose for defining screening intervals was to balance the risks and benefits from early detection for each woman, and to develop recommendations using shared decision-making principles that incorporate the woman’s values.

    After an extensive review of all the guidelines and the methods used to develop them, which I presented at last year’s Guideline International Network (GIN) conference, I suspect this difference in values is at the heart of the passionate dispute.

Where do you see technology making things better?
Searchable databases of all published studies using international standards for the reporting of study design, sources of bias, and outcomes would be of great value.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow?
I’ve taught evidence-based methods and presented the findings of evidence reviews to many different audiences. I try to tailor the information to their needs without “dumbing down” the information. So a presentation may include more or less teaching alongside the findings, and will incorporate examples the audience can relate to in order to convey the concept.

And I always try to create information-rich documents; generally visual displays of data in the style of Edward Tufte that people can keep and study. It is rewarding to have them call me later to discuss additional richness in the data that they discover on their own. In this way they become engaged in the evidence-based process and often go on to become passionate users of evidence.

What mistakes do you see people making when they explain evidence?
Two things:

  • There is a translational gap between the work of researchers in health care and those who deliver health care. I found an example in the latest Users’ Guide from the series published in the Journal of the American Medical Association (JAMA). As with all of the Users’ Guides series, the authors developed a rigorous and methodologically sound guide to assist in evaluating quality improvement interventions. It fills an important need and will hopefully spur more robust development and evaluation of new quality improvement programs. Unfortunately, outside of academic institutions, many health care organizations lack staff with sufficient training in epidemiology or general scientific methods to understand and implement the detailed recommendations of the guide. The hospitals and other care delivery sites need a guide they can understand and use. They need to recognize the value in using evidence-based methods to evaluate quality improvement initiatives before they waste time and resources implementing purported “best practices” of uncertain benefit for their setting.

  • Another big mistake in explaining evidence in clinical applications is treating expert opinion as equivalent to evidence. While it is true that in the absence of any evidence, clinicians will need to use either their own experience or that of those they find to have more “expertise”, this experience is not equal to the evidence from systematic examination of large scale, well-designed studies that minimize biases like selective recall of limited experience. Some clinical specialty societies still equate expert opinion with scientifically derived evidence, granting it the same level of evidence rating in their guidelines. When these guidelines are then converted to quality performance measures, providers may be incorrectly judged on how often they deliver standards of care to patients - and patients may be subjected to unnecessary harm from systematized practices that were not truly evidence-based, and instead came from experiences that should not be widely generalized.

#5. What do you want your legacy to be?
I’d like to be remembered for helping clinicians and provider organizations realize that it isn’t as hard - and doesn’t take as long as they think - to embed evidence-based methods into their practices and clinical decision-making. And by incorporating that critical examination of the prior evidence, they can more quickly bring better quality of care to the patients they serve.

Thanks, Belinda. Your insights are very helpful.

____________________________

Chime in. Would you like to be interviewed, or do you have someone to recommend? Drop me a note at tracy AT evidencesoup DOT com.

Wednesday, 02 February 2011

Interview Wednesday: Evan Perlman, research analyst and inquiring mind.

Today we talk with Evan Perlman, a research analyst in Baltimore, Maryland, USA. I recommend following him on Twitter (@EverydayInquiry); great name, interesting tweets, and his tag line is "Everything is better with evidence". Like me, Evan is a PhD-Public-Policy type: Here's his LinkedIn profile.

The Five Questions.
#1. What got you interested in evidence?
I'm not sure how I got into it as a formal process, but in grad school I started working on lit reviews and data sets, and learning econometrics. These things really opened my eyes to the way evaluation can and must be a part of daily life. To me, even though evaluation is a science, it's critical to daily life in both a technical and non-technical way. It works whether you're exploring public policy, economics, biology, or any other way of understanding the world.

When it reached the point where I was making charts and spreadsheets as a hobby, and my friends and colleagues were enjoying them, I started to believe that this is the all-time best historical era since the Enlightenment in which to be a data geek. I never thought I'd be doing stats for fun, but there you go.

What types of evidence do you work with most often (medical, business research, statistics, social science, etc.)?
I mostly do public policy and economic research, so I use statistical analysis of social science and health care data sets. But I also like using data for less-serious things, as a way of understanding trends in my life or in the world, and sometimes just for a laugh.  (As in this graphic celebrating the arrival of Oregon Trail on Facebook.)

What is your involvement with evidence: applying it, advocating its use, researching/developing it, synthesizing/explaining/translating it, communicating it?
In the career context, I have the opportunity to do all of those things. The data don't interpret themselves, and not every client is going to want to interpret arcane models and results. In daily life, I'm finding it more and more important to be able to translate evidence in a way that non-technical people understand, and to communicate information in a way that people will accept it as valid and meaningful. Most policy-makers are not trained statisticians, but I think most of them want to use valid evidence in their argument - though cherry-picking of evidence is a huge challenge that needs to be overcome.

Where do you go looking for evidence, and what types of sources do you prefer? (formally published stuff such as journals, or something less formalized?)
The literature review is a universally great tool, but a lot of times I am working with raw secondary data sets like Census and BLS data. I imagine that knowing where to find the right data is a skill in itself. I think I'm overly wary of anecdotal evidence, but there are some questions for which experimental data is rare or non-existent, which really forces you to think about what evidence you will accept, and why.

#2. On a scale of 1 to 10, where 10= ‘It’s crystal clear.’ and 1=’We have no idea why things are happening.’, how would you describe the overall “state of the evidence” in your primary field?
Public policy is really interesting that way. In some policy debates, you can have what seems like an open-and-shut case, but evidence is cherry-picked according to its value to one side or the other. This happens in all kinds of controversial issues, from gun control to legalizing gambling.

On the other hand, some issues just haven't had much in the way of formal evaluation and decisions are made based on what "everybody knows", which often turns out to be wrong or overly simplistic. I think in the last decade or so we've seen a stronger push for program evaluation, but we haven't necessarily seen an increase among policy makers in the understanding of and willingness to use objective data. So let's say that the state of the evidence varies widely, but it's often overshadowed by the state of the politics and culture.

Which of these situations is most common in your field?
a) Much of the evidence we need doesn’t yet exist.
b) People don't know about the evidence that is available.
c) People don't understand the evidence well enough to apply it.
d) People don’t follow the evidence because it's not the expectation.

All of these things are true, depending on the policy field. The dream would be to eliminate b, c, and d so that all that's left is to find ways to gather evidence about policy questions. But if you look at the state of many policy debates today, it's clear that b, c, and d are prevalent. Which is not to say, by the way, that all controversial issues have one objective "right answer" at this moment. The problem is more that the popular debates over these issues are so divorced from objective evidence that it's hard to even talk about what an objective "right answer" might be. It's hard to base policy on opinion polls when the people giving those opinions are distrustful of science and can select their evidence from sources geared to serve a narrow ideology.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality?
I think we're already living in a world where a person who has the right research and evaluation skills can find enough objective evidence to become decently informed on any issue. It's not necessary for everyone to be a trained statistician and an expert in the technical details of every issue. If a person knows how to find data, how to make a judgment about whether or not it's reliable, and how to interpret it in a way that's supported by what's really in the data versus what they think it should say, then they have all the tools they need to form an educated opinion.

Social media does make it very easy for people to exchange information, and of course there's a whole community of evidence-users, which is why we're doing this interview now, right? But it also means that to some extent every opinion or statement has a legitimate shot at exposure to a wide audience. This should make people more aware of the need for transparency and caution, but somehow I think that's not happening yet.

Where do you see technology making things better?
We have access to more information than ever, and we can crunch numbers and process that information faster than ever. It's a rich and beautiful world of data. And it's easier now to make connections and collaborate, and to get feedback from a wider group of people, which is fantastic.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow?
I think it's important for evidence to be presented in context of a narrative, or else it will be lost in the shuffle. So while it's obviously critical to be transparent as far as data sources and methods, I tend to spend more time on "Why does this matter? What are the implications? What's new and interesting here?" I think that's what most people will connect with first, and it's what will draw them in to explore the evidence further. But I don't have a specific format. Lately, as you've seen, I've been trying to fit the discussion into Tweets, which is a fun challenge.

What mistakes do you see people making when they explain evidence?
Three of the cardinal sins: failing to cite sources, presenting information out of context, and drawing conclusions that the data don't support. Basically, if you find that you're choosing to omit evidence because it doesn't fit, you need to change your conclusions. To do otherwise is dishonest, and contributes to public distrust and confusion about evidence and, really, science as a whole.

#5. What do you want your legacy to be?
I’d like to be remembered for encouraging people to use evidence to become better decision makers and happier people.

Thanks, Evan. I agree, it is a rich and beautiful world of data.


____________________________

Chime in. Would you like to be interviewed, or do you have someone to recommend? Drop me a note at tracy AT evidencesoup DOT com.

Wednesday, 26 January 2011

Interview Wednesday: Elizabeth Lusk, entrepreneur and evidence broker.

Today we talk with Elizabeth Lusk, Co-Founder & KT Conceptual Design Lead for gestalt collective. She's located in Toronto, Ontario, Canada (lucky her). As she explains it, the gestalt collective "specializes in the development, implementation, and evaluation of strategies that help people and organizations better manage their knowledge assets. Committed to moving knowledge into action, together we identify new opportunities to maximize knowledge flow, drive performance improvements and optimize efficiencies to help people, groups, and organizations realize their potential."

Liz is the KT Conceptual Design Lead for the Canadian Dementia Resource and Knowledge Exchange (CDRAKE), and KTE Associate and Knowledge Broker for the Seniors Health Research Transfer Network (SHRTN). She's on LinkedIn and Twitter (@gestaltKT). Also, you can follow the CDRAKE project at @knowdementia.

The Five Questions.
#1. What got you interested in evidence?
Insatiable curiosity. I like to know. I like delving deeper to see if I agree with how someone arrived at a particular piece of ‘evidence’. I like having moments where people’s perspectives blow mine out of the water and I shift and grow.

What types of evidence do you work with most often (medical, business research, statistics, social science, etc.)?
Every type I can get my hands on. I really like exploring what people are saying across fields and disciplines - to see where there is overlap, and to check out the variety of perspectives and language used to describe what are often similar challenges and phenomena. I also like seeing how people or groups are approaching challenges in differing contexts. I like making those connections and then sharing and discussing them with people and organizations.

If I have to pick a particular type of evidence that I work with most often it is in the realms of design and systems thinking; knowledge translation, exchange, brokering, mobilization; knowledge asset management; and communication. I am also increasingly sourcing and applying concepts from complexity science.

What is your involvement with evidence: applying it, advocating its use, researching/developing it, synthesizing/explaining/translating it, communicating it?
At present I’ve moved away from synthesis because I feel like by the time you do it properly it’s bordering on irrelevancy, or the political landscape changes swiftly and you miss opportunities. I’m finding the greatest success, at present (my ubiquitous qualifier), by telling stories and developing infographics to communicate evidence in a way that does not require expertise as it relates to the topic. Less time consuming, greater return on investment, greater impact.

That said, the landscape will change and I’ll need to continue to be creative in how I communicate evidence with the people I collaborate with. I am really into creating and / or identifying channels in support of knowledge exchange and flow as well as safe and supportive space to co-create. Helping people make connections to others who have experience with a particular evidence set is also proving to be rewarding. I like to help people, organizations, and networks think and act like knowledge brokers.

Where do you go looking for evidence, and what types of sources do you prefer? (formally published stuff such as journals, or something less formalized?)
I need and want to communicate with all the people I work with effectively. I aspire to be an evidence chameleon – someone who can discuss evidence with people using their language to enhance our capacity to connect and collaborate. For the academic in me, I read journal articles. I want to know what the ‘field’ is saying and whether there is consensus or debate. For the journalist in me, I read blogs.

I love reading individuals' unabated opinions and perspectives. I get a greater sense of the energy around a topic through non-peer reviewed evidence. I also love Twitter because you can find some great evidence curators. You can really leverage it as a continuous learning tool. I love books, too. Basically I'm always reading and enjoy lots of sources. I like reconciling the lot of them.

#2. On a scale of 1 to 10, where 10= ‘It’s crystal clear.’ and 1=’We have no idea why things are happening.’, how would you describe the overall “state of the evidence” in your primary field?
If I consider my primary field to be knowledge translation, mobilization, and brokering, I believe the overall ‘state of the evidence’ is about a 6. I see this field as having a few ‘camps’ that speak about similar things being driven from different perspectives and for differing purposes. That can make it difficult for people to navigate.

Which of these situations is most common in your field?
a) Much of the evidence we need doesn’t yet exist.
b) People don't know about the evidence that is available.
c) People don't understand the evidence well enough to apply it.
d) People don’t follow the evidence because it's not the expectation.

A little ‘a’ mixed with a whole lot of ‘c’.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality?
We tend to build tools and mechanisms for accessing ‘evidence’ using a predominantly linear and library-centric approach. We want to catalogue and codify everything - it’s all very binary in nature. People then become dependent on there being a go-to place or source where the evidence is stored. Then they want it translated for them and for their context. This approach overestimates access to information and does not address connecting people - people who have experience with evidence and can therefore add dimensions to evidence that a document can never capture.

While there is great value in library science and I frequently benefit from libraries and librarians, we need to broaden our scope because it is limited as it relates to getting knowledge into practice. Simple access to information is not proving to be enough. What we need (in my humble opinion) is a complementary approach. We need to help people develop the skills and comfort required to work within our complex (and rich) systems for information sharing by giving them space to do so; start trusting one another again and shake off our stereotypes regarding people’s ‘roles’ and try and connect with the person in the role.

I think we can reach our knowledge potential if we shift our focus to the flow of knowledge rather than only capturing and storing knowledge. Archiving and codifying are good for historical purposes; we can always learn from our history. But if we can get better at exchanging knowledge by connecting with people, then maybe we’ll unclog the system and re-energize people. Of course, to ensure that quality evidence ‘flows’, we need to diligently help people understand and critically appraise evidence.

Where do you see technology making things better?
Technology gives us global reach. I love technology. People can access people through multiple channels having never met them in person. It can help us broaden our perspectives and the lens through which we navigate evidence. Ultimately, though, it’s the people behind the technology that matter. Technology enhances access to people if it’s used as a learning tool - not just a marketing tool. When used as a learning tool, I believe the person on the other end is oriented to connecting rather than simply selling or pushing, and a reciprocal relationship can be established.

With the variety of free online publishing platforms to choose from (e.g. WordPress, Tumblr, Blogger, Digg, Twitter, etc.), more and more people can afford to indulge in curating evidence and sharing it online. That said, there is a segment of people I run into through my work who lament that there is no single place to go online, and that ‘it is all so confusing’.

My business partner and I were reflecting on this and thought: imagine there were only one style and location to purchase jeans, no matter what our body type or preference. That wouldn’t fly, right? By having quality options that address the variety of people’s information-seeking preferences, the hope is that people can find evidence tailored and organized in a way that is meaningful to them. When people connect with evidence in a way that resonates with them, they tend to apply it to their life and work. Certainly, critical appraisal skills apply here as well.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow?
This definitely depends on the person or group I’m sharing and discussing evidence with, as well as the capacity in which I’m working with them. My approach is always tailored and dynamic. For instance, if I’m partnering with a group as a knowledge broker – with the more formal people and groups – early on I mirror their communication style so we can ease into knowing one another and develop trust. I will share journal articles that are relevant to their work, and if I see something in the news I forward that along as well. In more established working relationships I share peer-reviewed articles from different disciplines, infographics, blogs, videos, and other media I think may contribute to our work. Adding this dimension tends to energize our thoughts and creates space for us to innovate. I mobilize around that energy and, when relevant, we begin a co-creation process.

What mistakes do you see people making when they explain evidence?
Injecting their bias, but that’s human nature, right? People also use jargon. I’m guilty of it myself: I’ll have in my head the terminology from whatever I read last that inspired me, and I just really want to share it! I love language, so it just slips out. I have to engage in some serious self-talk to remind myself not to do this, mostly with colleagues. I still go there, though. The biggest thing I notice overall is that people default to speaking about evidence only in terms of the peer-reviewed literature. I respect peer-reviewed literature; however, there are a lot of people experiencing things, writing about their thoughts, and sharing through other channels. In the end, I wish for people to expand their notion of what constitutes ‘evidence’. Let’s shift our focus to helping people navigate evidence.

#5. What do you want your legacy to be?
I’d like to be remembered for being a flexible and thoughtful person. Wait – scratch that – that doesn’t sound how I meant it. Let’s try this again; an open-minded person who cares what you have to say and is willing to help.

Thanks, Elizabeth. I really enjoyed hearing your perspective - particularly on the need for navigating and appraising evidence, not just codifying and storing it.


____________________________

Chime in. Would you like to be interviewed, or do you have someone to recommend? Drop me a note at tracy AT evidencesoup DOT com.

Wednesday, 12 January 2011

Interview Wednesday: Richard Puyt, lecturer, consultant, and researcher.

Today we talk with Richard Puyt, a freelance lecturer at the University of Applied Science in Amsterdam, management consultant, and researcher (he's working on his Ph.D.). Richard is located in Amersfoort, the Netherlands. He's on LinkedIn here, he blogs at Evidencebased-management.com, and on Twitter he's @richardpuyt.

The Five Questions.
#1. What got you interested in evidence?
In 2007 I read the presidential address [pdf] to the Academy of Management by Denise Rousseau for the first time [Ed. note: here's Denise's recent Evidence Soup interview]. This immediately made sense to me. I’ve worked more than ten years as a consultant in several firms (both government and commercial), and had a hard time understanding the decision-making processes. There always seems to be a sense of urgency. If you have a skeptical mindset and ask simple questions like: 'Why are we doing this?', managers tend to get uncomfortable. They prefer it if you ‘just do your job’. When pressed for an answer, responses are ‘it is necessary’ or ‘this is beneficial for our company’.

Evidence supporting these statements is rarely provided. That in itself is not the problem; rather, a (major) change in the organization (for example, implementing a new ERP system or a merger) is rarely initiated by looking for the best available evidence, let alone appraising it. Especially in government settings, the initial reason (or the business case) is soon forgotten, and the moment the project is declared finished, everybody is running toward the next project. Unfortunately, evaluation and learning are never a top priority. Evidence of the benefits (or lack thereof) of the change is not established. Lessons-learned reports are not written, and nobody asks for them when starting a new project. The same mistakes are bound to happen in future projects. When accepting a new consulting job, I always start out by looking for evidence, beginning with interviews and desk research.

What types of evidence do you work with most often (medical, business research, statistics, social science, etc.)?
Mostly business research and evidence from social science (and sometimes medical).

What is your involvement with evidence: applying it, advocating its use, researching/developing it, synthesizing/explaining/translating it, communicating it?
On the evidence-based management collaborative blog, authors and contributors advocate the use of evidence and discuss the latest developments. We are always looking for people who want to participate in the discussions or write reviews. For my Ph.D research, I’m researching and synthesizing the use of evidence by managers in the management literature. We can learn a lot from medical science (but not copy everything they have developed).

Where do you go looking for evidence, and what types of sources do you prefer? (formally published stuff such as journals, or something less formalized?)
The answer depends on the question I need to answer. The formalized stuff is good as background information and a way to frame the question. However, the context of the problem (and the people working there) is a valuable source as well. The real skill is synthesizing the different levels of evidence, and accepting that sometimes we just don’t know.

This is actually one of the major issues in management decision-making: access to the best available evidence. The scholarly databases have restricted access, and sometimes research is proprietary and will never become publicly available. Most managers don’t have the time or the skills to phrase their problem and research it. This is why they don’t use scholarly sources. Then there is Google. My students need to be trained in appraising information from the Internet; although Google keeps getting better, you still need to appraise the information. The same applies to managers. The idea of quick fixes and copying approaches from airport books (remember Good to Great by Jim Collins?) needs to be actively discouraged in formal education. To quote Henry Mintzberg: Management is an art, largely a craft, and not a science - but it uses science.

#2. On a scale of 1 to 10, where 10= ‘It’s crystal clear.’ and 1=’We have no idea why things are happening.’, how would you describe the overall “state of the evidence” in your primary field?
Hmm. In one big generalization, an 8. In some fields we do know a lot - however, in most areas we don’t. Bear in mind, management is a social science.

Regarding the notion of things appearing to be ‘crystal clear’, I recommend reading Trust in numbers: The pursuit of objectivity in science and public life by Theodore M. Porter, and Making sense of the organization by Karl Weick. Denise Rousseau is now editing the Handbook of Evidence-Based Management: Companies, Classrooms, and Research, to be published by Oxford University Press for the Academy of Management; also something to look forward to (I have read the draft already).

Which of these situations is most common in your field?
a) Much of the evidence we need doesn’t yet exist.
b) People don't know about the evidence that is available.
c) People don't understand the evidence well enough to apply it.
d) People don’t follow the evidence because it's not the expectation.

All of the above apply.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality?
Politics, and the fact that evidence is not self-explanatory.

Where do you see technology making things better?
It can improve access to the best available evidence, and evidence can be appraised and aggregated faster. Exchange of evidence is much easier.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow?
At present I don’t have a standard format, but a few colleagues and I are developing teaching material.

What mistakes do you see people making when they explain evidence?
Bias, self-interest, misinterpretation, selective shopping in the available evidence (interesting clip here).

#5. What do you want your legacy to be?
I’d like to be remembered for being modest, but asking the right questions at the right time.

Thanks, Richard. Lots of useful insights. I'm going to look up your excellent reading recommendations.
____________________________

Chime in. Would you like to be interviewed, or do you have someone to recommend? Drop me a note at tracy AT evidencesoup DOT com.

Wednesday, 05 January 2011

Interview Wednesday: Chris Lysy, research analyst and evidence enthusiast.

For today's Interview Wednesday, we talk with Chris Lysy, a research analyst at Westat in Research Triangle Park, North Carolina, USA. He has extensive experience with real-world evidence about social and educational programs: How to find it, explain it, and apply it.

I highly recommend EvalCentral.com, a site Chris created to aggregate interesting evaluation content - everything from tools for finding evidence to presenting findings to clients. You can read more about Chris here, and follow him on Twitter at @clysy.

The Five Questions.
#1. What got you interested in evaluation and evidence?
I’ve always had lots of questions, and they tend not to answer themselves. As it relates to my career, positions in contract research and evaluation complement my inquisitive nature. After undergrad I accepted a grunt-work role in contract research (evidence only occasionally collects itself), and then earned an MA in Sociology. After that I bounced around a couple of small research consultancies, then fell into evaluation as a data specialist with North Carolina’s statewide early childhood program. I’ve stuck with evaluation, but have returned to my original company (Westat) in a better role with less grunt work. I enjoy working in a field where the evidence is often directly connected to something tangible.

What types of evidence do you work with most often (medical, business research, statistics, social science, etc.)?
Currently special education, but I’ve had experience in multiple fields (including all of the listed examples) and often find myself immersed in evidence from other disciplines.

What is your involvement with evidence and evaluation: applying it, advocating its use, researching/developing it, synthesizing/explaining/translating it, communicating it?
Occupationally, I am an evidence collector, analyzer, and reporter. Personally, I am an advocate and communicator.

Where do you go looking for evidence, and what types of sources do you prefer? (formally published stuff such as journals, or something less formalized?)
That ultimately depends on the question I am trying to answer. I have a knack for finding evidence quickly over the web using search engines and a snowball approach, but my position also calls for the use of more formally published works. If I work with someone who knows more than me about something, they are usually my first source. If not, I turn to Google. Also, I think if you keep your eyes open, evidence finds you.

#2. On a scale of 1 to 10, where 10= ‘It’s crystal clear.’ and 1=’We have no idea why things are happening.’, how would you describe the overall “state of the evidence” in your primary field?
This would require me to pick a primary field... if it’s evaluation, then I would say 7, because finding evidence is kind of the point - but there is always room for improvement.

Which of these situations is most common in your field?
a) Much of the evidence we need doesn’t yet exist.
b) People don't know about the evidence that is available.
c) People don't understand the evidence well enough to apply it.
d) People don’t follow the evidence because it's not the expectation.

Again, it depends on the questions and situation, but I would say a and b. There is a constant need for new evidence (which I’m thankful for, because it keeps me employed), but there are also plenty of opportunities to make better use of the evidence that already exists. Government agencies and organizations at all levels are increasingly offering access to raw data. When working with local communities, having access to accurate data at the county or city level is immensely useful, but you have to know where to find it and understand how to analyze it.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality?
The biggest barriers to the type of world you describe are privacy and cost. Evidence is rarely free; it costs money to collect and store, and I don’t think that always receives the consideration it deserves. If you speak with a professional in the early childhood field tasked with building a comprehensive data system, you will quickly learn about all the associated costs and challenges.

As for privacy, it’s critical to make sure that evidence is collected with the rights of the subject in mind. Privacy concerns keep agencies from the table. As an analyst I love data at the individual level; I can always aggregate it later. You can do a lot with it, including matching the data with other sets, but to do that you need a link - some type of identifier - and access to the full datasets. But the more individual the data, the greater the privacy concerns. You have to show that the evidence is useful enough to justify the cost but also make sure that it can never fall into the wrong hands. These will continue to be big issues well into the future.
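[Ed. note: To make Chris's point about linking and then aggregating individual-level data concrete, here is a minimal, hypothetical sketch in Python/pandas. The datasets, the identifier, and the column names (child_id, county, attendance_rate, assessment_score) are invented for illustration and are not drawn from any real data system he describes.]

```python
# Hypothetical illustration: two individual-level datasets share an identifier,
# so they can be linked record by record and then aggregated to the county level.
import pandas as pd

program_records = pd.DataFrame({
    "child_id": [101, 102, 103, 104],              # the "link" - some type of identifier
    "county": ["Wake", "Wake", "Durham", "Durham"],
    "attendance_rate": [0.95, 0.80, 0.88, 0.91],
})

assessment_records = pd.DataFrame({
    "child_id": [101, 102, 103, 104],
    "assessment_score": [78, 65, 82, 90],
})

# Matching one dataset with another requires the shared identifier and the full datasets.
linked = program_records.merge(assessment_records, on="child_id", how="inner")

# Individual-level data can always be aggregated later; the reverse is not true.
county_summary = linked.groupby("county")[["attendance_rate", "assessment_score"]].mean()
print(county_summary)
```

This is also where the privacy trade-off bites: the identifier that makes the merge possible is exactly what makes the individual-level files sensitive.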

Where do you see technology making things better?
By making evidence more accessible, less expensive, and easier to analyze and understand. A lot of technology that can do these things already exists; we also need to actually use it.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow?
No systematic way, but I’m a big fan of charts and visualizations. I prefer to share more evidence rather than less; dumbing down data to single numbers and percentages takes away the context, and context is important when presenting evidence.

What mistakes do you see people making when they explain evidence?
They don’t give the person they are presenting to enough credit. Sure, you may be keyed in on a single metric but the supporting data is also important. It’s ok to let your audience arrive at the point on their own, or even come up with a different point. Use your voice to point out trends or individual findings. Removing the context makes evidence harder to understand, not easier. Of course, don’t expect your audience to always appreciate this. The goal is for them to understand, not for you to get praise for an awesome presentation.

#5. What do you want your legacy to be?
I’d like to be remembered for helping academics, researchers, and evaluators make better use of the vast array of available web tools. Also, as a great husband and father.

Thanks, Chris, for these useful insights.


____________________________

Chime in. Would you like to be interviewed, or do you want to recommend someone? Drop me a note at tracy AT evidencesoup DOT com.

Wednesday, 22 December 2010

Interview Wednesday: Robert W. Dubois, Chief Science Officer at the National Pharmaceutical Council.

For Interview Wednesday, I'm delighted to feature Robert W. Dubois, MD, PhD, the Chief Science Officer at the National Pharmaceutical Council. He demonstrates a deep understanding of the difference between imagining evidence-based medicine and actually making it happen: All of us should listen and learn.

At NPC, Dr. Dubois oversees research on policy issues related to the harmonization, dissemination, and communication of comparative effectiveness research, as well as on how health outcomes are valued. Previously, he was the Chief Medical Officer at Cerner LifeSciences, focused on comparative effectiveness and the use of an electronic health records infrastructure to implement clinical change. (Dubois is located in Washington, DC and Los Angeles, CA, USA.)

NPC (on Twitter: @npcnow) describes itself this way: The "overarching mission is to sponsor and conduct scientific analyses of the appropriate use of biopharmaceuticals and the clinical and economic value of innovation. The organization’s strategic focus is on comparative effectiveness research and evidence-based medicine for health care decision-making, to ensure that patients have access to high-quality care." Amen to that.

The Five Questions.
#1. What got you interested in evidence-based medicine? What has been your involvement with evidence: applying it, advocating its use, discovering it, synthesizing/explaining it, and/or communicating it?
If I had to sum up my career, it would be that I have tried to help define what appropriate care is, as well as to develop approaches to ensure that appropriate care is given to patients (and inappropriate care is neither given nor paid for). Although this would not be a formal definition of evidence-based medicine, it reflects its intent.

After my training in internal medicine and my fellowship in health policy and research methods (Robert Wood Johnson Clinical Scholar at UCLA and RAND), I helped create an “expert system” software platform (at Value Health Science, a company I co-founded) used by insurance companies and HMOs in the late 1980s/early 1990s. Built around nationally developed expert guidelines, this program matched a patient’s clinical presentation to those guidelines to determine whether proposed surgery would be appropriate. This prospective approach identified many patients for whom alternative, less risky or costly therapies had not been attempted and for whom surgery might be avoided. We were able to reach out to the surgeons and discuss alternative approaches. That experience taught me a substantial amount about the realities of evidence-based medicine and how it might be implemented.
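[Ed. note: As a rough sketch of the guideline-matching idea Dr. Dubois describes, here is a hypothetical example in Python. The guideline criteria, thresholds, and patient fields are invented for illustration; real appropriateness guidelines are far more detailed.]

```python
# Hypothetical illustration: encode a guideline's criteria as simple checks and
# compare a patient's clinical presentation against them before approving surgery.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Patient:
    diagnosis: str
    symptom_duration_weeks: int
    tried_physical_therapy: bool
    tried_medication: bool

def surgery_appropriate(p: Patient) -> Tuple[bool, List[str]]:
    """Return (appropriate?, list of unmet criteria) for a proposed surgery."""
    unmet = []
    if not p.tried_physical_therapy:
        unmet.append("physical therapy not yet attempted")
    if not p.tried_medication:
        unmet.append("medication therapy not yet attempted")
    if p.symptom_duration_weeks < 12:
        unmet.append("symptoms present for under 12 weeks")
    return (len(unmet) == 0, unmet)

ok, reasons = surgery_appropriate(
    Patient("lumbar disc herniation", symptom_duration_weeks=8,
            tried_physical_therapy=True, tried_medication=False)
)
print(ok, reasons)  # False, plus the unmet criteria to discuss with the surgeon
```

The value in practice came less from the rule check itself than from prompting that conversation with the surgeon about less risky or costly alternatives.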

In the mid-1990s, the disease management movement began and our group was very involved in designing some of the early approaches (at Protocare Sciences).  For the first time, the pharmaceutical industry got involved in not only the delivery of “pills” but also the wrap-around elements of care that might promote optimal patient outcomes (e.g., identifying patients most at risk, encouraging compliance with medication and lifestyle recommendations, close monitoring of clinical progress, and rigorous assessment of patient response). We were able to develop various tools to help educate patients as well as to assist providers in their patient care decisions.

Most recently, I worked at the Cerner Corporation, a very large electronic medical records company. Once again, my focus was on defining appropriateness, assessing cost-effectiveness, and participating in the growing comparative effectiveness movement. I had the opportunity to work with many pharmaceutical and biotechnology companies and apply various evidence-based medicine methods to real-world problems. These approaches included analyzing patient care as documented in an electronic medical record, quantifying the burden of illness and identifying areas for improved care, and assessing the costs and cost-effectiveness of various approaches to care. We also had an opportunity to build best-practice guidelines into the electronic infrastructure, which could alert providers when opportunities to help patients might not have been identified.

I joined the National Pharmaceutical Council about two months ago as their Chief Science Officer. My career has come full circle—beginning in the world of health policy and now being in Washington where comparative effectiveness and evidence-based medicine have become central issues.

During your career, where have you looked for evidence, and what types of sources have you found most helpful? (formally published journals, data mining, information services, etc.?)
Finding the “best” evidence presents various challenges. One typically begins with the published literature. However, key information may not yet have been published in peer-reviewed articles, so adding information presented at conferences is also important, although those studies have not yet undergone the rigorous peer-review scrutiny that is needed. Unfortunately, much “evidence” never gets published. This publication “bias” occurs because negative results (i.e., that a particular intervention did not work or was no better than alternatives) may not have been written up for publication or may not have been accepted by journals (which prefer positive results). Finding this “unpublished” information is important to get a complete view.

Complementing information from the above sources, I have always solicited input from expert clinicians. True evidence-based medicine needs to be a blend of what has been studied formally as well as the judgment of clinicians making day-to-day patient care decisions. Neither doctors’ opinions nor published literature by themselves can be sufficient.

#2. On a scale of 1 to 10, where 10=‘It’s crystal clear.’ and 1=’We have no idea why things are happening.’, how would you describe the “state of the evidence” in your primary field?

It is a mixed picture that I can best explain below.

Which of these situations is most common in your field?
a) Much of the evidence we need doesn’t yet exist.
b) People don't know about the evidence that is available.
c) People don't understand the evidence well enough to apply it.
d) People don’t follow the evidence because it's not the expectation.

Probably a, b, c, and d are all true. It has been estimated that, at best, only one-third to one-half of what we do as physicians has adequate evidence to support those decisions. The establishment and funding of the Patient-Centered Outcomes Research Institute (PCORI) (as part of the Patient Protection and Affordable Care Act) will help over time to reduce some of these critical evidence gaps. But improving care is not just about creating more evidence: perhaps only 50% of care that we know works is actually given to patients, so there is much room for improvement in the care that we provide.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality? (data incompatibility, lack of research skills, lack of standard ways of presenting evidence, lack of motivation to follow evidence-based practices, ...) And where do you see technology making things better?
Books have been written on this topic. I will highlight a few elements: more research is needed to determine not only which types of therapies are best for which types of patients, but also the manner in which patients receive that care. Should all diabetics be cared for by specialists, or by primary care physicians? In a group practice, or by individual physicians? Whomever they see needs access to full information about each of their patients. Until we have electronic medical record systems that can talk to each other (e.g., from the hospital, the specialist, the primary care doctor, the retail pharmacy), that goal will not be a reality.

And it is more complex than just technological integration. Each system must use the same patient ID, define key clinical terms in similar ways, and capture information in database fields rather than just in free text. Yet another key driver is the way we reimburse providers of care: in a fee-for-service environment, there will inevitably be silos of care, and we will reward the volume of services provided rather than the quality of those services, the outcomes that patients achieve, or whether the care met evidence-based medicine principles.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow? What mistakes do you see people making when they explain evidence?
One size cannot fit all. Each stakeholder has different interests, and among the many individuals, their level of domain expertise and understanding of research methods will also differ. Conveying information effectively requires that it be tailored to those individuals. Moreover, information will most likely be discarded or forgotten unless it is provided at the point of decision making. I may read Consumer Reports each week and learn about the best car or washing machine. But I will not really understand or remember the issues until I am actually shopping for that car or washing machine. In like fashion, we need to provide doctors and patients with the information that they need at the moments when they are making relevant decisions. I refer to these as “teachable” moments.

#5. What do you want your legacy to be?
I’d like to be remembered for having a passion for understanding when care would be appropriate and helping to make it more likely to happen, and having done so in a scientifically rigorous way that is practical in the real world.

Many thanks, Dr. Dubois. Lots of food for thought here; I know the Evidence Soup community will be very interested in your perspective.


____________________________

Chime in. Would you like to be interviewed, or do you have someone to recommend? Drop me a note at tracy AT evidencesoup DOT com.

Wednesday, 08 December 2010

Interview Wednesday: Denise Rousseau, enthusiastic champion of evidence-based management (and university professor).

Today we talk with Denise Rousseau, a professor at Carnegie Mellon University, and self-described inter-galactic champion of evidence-based management. She's active in the Academy of Management and is one of the organizers behind Evidencebased-Management.com. (I've written before about Denise's efforts: In 2007, I discussed her article Is There Such a Thing as Evidence-Based Management? [get the pdf here] and later, about meetups of the Evidence-based Management Collaborative.)

The Five Questions.
#1. What got you interested in evidence?
After 30 years of research and teaching MBAs, I came to realize how little our own MBAs really understood about management research findings, and even worse, how seldom they used research findings in their decision making. Well, that is not quite right: my finance colleagues have students and alums who use financial research findings, and ditto for my operations research colleagues. It is management students (HR, strategy, organizational behavior) who don't know and don't use research findings.

Then, horror of horrors, as Academy of Management president, I started getting emails from AOM members who were less academic research-oriented, complaining about how our journals were no help in their teaching or consulting. My first reaction was to feel bad for them, since AOM wasn't being useful to them in terms of professional updating. And then I stopped and thought about it further. In my own teaching, I periodically have to update and alter what I teach, because new findings make the old obsolete. Why weren't other people feeling the same need? Then it hit me: what if our educators and consultants aren't using the evidence in their work? Inquiry led me to the realization that both in the classroom and in professional practice, lots of people with doctorates in fields related to management were ignoring current evidence, or acting on hunches and impressions that bore no connection to the huge, rich evidence base we have in social science and management. (Long answer, but that's how I got started.)

What types of evidence do you work with most often (medical, business research, statistics, social science, etc.)?
Business and social science research, quantitative, qualitative, meta-analyses, etc. Depends on what the issue or the problem requires.

What is your involvement with evidence: applying it, advocating its use, researching/developing it, synthesizing/explaining/translating it, communicating it?
I try to do all of the above: applying it in my own life as I teach from an evidence-based perspective, and consulting and advising based on evidence from science as well as the facts of the organizational setting itself. Research is still my great love, but if it is seldom used, even the most wonderful research has less value than it would otherwise.

I now do more work trying to integrate different literatures - to understand what the core findings are - and am writing more translation pieces. The latter is the absolute hardest, as writing for colleagues relies on an in-group language, and plain English is not my forte... but I am working on it!

Where do you go looking for evidence, and what types of sources do you prefer? (formally published stuff such as journals, or something less formalized?)
I am a big user of library search engines and databases - sometimes I think I’m a reference librarian at heart. Once I start searching, a) I always find useful things related to the question I am pursuing, and b) I come across other fascinating things I want to know about in the process (sometimes a huge distraction, otherwise a very fulfilling discovery)! So I like ABI/Inform best, along with the psychological research databases.

#2. On a scale of 1 to 10, where 10= ‘It’s crystal clear.’ and 1=’We have no idea why things are happening.’, how would you describe the overall “state of the evidence” in your primary field?
I am trained as an industrial psychologist, and I think that, for what we have studied, we are for the most part an "8".
The field is cumulative, and we know a lot about a) motivation, b) selection, c) performance (especially individual), and d) employment relationships, including the psychological contract.

Which of these situations is most common in your field?
a) Much of the evidence we need doesn’t yet exist.
b) People don't know about the evidence that is available.
c) People don't understand the evidence well enough to apply it.
d) People don’t follow the evidence because it's not the expectation.

"B". People just don't know - and for the most part, academics don't teach behavioral evidence to management students. Instead we use cases, pop theory, and give them little opportunity to practice applying research findings.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality?
Access is a huge problem, complicated by journals and proprietary databases seeking to profit by restricting access. In my ideal world, a literate, college-educated person would not only know about using evidence in their professional work, but would also have been trained in accessing it while at university, so that upon graduation they could stay updated (and their university would give them electronic library privileges as alums, so they could easily do this).

Alums would form communities of practice to help each other keep up to date, and more attention would be given to professional and executive updating on what the science says as it pertains to professional practice. It would be a normal part of developing expertise over the course of a career. Now we are producing too many confident amateurs who don't know what they don't know, and they have the degree to prove it.

Where do you see technology making things better?
Accessibility, improved searching strategies, relational databases, update alerts when new content in an area is put on the web.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow?
In all honesty, I follow the template of Julia Child's approach to mastering the art of French cooking: state the general principle (what is a sauce?), the forms it comes in (Hollandaise, Béarnaise...), what to watch for when you make it, and what to do if a problem turns up. Julia was a master behaviorist, I am convinced.

So in terms of evidence, I focus on core principles: a simple statement of what the finding is (set specific goals to improve performance). Then I discuss conditions of use (procedural knowledge, like how many goals can be set at one time - five or fewer, per cognitive limits on attention) and contingencies (What if it doesn't work? Well, the effect of goals depends on whether the people involved have accepted them. Did you set goals in a way that is legitimate and credible to them?).

What mistakes do you see people making when they explain evidence?
Too much backstory (the five theories that came before this one, the theoretical controversies that remain) - and not enough about what's in it for the end user.

#5. What do you want your legacy to be?
I’d like to be remembered for helping managers and people in organizations make better, more thoughtful, more ethical decisions.

What makes you hopeful?
I have more and more students and alums who just assume that evidence is what a good manager incorporates into his or her thinking and actions. Like, if you aren't using evidence, can you really be a responsible decision maker/professional?

Thanks, Denise.


____________________________

Chime in. Would you like to be interviewed, or do you have someone to recommend? Drop me a note at tracy AT evidencesoup DOT com.