Evidence Soup
How to find, use, and explain evidence.


Wednesday, 20 June 2018

What makes us trust an analysis. + We're moving!


Evidence Soup is moving to a new URL. At DATA FOR DECIDING, I’m continuing my analysis of trends in evidence-based decision making. Hope to see you at my new blog. -Tracy Allison Altman

In Trustworthy Data Analysis, Roger Peng gives an elegant description of how he evaluates analytics presentations, and what factors influence his trust level. First, he methodically puts analytical work into three buckets: A (the material presented), B (work done but not presented), and C (analytical work not done).

“We can only observe A and B and need to speculate about C. The times when I most trust an analysis is when I believe that the C component is relatively small, and is essentially orthogonal to the other components of the equation (A and B). In other words, were one to actually do the things in the ‘Not Done’ bucket, they would have no influence on the overall results of the analysis.”

Peng candidly explains that his response “tends to differ based on who is presenting and my confidence in their ability to execute a good analysis.... If the presenter is someone I trust and have confidence in, then seeing A and part of B may be sufficient and we will likely focus just on the contents in A. In part, this requires my trust in their judgment in deciding what are the relevant aspects to present.”

Is this bias, or just trust-building? When our assessment differs based on who is presenting, Peng acknowledges this perhaps introduces “all kinds of inappropriate biases.” Familiarity with a trusted presenter helps decision makers communicate with them efficiently (speaking the same language, etc.). Which could, of course, erupt into champion bias. But rightly or wrongly, the presenter’s track record is going to be a factor. And this can work both ways: Most of us have likely presented (or witnessed) a flawed finding that caused someone to lose credibility - and winning that credibility back is rather difficult. Thanks to David Napoli (@biff_bruise).



2. Fight right → Better solutions
“All teams that are trying to address complex issues have to find ways to employ all of their diverse, even conflicting contributions. But how do you fight in a way that is productive? There are tactics your team can employ to encourage fair — and useful — fighting.” So says strategy+business in Why teams should argue: Strong teams include diverse perspectives, and healthy working relationships and successful outcomes hinge on honest communication. One tactic is to “sit patiently with the reality of our differences, without insisting that they be resolved”. (Ed. note: This is the 2nd time recently where management advice and marital advice sound the same.)


Events Calendar
Bringing the funny to tech talks: Explaining complex things with humor. Denver, August 13, 2018 - no charge. Meetup with PitchLab, Domain Driven Design, and Papers We Love - Denver.

Decision Analysis Affinity Group (DAAG) annual conference, Denver, Colorado, March 5-8, 2019.

Data Visualization Summit, San Francisco, April 10-11, 2019. Topics will include The Impact of Data Viz on Decision Making.

Photo credit: CloudVisual on Unsplash

Tuesday, 13 March 2018

Prescribe antidepressants → Treat major depression

An impressive network meta-analysis – comparing drug effects across numerous studies – shows “All antidepressants were more efficacious than placebo in adults with major depressive disorder. Smaller differences between active drugs were found when placebo-controlled trials were included in the analysis…. These results should serve evidence-based practice and inform patients, physicians, guideline developers, and policy makers on the relative merits of the different antidepressants.” Findings are in the Lancet.

Thursday, 08 March 2018

Redefining ‘good data science’ to include communication.


Emma Walker explains on VentureBeat The one critical skill many data scientists are missing. She describes the challenge of working with product people, sales teams, and customers: Her experience made her “appreciate how vital communication is as a data scientist. I can learn about as many algorithms or cool new tools as I want, but if I can’t explain why I might want to use them to anyone, then it’s a complete waste of my time and theirs.”

After school, “you go from a situation where you are surrounded by peers who are also experts in your field, or who you can easily assume have a reasonable background and can keep up with you, to a situation where you might be the only expert in the room and expected to explain complex topics to those with little or no scientific background.... As a new data scientist, or even a more experienced one, how are you supposed to predict what those strange creatures in sales or marketing might want to know? Even more importantly, how do you interact with external clients, whose logic and thought processes may not match your own?”


Sounds like the typical “no-brainer”: Obvious in retrospect. Walker reminds us of the now-classic diagram by Drew Conway illustrating the skill groups you need to be a data scientist. However, something is “missing from this picture — a vital skill that comes in many forms and needs constant practice and adaptation to the situation at hand: communication. This isn’t just a ‘soft’ or ‘secondary’ skill that’s nice to have. It’s a must-have for good data scientists.” And, I would add, good professionals of every stripe.

Tuesday, 06 March 2018

Biased evidence skews poverty policy.

Decision bias: food-desert map

In Biased Ways We Look at Poverty, Adam Ozimek reviews new evidence suggesting that food deserts aren’t the problem - behavior is. His Modeled Behavior (Forbes) piece asks why the food desert theory got so much play: “I would argue it reflects liberal bias when it comes to understanding poverty.”

So it seems this poverty-diet debate is about linking cause with effect - always dangerous, bias-prone territory. Citizen data scientists, academics, and everyone in between are at risk of mapping objective data (food store availability vs. income) and then subjectively attributing a cause for poor eating habits.

The study shows very convincingly that the difference in healthy eating is about behavior and demand, not supply.

Ozimek looks at the study The Geography of Poverty and Nutrition: Food Deserts and Food Choices Across the United States, published by the National Bureau of Economic Research. The authors found that differences in healthy eating aren’t explained by prices, concluding that “after excluding fresh produce, healthy foods are actually about eight percent less expensive than unhealthy foods.” Also, people who moved from food deserts to locations with better options continued to make similar dietary choices.

Food for thought, indeed. Rather than following behavioral explanations, Ozimek believes liberal thinking supported the food desert concept “because supply-side differences are more complimentary to poor people, and liberals are biased towards theories of poverty that are complimentary to those in poverty.” Meanwhile, conservatives “are biased towards viewing the behavioral and cultural factors that cause poverty as something that we can’t do anything about.”

Thursday, 01 March 2018

Why don't executives trust analytics?


Last year I spoke with the CEO of a smallish healthcare firm. He had not embraced sophisticated analytics or machine-made decision making, because he had no comfort level for ‘what information he could believe’. He did, however, trust the CFO’s recommendations. Evidently, these sentiments are widely shared.

A new KPMG report reveals a substantial digital trust gap inside organizations: “Just 35% of IT decision-makers have a high level of trust in their organization’s analytics”.

Blended decisions by human and machine are forcing managers to ask Who is responsible when analytics go wrong? Of surveyed executives, 19% said the CIO, 13% said the Chief Data Officer, and 7% said C-level executive decision makers. “Our survey of senior executives is telling us that there is a tendency to absolve the core business for decisions made with machines,” said Brad Fisher, US Data & Analytics Leader with KPMG in the US. “This is understandable given technology’s legacy as a support service.... However, it’s our view that many IT professionals do not have the domain knowledge or the overall capacity required to ensure trust in D&A [data and analytics]. We believe the responsibility lies with the C-suite.... The governance of machines must become a core part of governance for the whole organization.”

Tuesday, 03 January 2017

Valuing patient perspective, moneyball for tenure, visualizing education impacts.

1. Formalized decision process → Conflict about criteria

It's usually a good idea to establish a methodology for making repeatable, complex decisions. But inevitably you'll have to allow wiggle room for the unquantifiable or the unexpected; leaving this gray area exposes you to criticism that it's not a rigorous methodology after all. Other sources of criticism are the weighting and the calculations applied in your decision formulas - and the extent of transparency provided.
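To make the weighting criticism concrete, here's a minimal sketch of how a decision formula like this typically combines criteria weights with option scores - the criteria, weights, and scores are hypothetical, not drawn from any particular framework:

```python
# Hypothetical weighted-scoring sketch: combine criteria weights with option
# scores to rank choices. All names and numbers are illustrative only.
criteria_weights = {"clinical_benefit": 0.5, "cost": 0.3, "patient_impact": 0.2}

options = {
    "Treatment A": {"clinical_benefit": 8, "cost": 4, "patient_impact": 7},
    "Treatment B": {"clinical_benefit": 6, "cost": 9, "patient_impact": 5},
}

def weighted_score(scores, weights):
    """Weighted sum of criterion scores for one option."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in options.items():
    print(name, round(weighted_score(scores, criteria_weights), 2))
```

Small changes to the weights can flip the ranking, which is exactly why the weighting - and how transparently it's disclosed - draws so much scrutiny.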

How do you set priorities? In healthcare, how do you decide who to treat, at what cost? To formalize the process of choosing among options, several groups have created so-called value frameworks for assessing medical treatments - though not without criticism. Recently Ugly Research co-authored a post summarizing industry reaction to the ICER value framework developed by the Institute for Clinical and Economic Review. Incorporation of patient preferences (or lack thereof) is a hot topic of discussion.

To address this proactively, Faster Cures has led creation of the Patient Perspective Value Framework to inform other frameworks about what's important to patients (cost? impact on daily life? outcomes?). They're asking for comments on their draft report; comment using this questionnaire.

2. Analytics → Better tenure decisions
New analysis in the MIT Sloan Management Review observes "Using analytics to improve hiring decisions has transformed industries from baseball to investment banking. So why are tenure decisions for professors still made the old-fashioned way?"

Ironically, academia often proves to be one of the last fields to adopt change. Erik Brynjolfsson and John Silberholz explain that "Tenure decisions for the scholars of computer science, economics, and statistics — the very pioneers of quantitative metrics and predictive analytics — are often insulated from these tools." The authors say "data-driven models can significantly improve decisions for academic and financial committees. In fact, the scholars recommended for tenure by our model had better future research records, on average, than those who were actually granted tenure by the tenure committees at top institutions."


3. Visuals of research findings → Useful evidence
The UK Sutton Trust-EEF Teaching and Learning Toolkit is an accessible summary of educational research. The purpose is to help teachers and schools more easily decide how to apply resources to improve outcomes for disadvantaged students. Research findings on selected topics are nicely visualized in terms of implementation cost, strength of supporting evidence, and the average impact on student attainment.

4. Absence of patterns → File-drawer problem
We're only human. We want to see patterns, and are often guilty of 'seeing' patterns that really aren't there. So it's no surprise we're uninterested in research that lacks significance, and disregard findings revealing no discernible pattern. Stashing away such projects is called the file-drawer problem: the unpublished null results could be valuable to others who might otherwise pursue a similar line of investigation. But Data Colada says the file-drawer problem is unfixable, and that’s OK.

5. Optimal stopping algorithm → Practical advice?
In Algorithms to Live By, Stewart Brand describes an innovative way to help us make complex decisions. "Deciding when to stop your quest for the ideal apartment, or ideal spouse, depends entirely on how long you expect to be looking.... [Y]ou keep looking and keep finding new bests, though ever less frequently, and you start to wonder if maybe you refused the very best you’ll ever find. And the search is wearing you down. When should you take the leap and look no further?"

Optimal Stopping is a mathematical concept for optimizing a choice, such as making the right hire or landing the right job. Brand says "The answer from computer science is precise: 37% of the way through your search period." The question is, how can people translate this concept into practical steps guiding real decisions? And how can we apply it while we live with the consequences?
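For the curious, here's a minimal simulation sketch of that 37% rule (the classic 'secretary problem'); the candidate count and trial count are arbitrary choices for illustration:

```python
import random

def simulate_secretary(n_candidates=100, trials=10_000, look_fraction=0.37):
    """Simulate the optimal-stopping rule: observe the first ~37% of candidates
    without committing, then accept the next candidate better than all seen so far."""
    cutoff = int(n_candidates * look_fraction)
    successes = 0
    for _ in range(trials):
        candidates = list(range(n_candidates))  # higher number = better candidate
        random.shuffle(candidates)
        best_seen = max(candidates[:cutoff]) if cutoff else -1
        chosen = candidates[-1]  # forced to take the last one if nobody beats best_seen
        for c in candidates[cutoff:]:
            if c > best_seen:
                chosen = c
                break
        successes += (chosen == n_candidates - 1)  # did we pick the overall best?
    return successes / trials

print(simulate_secretary())  # roughly 0.37, matching the 37% rule
```

Following the rule lands the single best option only about 37% of the time - far better than chance, but no guarantee, which is part of why translating the concept into real decisions (and living with the consequences) is hard.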

Tuesday, 20 December 2016

Choices, policy, and evidence-based investment.


1. Bad Arguments → Bad Choices
Great news. There will be a follow-on to the excellent Bad Arguments book by @alialmossawi. The book of Bad Choices will be released this April by major publishers. You can preorder now.

2. Evidence-based decisions → Effective policy outcomes
The conservative think tank, Heritage Foundation, is advocating for evidence-based decisions in the Trump administration. Their recommendations include resurrection of PART (the Program Assessment Rating Tool) from the George W. Bush era, which ranked federal programs according to effectiveness. "Blueprint for a New Administration offers specific steps that the new President and the top officers of all 15 cabinet-level departments and six key executive agencies can take to implement the long-term policy visions reflected in Blueprint for Reform." Read a nice summary here by Patrick Lester at the Social Innovation Research Center (@SIRC_tweets).


3. Pioneer drugs → Investment value
"Why do pharma firms sometimes prioritize 'me-too' R&D projects over high-risk, high-reward 'pioneer' programs?" asks Frank David at Pharmagellan (@Frank_S_David). "[M]any pharma financial models assume first-in-class drugs will gain commercial traction more slowly than 'followers.' The problem is that when a drug’s projected revenues are delayed in a financial forecast, this lowers its net present value – which can torpedo the already tenuous investment case for a risky, innovative R&D program." Their research suggests that pioneer drugs see peak sales around 6 years, similar to followers: "Our finding that pioneer drugs are adopted no more slowly than me-too ones could help level the economic playing field and make riskier, but often higher-impact, R&D programs more attractive to executives and investors."

Details appear in the Nature Reviews article, Drug launch curves in the modern era. Pharmagellan will soon release a book on biotech financial modeling.
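To see why delayed revenue erodes net present value, here's a minimal NPV sketch with made-up launch curves and a made-up discount rate (not figures from the Pharmagellan analysis):

```python
# Minimal NPV sketch: the same peak sales, reached a year later, produce a
# lower net present value. Cash flows ($M) and discount rate are illustrative.
def npv(cash_flows, discount_rate=0.10):
    """Discount each year's cash flow back to present value and sum."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

fast_uptake = [50, 150, 300, 400, 400, 400]      # follower-style launch curve
slow_uptake = [10, 50, 150, 300, 400, 400, 400]  # same peak, delayed a year

print(round(npv(fast_uptake)), round(npv(slow_uptake)))
# The delayed curve yields a noticeably lower NPV despite identical peak sales.
```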

4. Unrealistic expectations → Questioning 'evidence-based medicine'
As we've noted before, @EvidenceLive has a manifesto addressing how to make healthcare decisions, and how to communicate evidence. The online comments are telling: Evidence-based medicine is perhaps more of a concept than a practical thing. The spot-on @trishgreenhalgh says "The world is messy. There is no view from nowhere, no perspective that is free from bias."

Evidence & Insights Calendar.

Jan 23-25, London: Advanced Pharma Analytics 2017. Spans topics from machine learning to drug discovery, real-world evidence, and commercial decision making.

Feb 1-2, San Francisco. Advanced Analytics for Clinical Data 2017. All about accelerating clinical R&D with data-driven decision making for drug development.

Tuesday, 15 November 2016

Building trust with evidence-based insights.


This week we examine how executives can more fully grasp complex evidence/analysis affecting their outcomes - and how analytics professionals can better communicate these findings to executives. Better performance and more trust are the payoffs.

1. Show how A → B. Our new guide to Promoting Evidence-Based Insights explains how to engage stakeholders with a data value story. Shape content around four essential elements: Top-line, evidence-based, bite-size, and reusable. It's a suitable approach whether you're in marketing, R&D, analytics, or advocacy.

No knowledge salad. To avoid tl;dr or MEGO (My Eyes Glaze Over), be sure to emphasize insights that matter to stakeholders. Explicitly connect specific actions with important outcomes, identify your methods, and provide a simple visual - this establishes trust and credibility. Be succinct; you can drill down into detailed evidence later. The guide is free from Ugly Research.

Guide to Insights by Ugly Research


2. Lack of analytics understanding → Lack of trust.
Great stuff from KPMG: Building trust in analytics: Breaking the cycle of mistrust in D&A. "We believe that organizations must think about trusted analytics as a strategic way to bridge the gap between decision-makers, data scientists and customers, and deliver sustainable business results. In this study, we define four ‘anchors of trust’ which underpin trusted analytics. And we offer seven key recommendations to help executives improve trust throughout the D&A value chain.... It is not a one-time communication exercise or a compliance tick-box. It is a continuous endeavor that should span the D&A lifecycle from data through to insights and ultimately to generating value."

Analytics professionals aren't feeling the C-Suite love. Information Week laments the lack of transparency around analytics: When non-data professionals don't know or understand how it is performed, it leads to a lack of trust. But that doesn't mean the data analytics efforts themselves are not worthy of trust. It means that the non-data pros don't know enough about these efforts to trust them.

KPMG Trust in data and analytics


3. Execs understand advanced analytics → See how to improve business
McKinsey has an interesting take on this: Executives can't avoid understanding advanced analytics - they can no longer just 'leave it to the experts', because they must understand the art of the possible for improving their business.

Analytics expertise is widespread in operational realms such as manufacturing and HR. Finance data science must be a priority for CFOs to secure a place at the planning table. Mary Driscoll explains that CFOs want analysts trained in finance data science. "To be blunt: When [line-of-business] decision makers are using advanced analytics to compare, say, new strategies for volume, pricing and packaging, finance looks silly talking only in terms of past accounting results."

4. Macroeconomics is a pseudoscience.
NYU professor Paul Romer's The Trouble With Macroeconomics is a widely discussed, skeptical analysis of macroeconomics. The opening to his abstract is excellent, making a strong point right out of the gate. Great writing, great questioning of tradition. "For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as 'tight monetary policy can cause a recession.'" Other critics also seek transparency: Alan Jay Levinovitz writes in @aeonmag The new astrology: By fetishising mathematical models, economists turned economics into a highly paid pseudoscience.

5. Better health evidence to a wider audience.
From the Evidence Live Manifesto: Improving the development, dissemination, and implementation of research evidence for better health.

"7. Evidence Communication.... 7.2 Better communication of research: High quality, important research that matters has to be understandable and informative to a wide audience. Yet , much of what is currently produced is not directed to a lay audience, is often poorly constructed and is underpinned by a lack of training and guidance in this area." Thanks to Daniel Barth-Jones (@dbarthjones).

Photo credit: Steve Lav - Trust on Flickr

Tuesday, 27 September 2016

Improving vs. proving, plus bad evidence reporting.


If you view gathering evidence as simply a means of demonstrating outcomes, you’re missing a trick. It’s most valuable when part of a journey of iterative improvement. - Frances Flaxington

1. Immigrants to US don't disrupt employment.
There is little evidence that immigration significantly affects overall employment of native-born US workers. This according to an expert panel's 500-page report. We thought you might like this condensed version from PepperSlice.

Bad presentation alert: The report, The Economic and Fiscal Consequences of Immigration, offers no summary visuals and buries its conclusions deep within dense chapters. Perhaps the methodology is the problem: the report documents the "evidence-based consensus of an authoring committee of experts". People need concise synthesis and actionable findings: What can policy makers do with this information?

Bad reporting alert: Perhaps unsatisfied with these findings, Julia Preston of the New York Times slipped her own claim into the coverage, saying the report "did not focus on American technology workers [true], many of whom have been displaced from their jobs in recent years by immigrants on temporary visas [unfounded claim]". Rather sloppy reporting, particularly when covering an extensive economic study of immigration impacts.



Key evidence: "Empirical research in recent decades suggests that findings remain by and large consistent with those in The New Americans (National Research Council, 1997) in that, when measured over a period of 10 years or more, the impact of immigration on the wages of natives overall is very small." [page 204]

“Immigration also contributes to the nation’s economic growth.... Perhaps even more important than the contribution to labor supply is the infusion by high-skilled immigration of human capital that has boosted the nation’s capacity for innovation and technological change. The contribution of immigrants to human and physical capital formation, entrepreneurship, and innovation are essential to long-run sustained economic growth.” [page 243]

Author: @theNASEM, the National Academies of Sciences, Engineering, and Medicine.

Relationship: immigration → sustains → economic growth


2. Improving vs. proving.
On @A4UEvidence: "We often assume that generating evidence is a linear progression towards proving whether a service works. In reality the process is often two steps forward, one step back." Ugly Research supports the 'what works' concept, but wholeheartedly agrees that "The fact is that evidence rarely provides a clear-cut truth – that a service works or is cost-beneficial. Rather, evidence can support or challenge the beliefs that we, and others, have and it can point to ways in which a service might be improved."


3. Who should make sure policy is evidence-based and transparent?
Bad PR alert? Is it government's responsibility to make policy transparent and balanced? If so, some are accusing the FDA of not holding up their end on drug and medical device policy. A recent 'close-held embargo' of an FDA announcement made NPR squirm. Scientific American says the deal was this: "NPR, along with a select group of media outlets, would get a briefing about an upcoming announcement by the U.S. Food and Drug Administration a day before anyone else. But in exchange for the scoop, NPR would have to abandon its reportorial independence. The FDA would dictate whom NPR's reporter could and couldn't interview.

"'My editors are uncomfortable with the condition that we cannot seek reaction,' NPR reporter Rob Stein wrote back to the government officials offering the deal. Stein asked for a little bit of leeway to do some independent reporting but was turned down flat. Take the deal or leave it."


Evidence & Insights Calendar

November 9-10, Philadelphia: Real-World Evidence & Market Access Summit 2016. "No more scandals! Access for Patients. Value for Pharma."

29 Oct-2 Nov, Vienna, Austria: ISPOR 19th Annual European Congress. Plenary: "What Synergies Could Be Created Between Regulatory and Health Technology Assessments?"

October 3-6, National Harbor, Maryland. AMCP Nexus 2016. Special topic: "Behavioral Economics - What Does it All Mean?"


Photo credit: Turtle on Flickr.

Tuesday, 13 September 2016

Battling antimicrobial resistance, visualizing data, and value in health.


PepperSlice Board of the Week: Dentists will slow down on antibiotics if you show them a chart of their prescribing numbers. 

Antimicrobial resistance is a serious public health concern. PLOS Medicine has published findings from an RCT studying whether quantitative feedback about prescribing patterns reduces dentists' antibiotic prescriptions. The intervention group prescribed substantially fewer antibiotics per 100 cases.

The Evidence. Peer-reviewed: An Audit and Feedback Intervention for Reducing Antibiotic Prescribing in General Dental Practice.

Data: Collected using RAPiD Cluster Randomised Controlled Trial, and analyzed with ANCOVA.

Relationship: historical data ➞ influence ➞ dentist antibiotic prescribing rates

This study evaluated the impact of providing general-practice dentists with individualised feedback consisting of a line graph of their monthly antibiotic prescribing rate. Rates in the intervention group were substantially lower than in the control group.

From the authors: "The feedback provided in this study is a relatively straightforward, low-cost public health and patient safety intervention that could potentially help the entire healthcare profession address the increasing challenge of antimicrobial resistance." Authors: Paula Elouafkaoui et al.
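For readers curious how an analysis like this is set up, here's a minimal ANCOVA sketch using the statsmodels formula API - the data and column names are hypothetical, not the trial's:

```python
# Minimal ANCOVA sketch (hypothetical data): model the post-intervention
# prescribing rate as a function of treatment group, adjusting for the
# baseline (historical) rate.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["intervention"] * 4 + ["control"] * 4,
    "baseline_rate": [2.1, 1.8, 2.5, 2.0, 2.2, 1.9, 2.4, 2.1],  # Rx per 100 cases
    "post_rate":     [1.6, 1.4, 2.0, 1.5, 2.3, 2.0, 2.5, 2.2],
})

model = smf.ols("post_rate ~ C(group) + baseline_rate", data=df).fit()
print(model.summary())  # the group coefficient estimates the intervention effect
```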

#: evidentista, antibiotics, evidence-based practice



2. Visualizing data distributions.
Nathan Yau's fantastic blog, Flowing Data, offers a simple explanation of distributions - the spread of a dataset - and how to compare them. Highly recommended. "Single data points from a large dataset can make it more relatable, but those individual numbers don’t mean much without something to compare to. That’s where distributions come in."
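In that spirit, here's a minimal sketch of comparing two distributions with summary statistics and overlaid histograms; the data are synthetic:

```python
# Minimal distribution-comparison sketch with synthetic data: print medians
# and IQRs, then overlay normalized histograms for a visual comparison.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=10, size=1000)  # hypothetical measurements
group_b = rng.normal(loc=55, scale=15, size=1000)

for name, data in [("A", group_a), ("B", group_b)]:
    iqr = np.percentile(data, 75) - np.percentile(data, 25)
    print(name, f"median={np.median(data):.1f}", f"IQR={iqr:.1f}")

plt.hist(group_a, bins=30, alpha=0.5, density=True, label="Group A")
plt.hist(group_b, bins=30, alpha=0.5, density=True, label="Group B")
plt.xlabel("Value")
plt.ylabel("Density")
plt.legend()
plt.show()
```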


3. Calculating 'expected value' of health interventions.
Frank David provides a useful reminder of the realities of computing 'expected value'. Sooner or later, we must make simplifying assumptions, and compare costs and benefits on similar terms (usually $). On Forbes he walks us through a straightforward calculation of the value of an EpiPen. (Frank's firm, Pharmagellan, is coming out with a book on biotech financial modeling, and we look forward to that.)
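The basic mechanics are simple: weight each outcome's net benefit by its probability and sum. Here's a minimal sketch with illustrative numbers (not the figures from the Forbes piece):

```python
# Minimal expected-value sketch (illustrative numbers only): weight each
# outcome's net benefit by its probability, then sum.
scenarios = [
    # (probability, net benefit in $ per patient-year)
    (0.01, 1_000_000),   # severe reaction averted
    (0.99, -600),        # device purchased but never needed
]

expected_value = sum(p * value for p, value in scenarios)
print(f"Expected value: ${expected_value:,.0f} per patient-year")
```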



4. What is Bayesian, really?
In the Three Faces of Bayes, @burrsettles beautifully describes three uses of the term Bayesian, and wonders "Why is it that Bayesian networks, for example, aren’t considered… y’know… Bayesian?" Recommended for readers wanting to know more about these algorithms for machine learning and decision analysis.
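For readers new to the term, here's a minimal sketch of a Bayesian update in the decision-analysis sense - revising a prior belief with new evidence via Bayes' rule; the probabilities are invented for illustration:

```python
# Minimal Bayes' rule sketch (illustrative numbers): update a prior belief
# in a hypothesis after observing new evidence.
prior = 0.10                 # P(hypothesis), e.g., prior belief a campaign lifts sales
p_data_given_h = 0.80        # P(evidence | hypothesis true)
p_data_given_not_h = 0.20    # P(evidence | hypothesis false)

posterior = (p_data_given_h * prior) / (
    p_data_given_h * prior + p_data_given_not_h * (1 - prior)
)
print(f"Posterior belief: {posterior:.2f}")  # ~0.31 after seeing the evidence
```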


Fun Fact: Everyone can stop carrying around fake babies. Evidence tells us baby simulators don't deter teen pregnancy after all.


Evidence & Insights Calendar:

September 19-21; Boston. FierceBiotech Drug Development Forum. Evaluate challenges, trends, and innovation in drug discovery and R&D. Covering the entire drug development process, from basic research through clinical trials.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.