Evidence Soup
How to find, use, and explain evidence.

Tuesday, 09 February 2016

How Warby Parker created a data-driven culture.

 

[Image: Creating a Data-Driven Organization]

 

What does it take to become a data-driven organization? "Far more than having big data or a crack team of unicorn data scientists, it requires establishing an effective, deeply ingrained data culture," says Carl Anderson, director of data science at the phenomenally successful Warby Parker. In his recent O'Reilly book Creating a Data-Driven Organization, he explains how to build the analytics value chain required for valuable, predictive business models: From data collection and analysis to insights and leadership that drive concrete actions. Follow Anderson @LeapingLlamas.

The book combines practical advice, delivered in a conversational style, with references and examples from the management literature. It's an excellent resource for real-world examples and highlights of current management research. The chapter on creating the right culture is a good reminder that leadership and transparency are must-haves.

[Diagram: Ugly Research action-outcome framework]

Although the scope is quite ambitious, Anderson offers thoughtful organization, hitting the highlights without an overwhelmingly lengthy literature survey. My company, Ugly Research, is delighted to be cited in the decision-making chapter (page 196 in the hard copy, page 212 in the pdf download). As shown in the diagram, with PepperSlice we provide a way to present evidence to decision makers in the context of a specific 'action-outcome' prediction or particular decision step.

Devil's advocate point of view. Becoming 'data-driven' is context sensitive, no doubt. The author is Director of Data Science at Warby Parker, so unsurprisingly the emphasis is on technologies that enable data gathering for consumer marketing. While the book does address several management and leadership issues, such as selling a data-driven idea internally, it primarily speaks to the perspective of someone no more than two or three degrees of separation from the data; a senior executive working with an old-style C-suite would likely need to take additional steps to fill the gaps.

The book isn't so much about how to make decisions as about how to create an environment where decision makers are open to new ideas, and to testing those ideas with data-driven insights. Because without ideas and evidence, what's the point of a good decision process?

 

Tuesday, 02 February 2016

The new ISPOR #pharma health decision guidance: How it's like HouseHunters.

[Image: HouseHunters-style decision checklist]

'Multiple criteria decision analysis' is a crummy name for a great concept (aren't all big decisions analyzed using multiple criteria?). MCDA means assessing alternatives while simultaneously considering several objectives. It's a useful way to look at difficult choices in healthcare, oil production, or real estate. But oftentimes, results of these analyses aren't communicated clearly, limiting their usefulness (more about that below).

The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) has developed new MCDA guidance, available in the latest issue of Value in Health (paywall). To be sure, healthcare decision makers have always weighed medical, social, and economic factors: MCDA helps stakeholders bring concrete choices and transparency to the process of evaluating outcomes research - where, as we know, controversy is always a possibility.

Anyone can use MCDA. To put it mildly, it’s difficult to balance saving lives with saving money. Fundamentally, MCDA means listing options, defining decision criteria, weighting those criteria, and then scoring each option. Some experts build complex economic models, but anyone can apply this decision technique in effective, less rigorous ways.
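For the spreadsheet-inclined, the whole mechanism fits in a few lines. Here's a minimal sketch in Python; the criteria, weights, and scores are invented for illustration, not taken from the ISPOR guidance:

```python
# Minimal MCDA weighted-scoring sketch (hypothetical criteria and scores).
# Each option is scored 0-10 on each criterion; weights sum to 1.

weights = {"clinical_benefit": 0.5, "cost": 0.3, "ease_of_use": 0.2}

options = {
    "Treatment A": {"clinical_benefit": 8, "cost": 4, "ease_of_use": 7},
    "Treatment B": {"clinical_benefit": 6, "cost": 9, "ease_of_use": 5},
}

def mcda_score(scores, weights):
    """Weighted sum of criterion scores for a single option."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in options.items():
    print(f"{name}: {mcda_score(scores, weights):.1f}")
# Treatment A: 0.5*8 + 0.3*4 + 0.2*7 = 6.6
# Treatment B: 0.5*6 + 0.3*9 + 0.2*5 = 6.7
```

The arithmetic is trivial; the hard (and valuable) part is the conversation that produces the weights.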

You know those checklists at the end of every HouseHunters episode where buyers weigh location and size against budget? That's essentially it: People making important decisions, applying judgment, and weighing multiple goals (raise the kids in the city or the burbs?) - and even though they start out by ranking priorities, once buyers see their actual options, deciding on a house becomes substantially more complex.

MCDA gains traction in health economics. As shown in the diagram (source: ISPOR), the analysis hinges on assigning relative weights to individual decision criteria. While this brings rationality and transparency to complex decisions, it also invites passionate discussions. Some might expect these techniques to remove human judgment from the process, but MCDA leaves it front and center.


Looking for new ways to communicate health economics research and other medical evidence? Join me and other speakers at the 2nd annual HEOR Writing workshop in March.                                      


Pros and cons. Let’s not kid ourselves: You have to optimize on something. MCDA is both beautiful and terrifying because it forces us to identify tradeoffs: Quality, quick improvement, long-term health benefits? Uncertain outcomes only complicate things further.

MCDA is a great way to bring interdisciplinary groups into a conversation. It's essential to communicate the analysis effectively, so stakeholders understand the data and why they matter - without burying them in so much detail that the audience is lost.

One of the downsides is that, upon seeing elaborate projections and models, people can become overconfident in the numbers. Uncertainty is never fully recognized or quantified. (Recall the Rumsfeldian unknown unknown.) Sensitivity analysis is essential to illustrate which predicted outcomes are strongly influenced by small adjustments.
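A rough way to see this in code: nudge each weight a little, re-normalize, and check whether the top-ranked option changes. This sketch reuses the hypothetical treatments and weights from the earlier example:

```python
# Rough sensitivity check on the hypothetical MCDA example above:
# perturb each weight, re-normalize, and see whether the winner flips.

weights = {"clinical_benefit": 0.5, "cost": 0.3, "ease_of_use": 0.2}
options = {
    "Treatment A": {"clinical_benefit": 8, "cost": 4, "ease_of_use": 7},
    "Treatment B": {"clinical_benefit": 6, "cost": 9, "ease_of_use": 5},
}

def winner(w):
    score = lambda s: sum(w[c] * s[c] for c in w)
    return max(options, key=lambda name: score(options[name]))

baseline = winner(weights)
for criterion in weights:
    for delta in (-0.1, 0.1):
        w = dict(weights)
        w[criterion] = max(0.0, w[criterion] + delta)
        total = sum(w.values())
        w = {c: v / total for c, v in w.items()}  # re-normalize to sum to 1
        if winner(w) != baseline:
            print(f"Ranking flips when the {criterion} weight shifts by {delta:+.1f}")
```

If small shifts like these change the recommendation, that's worth surfacing to decision makers rather than burying in an appendix.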

Resources to learn more. If you want to try MCDA, I strongly recommend picking up one of the classic texts, such as Smart Choices: A Practical Guide to Making Better Decisions. Additionally, ISPOR's members offer useful insights into the pluses and minuses of this methodology - see, for example, Does the Future Belong to MCDA? The level of discourse over this guidance illustrates how challenging healthcare decisions have become.  

I'm presenting at the HEOR Writing workshop on communicating value messages clearly with data. March 17-18 in Philadelphia.

Monday, 25 January 2016

How many students are assigned Hamlet, and how many should be?

[Image: Syllabus Explorer]

Out of a million college classes, how many do you suppose are assigned Hamlet, and how many should be? Professors make important judgments when designing syllabi - yet little is known about what students learn. In Friday's New York Times, members of the Open Syllabus Project describe their effort to open the "curricular black box". As explained in What a Million Syllabuses Can Teach Us, the project seeks to discover what's being assigned.

@OpenSyllabus has ingested information for > 933,000 courses, extracting metadata and making it available online (details are masked to preserve confidentiality). The search engine Syllabus Explorer is now available in beta. 

New metric. With this analysis, the project team is introducing a new "metric based on the frequency with which works are taught, which we call the 'teaching score'." They believe the metric is useful because "teaching captures a very different set of judgments about what is important than publishing does".

For me, this evokes fond memories of my own PhD research: As part of my work I did a citation analysis, wanting to understand the cohesiveness among authorities cited by stakeholders addressing a common problem. Essentially, I wanted to open the "evidentiary black box" and discover how extensively people draw from a common set of scientific findings and policy research.

The evidentiary black box. Rather than ingest as many bibliographies as I could find, I analyzed every bibliography, authority, or reference cited to inform a particular decision - in this case, a US Environmental Protection Agency decision establishing a new set of air pollution regulations. I compared all the evidence cited by commenters on the EPA issue - private companies, environmental groups, government agencies, and other stakeholders. Several hundred commenters submitted several thousand citations from science and policy journals, trade publications, or gray literature. I found almost no overlap; most authorities were cited only once, and the most commonly cited evidence was referenced only five times.
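The core of that analysis is just counting: who cites what, and how often the same authority shows up across commenters. Here's a small sketch with invented citation data (the real dataset came from the EPA docket):

```python
# Count how many commenters cite each authority, then look at the overlap.
# The citation data below are invented for illustration.
from collections import Counter

citations_by_commenter = {
    "Environmental group A": {"Smith 1994", "EPA staff paper", "Jones 1991"},
    "Trade association B":   {"EPA staff paper", "Industry white paper"},
    "State agency C":        {"Jones 1991", "Monitoring report 1993"},
}

counts = Counter(
    authority
    for cited in citations_by_commenter.values()
    for authority in cited
)

singletons = sum(1 for n in counts.values() if n == 1)
print(f"{singletons} of {len(counts)} authorities were cited by only one commenter")
print("Most widely cited:", counts.most_common(1))
```

The real distribution looked like this, only more extreme: nearly everything was a singleton.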

How much overlap is good? In my citation analysis, I had hoped to find public discourse drawing from a common body of knowledge, manifested as citations common to more than a smattering of participants. In contrast, the Syllabus Explorer reveals that many college students are reading the usual suspects: Strunk and White, Hamlet, etc. But how many would be too many, and how much syllabus overlap would be too much? Currently 2,400 classes are assigned Hamlet out of nearly a million syllabi. We want to foster diverse knowledge by examining a wide range of sources, while also developing common frameworks for understanding the world.

Tuesday, 12 January 2016

Game theory for Jeopardy!, evidence for gun control, and causality.

This week's 5 links on evidence-based decision making.

1. Deep knowledge → Wagering strategy → Jeopardy! win
Some Jeopardy! contestants struggle with the strategic elements of the show. Rescuing us is Keith Williams (@TheFinalWager), with the definitive primer on Jeopardy! strategy, applying game theory to every episode and introducing "the fascinating world of determining the optimal approach to almost anything".

2. Gun controls → Less violence? → Less tragedy?
Does the evidence support new US gun control proposals? In the Pacific Standard, Francie Diep cites several supporting scientific studies.

3. New data sources → Transparent methods → Health evidence
Is 'real-world' health evidence closer to the truth than data from more traditional categories? FDA staff explain in What We Mean When We Talk About Data. Thanks to @MandiBPro.

4. Data model → Cause → Effect
In Why: A Guide to Finding and Using Causes, Samantha Kleinberg aims to explain why causality is often misunderstood and misused: What is it, why is it so hard to find, and how can we do better at interpreting it? The book excerpt explains that "Understanding when our inferences are likely to be wrong is particularly important for data science, where we’re often confronted with observational data that is large and messy (rather than well-curated for research)."

5. Empirical results → Verification → Scientific understanding
Independent verification is essential to scientific progress. But in academia, verifying empirical results is difficult and not rewarded. This is the reason for Curate Science, a tool making it easier for researchers to independently verify each other’s evidence and award credit for doing so. Follow @CurateScience.

Join me at the HEOR writing workshop March 17 in Philadelphia. I'm speaking about communicating data, and leading an interactive session on data visualization. Save $300 before Jan 15.

Tuesday, 22 December 2015

Asthma heartbreak, cranky economists, and prediction markets.

This week's 5 links on evidence-based decision making.

1. Childhood stress → Cortisol → Asthma
Heartbreaking stories explain likely connections between difficult childhoods and asthma. Children in Detroit suffer a high incidence of attacks - regardless of allergens, air quality, and other factors. Peer-reviewed research shows excess cortisol may be to blame.

2. Prediction → Research heads up → Better evidence
Promising technique for meta-research. A prediction market was created to quantify the reproducibility of 44 studies published in prominent psychology journals, and to estimate the likelihood of hypothesis acceptance at different stages. The market outperformed individual forecasts, as described in PNAS (Proceedings of the National Academy of Sciences).

3. Fuzzy evidence → Wage debate → Policy fail
More fuel for the minimum-wage fire. Depending on who you ask, a high minimum wage either bolsters the security of hourly workers or destroys the jobs they depend on. Recent example: David Neumark's claims about unfavorable evidence.

4. Decision tools → Flexible analysis → Value-based medicine
Drug Abacus is an interactive tool for understanding drug pricing. This very interesting project, led by Peter Bach at Memorial Sloan Kettering, compares the price of a drug (US$) with its "worth", based on outcomes, toxicity, and other factors. Hopefully @drugabacus signals the future for health technology assessment and value-based medicine.

5. Cognitive therapy → Depression relief → Fewer side effects
A BMJ systematic review and meta-analysis show that depression can be treated with cognitive behavior therapy, possibly with outcomes equivalent to antidepressants. Consistent CBT treatment is a challenge, however. AHRQ reports similar findings from comparative effectiveness research; the CER study illustrates how to employ expert panels to transparently select research questions and parameters.

Monday, 14 December 2015

'Evidence-based' is a thing. It was a very good year.

2015 was kind to the 'evidence-based' movement. Leaders in important sectors - ranging from healthcare to education policy - are adopting standardized, rigorous methods for data gathering, analytics, and decision making. Evaluation of interventions will never be the same.

With so much data available, it's a non-stop effort to pinpoint which sources possess the validity, value, and power to identify, describe, or predict transformational changes to important outcomes. But this is the only path to sustaining executives' confidence in evidence-based methods.

Here are a few examples of evidence-based game-changers, followed by a brief summary of challenges for 2016.

What works: What Works Cities is using data and evidence to improve results for city residents. The Laura and John Arnold Foundation is expanding funding for low-cost, randomized controlled trials (RCTs) - part of its effort to expand the evidence base for “what works” in U.S. social spending.

Evidence-based HR: KPMG consulting practice leaders say "HR isn’t soft science, it’s about hard numbers, big data, evidence."

Comparative effectiveness research: Evidence-based medicine continues to thrive. Despite some challenges with over-generalizing the patient populations, CER provides great examples of systematic evidence synthesis. This AHRQ report illustrates a process for transparently identifying research questions and reviewing findings, supported by panels of experts.

Youth mentoring: Evidence-based programs are connecting research findings with practices and standards for mentoring distinct youth populations (such as children with incarcerated parents). Nothing could be more important. #MentoringSummit2016

Nonprofit management: The UK-based Alliance for Useful Evidence (@A4UEvidence) is sponsoring The Science of Using Science Evidence: A systematic review, policy report, and conference to explore what approaches best enable research use in decision-making for policy and practice. 

Education: The U.S. House passed the Every Student Succeeds Act, outlining provisions for evidence collection, analysis, and use in education policy. The law is intended to improve outcomes by shifting $2 billion in annual funding toward evidence-based solutions.

Issues for 2016.

Red tape. Explicitly recognizing tiers of acceptable evidence, and how they're collected, is an essential part of evidence-based decision making. But with standardization also comes bureaucracy, particularly for government programs. The U.S. Social Innovation Fund raises awareness for rigorous social program evidence - but runs the risk of slowing progress with exhaustive recognition of various sanctioned study designs (we're at 72 and counting).

Meta-evidence. We'll need lots more evidence about the evidence, to answer questions like: Which forms of evidence are most valuable, useful, and reliable - and which ones are actually applied to important decisions? When should we standardize decision making, and when should we allow a more fluid process?

Tuesday, 08 December 2015

Biased hiring algorithms and Uber is not disruptive.

This week's 5 links on evidence-based decision making.

1. Unconscious bias → Biased algorithms → Less hiring diversity
On Science Friday (@SciFri), experts pointed out unintended consequences in algorithms for hiring. But even better was the discussion with the caller from Google, who wrote an algorithm predicting tech employee performance and seemed to be relying on unvalidated, self-reported variables. Talk about reinforcing unconscious bias. He seemed sadly unaware of the irony of the situation.

2. Business theory → Narrow definitions → Subtle distinctions
If Uber isn't disruptive, then what is? Clayton Christensen (@claychristensen) has chronicled important concepts about business innovation. But now his definition of ‘disruptive innovation’ tells us Uber isn't disruptive - something about entrants and incumbents, and there are charts. Do these distinctions matter? Plus, ever try to get a cab in SF circa 1999? Yet this new HBR article claims Uber didn't "primarily target nonconsumers — people who found the existing alternatives so expensive or inconvenient that they took public transit or drove themselves instead: Uber was launched in San Francisco (a well-served taxi market)".

3. Meta evidence → Research quality → Lower health cost
The fantastic Evidence Live conference posted a call for abstracts. Be sure to follow the @EvidenceLive happenings at Oxford University, June 2016. Speakers include luminaries in the movement for better meta research.

4. Mythbusting → Evidence-based HR → People performance
The UK group Science for Work is helping organizations gather evidence for HR mythbusting (@ScienceForWork).

5. Misunderstanding behavior → Misguided mandates → Food label fail
Aaron E. Carroll (@aaronecarroll), the Incidental Economist, explains on NYTimes Upshot why U.S. requirements for menu labeling don't change consumer behavior.

*** Tracy Altman will be speaking on writing about data at the HEOR and Market Access workshop March 17-18 in Philadelphia. ***

Tuesday, 24 November 2015

Masters of self-deception, rapid systematic reviews, and Gauss v. Legendre.

This week's 5 links on evidence-based decision making.

1. Human fallibility → Debiasing techniques → Better science
Don't miss Regina Nuzzo's fantastic analysis in Nature: How scientists trick themselves, and how they can stop. @ReginaNuzzo explains why people are masters of self-deception, and how cognitive biases interfere with rigorous findings. Making things worse are a flawed science publishing process and "performance enhancing" statistical tools. Nuzzo describes promising ways to overcome these challenges, including blind data analysis.

2. Slow systematic reviews → New evidence methods → Controversy
Systematic reviews are important for evidence-based medicine, but some say they're unreliable and slow. Two groups attempting to improve this - not without controversy - are Trip (@TripDatabase) and Rapid Reviews.

3. Campus competitions → Real-world analytics → Attracting talent
Tech firms are finding ways to attract students to the analytics field. David Weldon writes in Information Management about the Adobe Analytics Challenge, where thousands of US university students compete using data from companies such as Condé Nast and Comcast to solve real-world business problems.

4. Discover regression → Solve important problem → Rock the world
Great read on how Gauss discovered statistical regression but, thinking his solution was trivial, didn't share it. Legendre published the method later, sparking one of the bigger disputes in the history of science. The Discovery of Statistical Regression - Gauss v. Legendre on Priceonomics.

5. Technical insights → Presentation skill → Advance your ideas
Explaining insights to your audience is as crucial as getting the technical details right. Present! is a new book with speaking tips for technology types unfamiliar with the spotlight. By Poornima Vijayashanker (@poornima) and Karen Catlin.

Tuesday, 17 November 2015

ROI from evidence-based government, milking data for cows, and flu shot benefits diminishing.

This week's 5 links on evidence-based decision making.

1. Evidence standards → Knowing what works → Pay for success
Susan Urahn says we've reached a Tipping Point on Evidence-Based Policymaking. She explains in @Governing that 24 US governments have directed $152M to programs with an estimated $521M return, using "an innovative and rigorous approach to policymaking: Create an inventory of currently funded programs; review which ones work based on research; use a customized benefit-cost model to compare programs based on their return on investment; and use the results to inform budget and policy decisions." (A toy sketch of the benefit-cost step appears at the end of this post.)

2. Sensors → Analytics → Farming profits
Precision dairy farming uses RFID tags, sensors, and analytics to track the health of cows. Brian T. Horowitz (@bthorowitz) writes on TechCrunch about how farmers are milking big data for insight. Literally. Thanks to @ShellySwanback.

3. Public acceptance → Annual flu shots → Weaker response?
Yikes. Now that flu shot programs are gaining acceptance, there's preliminary evidence suggesting that repeated annual shots can gradually reduce their effectiveness under some circumstances. Scientists at the Marshfield Clinic Research Foundation recently reported that "children who had been vaccinated annually over a number of years were more likely to contract the flu than kids who were only vaccinated in the season in which they were studied." Helen Branswell explains on STAT.

4. PCSK9 → Cholesterol control → Premium increases
Ezekiel J. Emanuel says in a New York Times Op-Ed I Am Paying for Your Expensive Medicine. PCSK9 inhibitors newly approved by the US FDA can effectively lower bad cholesterol, though data aren't definitive on whether this actually reduces heart attacks, strokes, and deaths from heart disease. This new drug category comes at a high cost. Based on projected usage levels, some analysts predict insurance premiums could rise by more than $100 for everyone in a given plan.

5. Opportunistic experiments → Efficient evidence → Informed family policy
New guidance details how researchers and program administrators can recognize opportunities for experiments and carry them out. This allows people to discover effects of planned initiatives, as opposed to analyzing interventions being developed specifically for research studies. Advancing Evidence-Based Decision Making: A Toolkit on Recognizing and Conducting Opportunistic Experiments in the Family Self-Sufficiency and Stability Policy Area.
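A follow-up on link #1: the "customized benefit-cost model" step boils down to comparing estimated returns per dollar spent. Here's a toy sketch with invented program names and dollar figures (not the model referenced in the article):

```python
# Toy benefit-cost comparison (invented numbers, not the customized model
# referenced in link #1): rank candidate programs by estimated return per dollar.

programs = {
    "Early literacy":   {"cost": 10_000_000, "estimated_benefit": 34_000_000},
    "Re-entry support": {"cost": 5_000_000,  "estimated_benefit": 9_000_000},
    "Home visiting":    {"cost": 7_000_000,  "estimated_benefit": 21_000_000},
}

def bc_ratio(p):
    return p["estimated_benefit"] / p["cost"]

for name, p in sorted(programs.items(), key=lambda kv: bc_ratio(kv[1]), reverse=True):
    print(f"{name}: ${p['cost']:,} in, ${p['estimated_benefit']:,} projected back "
          f"(ratio {bc_ratio(p):.1f})")
```

Of course, the ranking is only as good as the benefit estimates feeding it, which is why the inventory-and-review steps come first.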

Tuesday, 10 November 2015

Working with quantitative people, evidence-based management, and NFL ref bias.

This week's 5 links on evidence-based decision making.

1. Understand quantitative people → See what's possible → Succeed with analytics
Tom Davenport outlines an excellent list of 5 Essential Principles for Understanding Analytics. He explains in the Harvard Business Review that an essential ingredient for effective data use is managers' understanding of what is possible. To gain that understanding, it's really important that they establish a close working relationship with quantitative people.

2. Systematic review → Leverage research → Reduce waste
This sounds bad: One study found that published reports of trials cited fewer than 25% of previous similar trials. @PaulGlasziou and @iainchalmersTTi explain on @bmj_latest how systematic reviews can reduce waste in research. Thanks to @CebmOxford.

3. Organizational context → Fit for decision maker → Evidence-based management
A British Journal of Management article explores the role of ‘fit’ between the decision-maker and the organizational context in enabling an evidence-based process and develops insights for EBM theory and practice. Evidence-based Management in Practice: Opening up the Decision Process, Decision-maker and Context by April Wright et al. Thanks to @Rob_Briner.

4. Historical data → Statistical model → Prescriptive analytics
Prescriptive analytics is finally going mainstream for inventories, equipment status, and trades. Jose Morey explains on the Experfy blog that the key advance has been the use of statistical models with historical data.

5. Sports data → Study of bias → NFL evidence
Are NFL officials biased with their ball placement? Joey Faulkner at Gutterstats got his hands on a spreadsheet containing every NFL play run from 2000 to 2014 (500,000 in all). Thanks to @TreyCausey.

Bonus! In The Scientific Reason Why Bullets Are Bad for Presentations, Leslie Belknap recaps a 2014 study concluding that "Subjects who were exposed to a graphic representation of the strategy paid significantly more attention to, agreed more with, and better recalled the strategy than did subjects who saw a (textually identical) bulleted list version."