Evidence Soup
How to find, use, and explain evidence.

Wednesday, 20 June 2018

What makes us trust an analysis. + We're moving!


Evidence Soup is moving to a new URL. At DATA FOR DECIDING, I’m continuing my analysis of trends in evidence-based decision making. Hope to see you at my new blog. -Tracy Allison Altman

In Trustworthy Data Analysis, Roger Peng gives an elegant description of how he evaluates analytics presentations, and what factors influence his trust level. First, he methodically puts analytical work into three buckets: A (the material presented), B (work done but not presented), and C (analytical work not done).

“We can only observe A and B and need to speculate about C. The times when I most trust an analysis is when I believe that the C component is relatively small, and is essentially orthogonal to the other components of the equation (A and B). In other words, were one to actually do the things in the ‘Not Done’ bucket, they would have no influence on the overall results of the analysis.”

Peng candidly explains that his response “tends to differ based on who is presenting and my confidence in their ability to execute a good analysis.... If the presenter is someone I trust and have confidence in, then seeing A and part of B may be sufficient and we will likely focus just on the contents in A. In part, this requires my trust in their judgment in deciding what are the relevant aspects to present.”

Is this bias, or just trust-building? When our assessment differs based on who is presenting, Peng acknowledges this perhaps introduces “all kinds of inappropriate biases.” Familiarity with a trusted presenter helps decision makers communicate with them efficiently (speaking the same language, etc.). Which could, of course, erupt into champion bias. But rightly or wrongly, the presenter’s track record is going to be a factor. And this can work both ways: Most of us have likely presented (or witnessed) a flawed finding that cost someone credibility - and winning that credibility back is rather difficult. Thanks to David Napoli (@biff_bruise).



2. Fight right → Better solutions
“All teams that are trying to address complex issues have to find ways to employ all of their diverse, even conflicting contributions. But how do you fight in a way that is productive? There are tactics your team can employ to encourage fair — and useful — fighting.” In Why teams should argue, strategy+business tells us that strong teams include diverse perspectives, and that healthy working relationships and successful outcomes hinge on honest communication. One tactic is to “sit patiently with the reality of our differences, without insisting that they be resolved”. (Ed. note: This is the 2nd time recently that management advice and marital advice sound the same.)


Events Calendar
Bringing the funny to tech talks: Explaining complex things with humor. Denver, August 13, 2018 - no charge. Meetup with PitchLab, Domain Driven Design, and Papers We Love - Denver.

Decision Analysis Affinity Group (DAAG) annual conference, Denver Colorado, March 5-8, 2019.

Data Visualization Summit, San Francisco, April 10-11, 2019. Topics will include The Impact of Data Viz on Decision Making.

Photo credit: CloudVisual on Unsplash

Wednesday, 06 June 2018

Consider all the evidence + Evidence Soup is moving!

Photo credit: Alex Ivashenko on Unsplash

Evidence Soup is moving! Only the URL is changing: My analysis of trends in evidence-based decision making continues at DATA FOR DECIDING. Through Evidence Soup, I've met amazing people, and am forever grateful. Looking forward to seeing you at my new blog. -Tracy Allison Altman

Weigh all the evidence. It’s easy to commit to evidence-based management in principle, but not so easy to define what the ‘best available evidence’ is - and then apply it to a high-quality decision. Edward Walker writes on LinkedIn that “Being evidence-based does not mean privileging one class of evidence over all others. Regardless of its source, all evidence that is judged to be trustworthy and relevant, should be considered as part of the decision-making process.”

Hierarchies, metrics, and similar simplifying mechanisms are valuable unless they become weaponized, and people use them to the exclusion of experience, logic, and ideas.

DATA FOR DECIDING: Here’s a cautionary tale against relying too heavily on a set of KPIs, or a standardized dashboard - unless you’re automating routine and repeatable operational decisions, using routine and repeatably reliable data. Most management actions require considering fuzzy evidence and following nuanced methods to ensure an unbiased decision. The development of evidence-based management processes compares to the early development of evidence-based medicine: Hierarchies of evidence (Level 1, A-B-C, etc.) were established to weed out low-quality findings, but it wasn’t long before we had 1a, 1b, 1c and so forth.

Hierarchies, metrics, and similar simplifying mechanisms are valuable guides unless they become weaponized, and people use them to the exclusion of experience, logic, and ideas. Weighing evidence quality is never as simple as identifying how it was collected (such as randomized trial, observation, survey); decision makers, even non-experts, must understand the basics of available analytical methods to fully evaluate the explanatory value of findings.

Algorithms are pretty good decision makers. “Eliminating bias... requires constant vigilance on the part of not only data scientists but up and down the corporate ranks.” In an insightful Information Week commentary, James Kobielus (@jameskobielus) considers the importance of Debiasing Our Statistical Algorithms Down to Their Roots.

“Rest assured that AI, machine learning (ML), and other statistical algorithms are not inherently biased against the human race.”

“Rest assured that AI, machine learning (ML), and other statistical algorithms are not inherently biased against the human race. Whatever their limitations, most have a bias in favor of action. Algorithms tend to make decisions faster than human decision makers, do it more efficiently and consistently, consider far more relevant data and variables, and leave a more comprehensive audit trail. For their part, human decision makers often have their own biases, such as giving undue weight to their personal experiences, basing decisions on comfortable but unwarranted assumptions, and only considering data that supports preconceived beliefs.”

DATA FOR DECIDING: As everyone rushes to debias, Kobielus cautions against overstating the importance of bias reduction, “only one of the outcomes that algorithm-driven decision-making processes are designed to promote. Algorithmic processes may also need to ensure that other outcomes — such as boosting speed, efficiency, throughput, profitability, quality, and the like — are achieved as well, without unduly compromising fairness. Trade-offs among conflicting outcomes must always be addressed openly.”

Tuesday, 22 May 2018

Debiasing your company. And, placebos for health apps?


Debiasing is hard work, requiring transparency, honest communication - and occasional stomach upset. But it gets easier and can become a habit, especially if people have a systematic way of checking their decisions for bias. In a recent podcast, Nobel laureate Richard Thaler explains several practical ways to debias decisions.

First, know your process. "You can imagine all kinds of good decisions taken in 2005 were evaluated five years later as stupid. They weren’t stupid. They were unlucky. So any company that can learn to distinguish between bad decisions and bad outcomes has a leg up."

A must-have for repeatable, high-quality decision processes. Under "Write Stuff Down," Thaler encourages teams to avoid hindsight bias by memorializing their assumptions and evaluation of a decision - preventing someone from later claiming "I never liked that idea." (I can't help but wonder if this write-it-down approach would make the best, or possibly the worst, marital advice ever. But I digress.)

“Any company that can learn to distinguish between bad decisions and bad outcomes has a leg up.”

Choice architecture relates to debiasing. It "can apply in any company. How are we framing the options for people? How is that influencing the choices that they make?" But to architect a set of effective choices - for ourselves or others - we must first correct for our own cognitive biases, and truly understand how/why people choose.

Diversity of hiring + diversity of thought = Better decisions. “[S]trong leaders, who are self-confident and secure, who are comfortable in their skin and their place, will welcome alternative points of view. The insecure ones won’t, and it’s a recipe for disaster. You want to be in an organization where somebody will tell the boss before the boss is about to do something stupid.” [Podcast transcript: McKinsey Quarterly, May 2018]


Can we rigorously evaluate health app evidence? In Slate, Jessica Lipschitz and John Torous argue that we need a sugar-pill equivalent for digital health. “Without placebos, we can’t know the actual impact of a [health app] because we have not controlled for the known impact of user expectations on outcomes. This is well illustrated in a recent review of 18 randomized controlled trials evaluating the effectiveness of smartphone apps for depression.”

When an app was compared with a ‘waitlist’ control condition, the app seemed to reduce depressive symptoms. But when apps were compared with active controls — such as journaling — their comparative effectiveness fell 61%. “This is consistent with what would be expected based on the placebo effect.” Via David Napoli's informative Nuzzel newsletter. [Slate Magazine: Why It's So Hard to Figure Out Whether Health Apps Work]
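
To make that 61% figure concrete, here is a minimal sketch - with made-up effect sizes, not the review's actual numbers - of how an app's apparent benefit shrinks when the comparison switches from a waitlist to an active control:

```python
# Hypothetical numbers for illustration only -- not the review's actual data.
effect_vs_waitlist = 0.50   # standardized benefit of the app vs. doing nothing
reported_drop = 0.61        # the ~61% reduction described in the article

# Against an active control (e.g., journaling), expectation/placebo effects
# are shared by both groups, so the app's *relative* benefit shrinks.
effect_vs_active = effect_vs_waitlist * (1 - reported_drop)

print(f"App vs. waitlist control: {effect_vs_waitlist:.2f}")
print(f"App vs. active control:   {effect_vs_active:.2f}")
```

The app hasn't changed; only the comparator has - which is the authors' point about why digital health needs its own placebo.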


Photo credits: Samuel Zeller and Thought Catalog on Unsplash

 

Monday, 14 May 2018

Building a repeatable, evidence-based decision process.

Decision-making matrix by Tomasz Tunguz
How we decide is no less important than the evidence we use to decide. People are recognizing this and creating innovative ways to blend what, why, and how into decision processes.

1. Quality decision process → Predictable outcomes
After the Golden Rule, perhaps the most important management lesson is learning to evaluate the quality of a decision process separately from the outcome. Tomasz Tunguz (@ttunguz) reminds us in a great post about Annie Duke, a professional poker player: “Don’t be so hard on yourself when things go badly and don’t be so proud of yourself when they go well.... The wisdom in Duke’s advice is to focus on the process, because eventually the right process will lead to great outcomes.”

Example of a misguided “I'm feeling lucky” response: Running a crummy oil company and thinking you’re a genius, even though profits arise from unexpected $80-per-barrel oil prices.

2. Reinvent the meeting → Better decisions
Step back and examine your meeting style. Are you giving all the evidence a voice, or relying on the same old presentation theater? Kahlil Smith writes in strategy+business (@stratandbiz), “If I catch myself agreeing with everything a dominant, charismatic person is saying in a meeting, then I will privately ask a third person (not the presenter or the loudest person) to repeat the information, shortly after the meeting, to see if I still agree.” Other techniques include submitting ideas anonymously, considering multiple solutions and scenarios, and holding a decision pre-mortem with a diverse group of thinkers. More in Why Our Brains Fall for False Expertise, and How to Stop It.

3. How to Teach and Apply Evidence-Based Management. The Center for Evidence-Based Management (CEBMa) Annual Meeting is scheduled for August 9 in Chicago. There's no fee to attend.

Tuesday, 13 March 2018

Biased instructor response → Students shut out

Photo credit: Benjamin Dada on Unsplash

Definitely not awesome. Stanford’s Center for Education Policy Analysis reports Bias in Online Classes: Evidence from a Field Experiment. “We find that instructors are 94% more likely to respond to forum posts by white male students. In contrast, we do not find general evidence of biases in student responses…. We discuss the implications of our findings for our understanding of social identity dynamics in classrooms and the design of equitable online learning environments.”

“Genius is evenly distributed by zip code. Opportunity and access are not.” -Mitch Kapor

One simple solution – sometimes deployed for decision debiasing – is to make interactions anonymous. However, applying nudge concepts, a “more sophisticated approach would be to structure online environments that guide instructors to engage with students in more equitable ways (e.g., dashboards that provide real-time feedback on the characteristics of their course engagement).”
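
As a sketch of what such a dashboard might compute - hypothetical data and field names, not the study's code - the core metric could be as simple as instructor response rates broken down by student group:

```python
from collections import defaultdict

# Hypothetical forum records: (student_group, instructor_replied)
posts = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def response_rates(posts):
    """Share of forum posts receiving an instructor reply, per student group."""
    tally = defaultdict(lambda: [0, 0])          # group -> [replies, total posts]
    for group, replied in posts:
        tally[group][0] += int(replied)
        tally[group][1] += 1
    return {group: replies / total for group, (replies, total) in tally.items()}

for group, rate in sorted(response_rates(posts).items()):
    print(f"{group}: instructor replied to {rate:.0%} of posts")
```

Surfacing that breakdown in real time is the nudge: instructors see their own engagement pattern before it hardens into the disparity the study measured.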

Prescribe antidepressants → Treat major depression

An impressive network meta-analysis – comparing drug effects across numerous studies – shows “All antidepressants were more efficacious than placebo in adults with major depressive disorder. Smaller differences between active drugs were found when placebo-controlled trials were included in the analysis…. These results should serve evidence-based practice and inform patients, physicians, guideline developers, and policy makers on the relative merits of the different antidepressants.” Findings are in the Lancet.

Thursday, 08 March 2018

Redefining ‘good data science’ to include communication.

Data science revised skillset on VentureBeat by Emma Walker

Emma Walker explains on VentureBeat The one critical skill many data scientists are missing. She describes the challenge of working with product people, sales teams, and customers: Her experience made her “appreciate how vital communication is as a data scientist. I can learn about as many algorithms or cool new tools as I want, but if I can’t explain why I might want to use them to anyone, then it’s a complete waste of my time and theirs.”

After school, “you go from a situation where you are surrounded by peers who are also experts in your field, or who you can easily assume have a reasonable background and can keep up with you, to a situation where you might be the only expert in the room and expected to explain complex topics to those with little or no scientific background.... As a new data scientist, or even a more experienced one, how are you supposed to predict what those strange creatures in sales or marketing might want to know? Even more importantly, how do you interact with external clients, whose logic and thought processes may not match your own?”

How do you interact with external clients, whose logic and thought processes may not match your own?

Sounds like the typical “no-brainer”: Obvious in retrospect. Walker reminds us of the now-classic diagram by Drew Conway illustrating the skill groups you need to be a data scientist. However, something is “missing from this picture — a vital skill that comes in many forms and needs constant practice and adaption to the situation at hand: communication. This isn’t just a ‘soft’ or ‘secondary’ skill that’s nice to have. It’s a must-have for good data scientists.” And, I would add, good professionals of every stripe.

Tuesday, 06 March 2018

Biased evidence skews poverty policy.

Decision bias: food-desert map

In Biased Ways We Look at Poverty, Adam Ozimek reviews new evidence suggesting that food deserts aren’t the problem, behavior is. His Modeled Behavior (Forbes) piece asks why the food desert theory got so much play, claiming “I would argue it reflects liberal bias when it comes to understanding poverty.”

So it seems this poverty-diet debate is about linking cause with effect - always dangerous, bias-prone territory. And citizen data scientists, academics, and everyone in between are at risk of mapping objective data (food store availability vs. income) and subjectively attributing a cause for poor habits.

The study shows very convincingly that the difference in healthy eating is about behavior and demand, not supply.

Ozimek looks at the study The Geography of Poverty and Nutrition: Food Deserts and Food Choices Across the United States, published by the National Bureau of Economic Research. The authors found that differences in healthy eating aren’t explained by prices, concluding that “after excluding fresh produce, healthy foods are actually about eight percent less expensive than unhealthy foods.” Also, people who moved from food deserts to locations with better options continued to make similar dietary choices.

Food for thought, indeed. Rather than following behavioral explanations, Ozimek believes liberal thinking supported the food desert concept “because supply-side differences are more complimentary to poor people, and liberals are biased towards theories of poverty that are complimentary to those in poverty.” Meanwhile, conservatives “are biased towards viewing the behavioral and cultural factors that cause poverty as something that we can’t do anything about.”

Thursday, 01 March 2018

Why don't executives trust analytics?


Last year I spoke with the CEO of a smallish healthcare firm. He had not embraced sophisticated analytics or machine-made decision making, because he had no comfort level with ‘what information he could believe’. He did, however, trust the CFO’s recommendations. Evidently, these sentiments are widely shared.

A new KPMG report reveals a substantial digital trust gap inside organizations: “Just 35% of IT decision-makers have a high level of trust in their organization’s analytics”.

Blended decisions by human and machine are forcing managers to ask Who is responsible when analytics go wrong? Of surveyed executives, 19% said the CIO, 13% said the Chief Data Officer, and 7% said C-level executive decision makers. “Our survey of senior executives is telling us that there is a tendency to absolve the core business for decisions made with machines,” said Brad Fisher, US Data & Analytics Leader with KPMG in the US. “This is understandable given technology’s legacy as a support service.... However, it’s our view that many IT professionals do not have the domain knowledge or the overall capacity required to ensure trust in D&A [data and analytics]. We believe the responsibility lies with the C-suite.... The governance of machines must become a core part of governance for the whole organization.”

Tuesday, 06 February 2018

Now cognitive bias is poisoning our algorithms.

Image: Tversky & Kahneman - cover slide from my Papers We Love talk

Can we humans better recognize our cognitive biases before we turn the machines loose, fully automating them? Here’s a sample of recent caveats about decision-making fails: While improving some lives, we’re making others worse.

Yikes. From HBR, Hiring algorithms are not neutral. If you set up your resume-screening algorithm to duplicate a particular employee or team, you’re probably breaking the rules of ethics and the law, too. Our biases are well established, yet we continue to repeat our mistakes.
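
As a toy illustration of the HBR point - an entirely hypothetical screener, not any real product - scoring resumes by similarity to one "model employee" quietly penalizes everything that makes a candidate different, including attributes that are irrelevant or that proxy for protected ones:

```python
# Entirely hypothetical "clone the star employee" screener -- a sketch of the
# anti-pattern, not a real hiring tool.
MODEL_EMPLOYEE = {"school": "State U", "employment_gap_years": 0, "hobby": "golf"}

def similarity_score(candidate: dict) -> float:
    """Fraction of features matching the model employee. High scores reward
    sameness, so bias in past hires is reproduced rather than corrected."""
    matches = sum(candidate.get(key) == value for key, value in MODEL_EMPLOYEE.items())
    return matches / len(MODEL_EMPLOYEE)

applicants = [
    {"school": "State U", "employment_gap_years": 0, "hobby": "golf"},
    {"school": "City College", "employment_gap_years": 2, "hobby": "chess"},
]
for applicant in applicants:
    print(similarity_score(applicant), applicant)
# Note: features like employment gaps or hobbies can proxy for caregiving,
# class, or gender -- exactly the kind of encoded bias the article warns about.
```

Swapping in a fancier model doesn't help if the training target is still "people we already hired."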

Amos Tversky and Daniel Kahneman brilliantly challenged traditional economic theory while producing evidence of our decision bias. Recently I gave a Papers We Love talk on behavioral economics and bias in software design. T&K’s early research famously identified three key, potentially flawed heuristics (mental shortcuts) commonly employed for decision-making: Representativeness, availability, and anchoring/adjustment. The implications for today’s software development must not be overlooked.

Algorithms might be making the poor even less equal. In Automating Inequality, Virginia Eubanks argues that the poor “are the testing ground for new technology that increases inequality.” She argues that our “moralistic view of poverty... has been wrapped into today’s automated and predictive decision-making tools. These algorithms can make it harder for people to get services while forcing them to deal with an invasive process of personal data collection.” As examples, she profiles a Medicaid application process in Indiana, homeless services in Los Angeles, and child protective services in Pittsburgh.

Prison-sentencing algorithms are also feeling some heat. “Imagine you’re a judge, and you have a commercial piece of software that says we have big data, and it says this person is high risk... now imagine I tell you I asked 10 people online the same question, and this is what they said. You’d weigh those things differently.” [Wired article] Dartmouth researchers claim that a popular risk-assessment algorithm predicts recidivism about as well as a random online poll. Science Friday also covered similar issues with crime sentencing algorithms.