Evidence Soup
How to find, use, and explain evidence.


Wednesday, 20 June 2018

What makes us trust an analysis. + We're moving!


Evidence Soup is moving to a new URL. At DATA FOR DECIDING, I’m continuing my analysis of trends in evidence-based decision making. Hope to see you at my new blog. -Tracy Allison Altman

In Trustworthy Data Analysis, Roger Peng gives an elegant description of how he evaluates analytics presentations, and what factors influence his trust level. First, he methodically puts analytical work into three buckets: A (the material presented), B (work done but not presented), and C (analytical work not done).

“We can only observe A and B and need to speculate about C. The times when I most trust an analysis is when I believe that the C component is relatively small, and is essentially orthogonal to the other components of the equation (A and B). In other words, were one to actually do the things in the ‘Not Done’ bucket, they would have no influence on the overall results of the analysis.”

Peng candidly explains that his response “tends to differ based on who is presenting and my confidence in their ability to execute a good analysis.... If the presenter is someone I trust and have confidence in, then seeing A and part of B may be sufficient and we will likely focus just on the contents in A. In part, this requires my trust in their judgment in deciding what are the relevant aspects to present.”

Is this bias, or just trust-building? When our assessment differs based on who is presenting, Peng acknowledges this perhaps introduces “all kinds of inappropriate biases.” Familiarity with a trusted presenter helps decision makers communicate with them efficiently (speaking the same language, etc.), which could, of course, erupt into champion bias. But rightly or wrongly, the presenter's track record is going to be a factor. And it works both ways: most of us have presented (or witnessed) a flawed finding that cost someone credibility - and winning that credibility back is rather difficult. Thanks to Dave Napoli (@biff_bruise).



2. Fight right → Better solutions
“All teams that are trying to address complex issues have to find ways to employ all of their diverse, even conflicting contributions. But how do you fight in a way that is productive? There are tactics your team can employ to encourage fair — and useful — fighting.” In Why teams should argue, strategy+business tells us that strong teams include diverse perspectives, and that healthy working relationships and successful outcomes hinge on honest communication. One tactic is to “sit patiently with the reality of our differences, without insisting that they be resolved”. (Ed. note: This is the second time recently that management advice and marital advice have sounded the same.)


Events Calendar
Bringing the funny to tech talks: Explaining complex things with humor. Denver, August 13, 2018 - no charge. Meetup with PitchLab, Domain Driven Design, and Papers We Love - Denver.

Decision Analysis Affinity Group (DAAG) annual conference, Denver, Colorado, March 5-8, 2019.

Data Visualization Summit, San Francisco, April 10-11, 2019. Topics will include The Impact of Data Viz on Decision Making.

Photo credit: CloudVisual on Unsplash

Wednesday, 06 June 2018

Consider all the evidence + Evidence Soup is moving!


Evidence Soup is moving! Only the URL is changing: My analysis of trends in evidence-based decision making continues at DATA FOR DECIDING. Through Evidence Soup, I've met amazing people, and am forever grateful. Looking forward to seeing you at my new blog. -Tracy Allison Altman

Weigh all the evidence. It’s easy to commit to evidence-based management in principle, but much harder to define what counts as the ‘best available evidence’ - and then apply it to a high-quality decision. Edward Walker writes on LinkedIn that “Being evidence-based does not mean privileging one class of evidence over all others. Regardless of its source, all evidence that is judged to be trustworthy and relevant, should be considered as part of the decision-making process.”

Hierarchies, metrics, and similar simplifying mechanisms are valuable unless they become weaponized, and people use them to the exclusion of experience, logic, and ideas.

DATA FOR DECIDING: Here’s a cautionary tale against relying too heavily on a set of KPIs, or a standardized dashboard - unless you’re automating routine and repeatable operational decisions, using routine and repeatably reliable data. Most management actions require considering fuzzy evidence and following nuanced methods to ensure an unbiased decision. The development of evidence-based management processes compares to the early development of evidence-based medicine: Hierarchies of evidence (Level 1, A-B-C, etc.) were established to weed out low-quality findings, but it wasn’t long before we had 1a, 1b, 1c and so forth.

Hierarchies, metrics, and similar simplifying mechanisms are valuable guides unless they become weaponized, and people use them to the exclusion of experience, logic, and ideas. Weighing evidence quality is never as simple as identifying how it was collected (such as randomized trial, observation, survey); decision makers, even non-experts, must understand the basics of available analytical methods to fully evaluate the explanatory value of findings.

Algorithms are pretty good decision makers. “Eliminating bias... requires constant vigilance on the part of not only data scientists but up and down the corporate ranks.” In an insightful Information Week commentary, James Kobielus (@jameskobielus) considers the importance of Debiasing Our Statistical Algorithms Down to Their Roots.

“Rest assured that AI, machine learning (ML), and other statistical algorithms are not inherently biased against the human race.”

“Rest assured that AI, machine learning (ML), and other statistical algorithms are not inherently biased against the human race. Whatever their limitations, most have a bias in favor of action. Algorithms tend to make decisions faster than human decision makers, do it more efficiently and consistently, consider far more relevant data and variables, and leave a more comprehensive audit trail. For their part, human decision makers often have their own biases, such as giving undue weight to their personal experiences, basing decisions on comfortable but unwarranted assumptions, and only considering data that supports preconceived beliefs.”

DATA FOR DECIDING: As everyone rushes to debias, Kobielus cautions against overstating the importance of bias reduction, “only one of the outcomes that algorithm-driven decision-making processes are designed to promote. Algorithmic processes may also need to ensure that other outcomes — such as boosting speed, efficiency, throughput, profitability, quality, and the like — are achieved as well, without unduly compromising fairness. Trade-offs among conflicting outcomes must always be addressed openly.”

Monday, 14 May 2018

Building a repeatable, evidence-based decision process.

Decision-making matrix by Tomasz Tunguz
How we decide is no less important than the evidence we use to decide. People are recognizing this and creating innovative ways to blend what, why, and how into decision processes.

1. Quality decision process → Predictable outcomes
After the Golden Rule, perhaps the most important management lesson is learning to evaluate the quality of a decision process separately from the outcome. Tomasz Tunguz (@ttunguz) reminds us in a great post about Annie Duke, a professional poker player: “Don’t be so hard on yourself when things go badly and don’t be so proud of yourself when they go well.... The wisdom in Duke’s advice is to focus on the process, because eventually the right process will lead to great outcomes.”

Example of a misguided “I'm feeling lucky” response: running a crummy oil company and thinking you’re a genius, even though profits arise from unexpected $80/barrel oil prices.

2. Reinvent the meeting → Better decisions
Step back and examine your meeting style. Are you giving all the evidence a voice, or relying on the same old presentation theater? Kahlil Smith writes in strategy+business (@stratandbiz) “If I catch myself agreeing with everything a dominant, charismatic person is saying in a meeting, then I will privately ask a third person (not the presenter or the loudest person) to repeat the information, shortly after the meeting, to see if I still agree.” Other techniques include submitting ideas anonymously, considering multiple solutions and scenarios, and a decision pre-mortem with a diverse group of thinkers. More in Why Our Brains Fall for False Expertise, and How to Stop It.

3. How to Teach and Apply Evidence-Based Management. The Center for Evidence-Based Management (CEBMa) Annual Meeting is scheduled for August 9 in Chicago. There's no fee to attend.

Tuesday, 13 March 2018

Biased instructor response → Students shut out


Definitely not awesome. Stanford’s Center for Education Policy Analysis reports Bias in Online Classes: Evidence from a Field Experiment. “We find that instructors are 94% more likely to respond to forum posts by white male students. In contrast, we do not find general evidence of biases in student responses…. We discuss the implications of our findings for our understanding of social identity dynamics in classrooms and the design of equitable online learning environments.”

“Genius is evenly distributed by zip code. Opportunity and access are not.” -Mitch Kapor

One simple solution – sometimes deployed for decision debiasing – is to make interactions anonymous. However, applying nudge concepts, a “more sophisticated approach would be to structure online environments that guide instructors to engage with students in more equitable ways (e.g., dashboards that provide real-time feedback on the characteristics of their course engagement).”

Wednesday, 09 August 2017

How evidence can guide, not replace, human decisions.

Bad Choices book cover

1. Underwriters + algorithms = Best of both worlds.
We hear so much about machine automation replacing humans. But several promising applications are designed to supplement complex human knowledge and guide decisions, not replace them: Think primary care physicians, policy makers, or underwriters. Leslie Scism writes in the Wall Street Journal that AIG “pairs its models with its underwriters. The approach reflects the company’s belief that human judgment is still needed in sizing up most of the midsize to large businesses that it insures.” See Insurance: Where Humans Still Rule Over Machines [paywall] or the podcast Insurance Rates Set by ... Machine Intelligence?

Who wants to be called a flat liner? Does this setup compel people to make changes to algorithmic findings - necessary or not - so their value/contributions are visible? Scism says “AIG even has a nickname for underwriters who keep the same price as the model every time: ‘flat liners.’” This observation is consistent with research we covered last week, showing that people are more comfortable with algorithms they can tweak to reflect their own methods.

AIG “analysts and executives say algorithms work well for standardized policies, such as for homes, cars and small businesses. Data scientists can feed millions of claims into computers to find patterns, and the risks are similar enough that a premium rate spit out by the model can be trusted.” On the human side, analytics teams work with AIG decision makers to foster more methodical, evidence-based decision making, as described in the excellent Harvard Business Review piece How AIG Moved Toward Evidence-Based Decision Making.


2. Another gem from Ali Almossawi.
An Illustrated Book of Bad Arguments was a grass-roots project that blossomed into a stellar book about logical fallacies and barriers to successful, evidence-based decisions. Now Ali Almossawi brings us Bad Choices: How Algorithms Can Help You Think Smarter and Live Happier.

It’s a superb example of explaining complex concepts in simple language. For instance, Chapter 7 on ‘Update that Status’ discusses how crafting a succinct Tweet draws on ideas from data compression. Granted, not everyone wants to understand algorithms - but Bad Choices illustrates useful ways to think methodically, and sort through evidence to solve problems more creatively. From the publisher: “With Bad Choices, Ali Almossawi presents twelve scenes from everyday life that help demonstrate and demystify the fundamental algorithms that drive computer science, bringing these seemingly elusive concepts into the understandable realms of the everyday.”


3. Value guidelines adjusted for novel treatment of rare disease.
Like it or not, oftentimes the assigned “value” of a health treatment depends on how much it costs, compared to how much benefit it provides. Healthcare, time, and money are scarce resources, and payers must balance effectiveness, ethics, and equity.

Guidelines for assessing value are useful when comparing alternative treatments for common diseases. But they fail when considering an emerging treatment or a small patient population suffering from a rare condition. ICER, the Institute for Clinical and Economic Review, has developed a value assessment framework that’s being widely adopted. However, acknowledging the need for more flexibility, ICER has proposed a Value Assessment Framework for Treatments That Represent a Potential Major Advance for Serious Ultra-Rare Conditions.

In a request for comments, ICER recognizes the challenges of generating evidence for rare treatments, including the difficulty of conducting randomized controlled trials, and the need to validate surrogate outcome measures. “They intend to calculate a value-based price benchmark for these treatments using the standard range from $100,000 to $150,000 per QALY [quality adjusted life year], but will [acknowledge] that decision-makers... often give special weighting to other benefits and to contextual considerations that lead to coverage and funding decisions at higher prices, and thus higher cost-effectiveness ratios, than applied to decisions about other treatments.”
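
For readers who want the arithmetic spelled out, here is a rough sketch (Python, with invented numbers - an illustration of standard cost-per-QALY logic, not ICER's actual model) of how a cost-effectiveness ratio and a value-based price benchmark relate:

```python
# Hypothetical numbers for illustration only - not ICER data.

def cost_per_qaly(incremental_cost, incremental_qalys):
    """Incremental cost-effectiveness ratio: extra dollars spent per extra QALY gained."""
    return incremental_cost / incremental_qalys

def value_based_price(current_price, incremental_cost, incremental_qalys, threshold):
    """Price at which the treatment's ratio would exactly hit the chosen threshold.
    A price cut reduces the incremental cost dollar-for-dollar."""
    return current_price - (incremental_cost - threshold * incremental_qalys)

incremental_cost = 750_000   # extra lifetime cost vs. standard care, in dollars
incremental_qalys = 4.0      # extra quality-adjusted life years gained
current_price = 1_000_000    # hypothetical treatment price

print(f"Cost-effectiveness: ${cost_per_qaly(incremental_cost, incremental_qalys):,.0f} per QALY")

for threshold in (100_000, 150_000):   # the standard benchmark range quoted above
    price = value_based_price(current_price, incremental_cost, incremental_qalys, threshold)
    print(f"Value-based price at ${threshold:,}/QALY: ${price:,.0f}")
```

A decision maker who gives "special weighting" to other benefits and contextual considerations is, in effect, accepting a ratio above that benchmark range.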

Tuesday, 03 January 2017

Valuing patient perspective, moneyball for tenure, visualizing education impacts.

1. Formalized decision process → Conflict about criteria

It's usually a good idea to establish a methodology for making repeatable, complex decisions. But inevitably you'll have to allow wiggle room for the unquantifiable or the unexpected; leaving this gray area exposes you to criticism that it's not a rigorous methodology after all. Other sources of criticism are the weighting and the calculations applied in your decision formulas - and the extent of transparency provided.

How do you set priorities? In healthcare, how do you decide who to treat, at what cost? To formalize the process of choosing among options, several groups have created so-called value frameworks for assessing medical treatments - though not without criticism. Recently Ugly Research co-authored a post summarizing industry reaction to the ICER value framework developed by the Institute for Clinical and Economic Review. Incorporation of patient preferences (or lack thereof) is a hot topic of discussion.

To address this proactively, Faster Cures has led creation of the Patient Perspective Value Framework to inform other frameworks about what's important to patients (cost? impact on daily life? outcomes?). They're asking for comments on their draft report; comment using this questionnaire.

2. Analytics → Better tenure decisions
New analysis in the MIT Sloan Management Review observes "Using analytics to improve hiring decisions has transformed industries from baseball to investment banking. So why are tenure decisions for professors still made the old-fashioned way?"

Ironically, academia often proves to be one of the last fields to adopt change. Erik Brynjolfsson and John Silberholz explain that "Tenure decisions for the scholars of computer science, economics, and statistics — the very pioneers of quantitative metrics and predictive analytics — are often insulated from these tools." The authors say "data-driven models can significantly improve decisions for academic and financial committees. In fact, the scholars recommended for tenure by our model had better future research records, on average, than those who were actually granted tenure by the tenure committees at top institutions."


3. Visuals of research findings → Useful evidence
The UK Sutton Trust-EEF Teaching and Learning Toolkit is an accessible summary of educational research. The purpose is to help teachers and schools more easily decide how to apply resources to improve outcomes for disadvantaged students. Research findings on selected topics are nicely visualized in terms of implementation cost, strength of supporting evidence, and the average impact on student attainment.

4. Absence of patterns → File-drawer problem
We're only human. We want to see patterns, and are often guilty of 'seeing' patterns that really aren't there. So it's no surprise we're uninterested in research that lacks significance, and disregard findings revealing no discernible pattern. When we stash away projects like this, it's called the file-drawer problem: the unpublished null results could be valuable to others who might otherwise pursue a similar line of investigation. But Data Colada says the file-drawer problem is unfixable, and that’s OK.

5. Optimal stopping algorithm → Practical advice?
Writing about Algorithms to Live By, Stewart Brand describes an innovative way to help us make complex decisions. "Deciding when to stop your quest for the ideal apartment, or ideal spouse, depends entirely on how long you expect to be looking.... [Y]ou keep looking and keep finding new bests, though ever less frequently, and you start to wonder if maybe you refused the very best you’ll ever find. And the search is wearing you down. When should you take the leap and look no further?"

Optimal Stopping is a mathematical concept for optimizing a choice, such as making the right hire or landing the right job. Brand says "The answer from computer science is precise: 37% of the way through your search period." The question is, how can people translate this concept into practical steps guiding real decisions? And how can we apply it while we live with the consequences?
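
For the curious, here is a minimal simulation sketch (Python, using the textbook "secretary problem" idealization - distinct candidates arriving in random order, no going back - rather than anything from the book itself) showing why stopping roughly 37% of the way through works well:

```python
import random

def run_search(n, skip_fraction):
    """One simulated search: look at the first chunk without committing, then
    accept the first candidate who beats everyone seen so far.
    Returns True if that turns out to be the single best candidate."""
    candidates = list(range(n))            # higher number = better candidate
    random.shuffle(candidates)
    cutoff = int(n * skip_fraction)
    best_seen = max(candidates[:cutoff], default=-1)
    for quality in candidates[cutoff:]:
        if quality > best_seen:
            return quality == n - 1        # committed: did we get the overall best?
    return candidates[-1] == n - 1         # never committed: stuck with the last one

def success_rate(skip_fraction, n=100, trials=20_000):
    return sum(run_search(n, skip_fraction) for _ in range(trials)) / trials

for frac in (0.10, 0.37, 0.80):
    print(f"Skip the first {frac:.0%}: best candidate found {success_rate(frac):.0%} of the time")
# Skipping ~37% wins roughly 37% of the time; shorter or longer look-around phases do worse.
```

The hard part, as noted above, is mapping "candidates in random order, with no going back" onto real apartments, hires, or spouses.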

Tuesday, 20 December 2016

Choices, policy, and evidence-based investment.

Bad Arguments book cover

1. Bad Arguments → Bad Choices
Great news. There will be a follow-on to the excellent Bad Arguments book by @alialmossawi. The book of Bad Choices will be released this April by major publishers. You can preorder now.

2. Evidence-based decisions → Effective policy outcomes
The conservative think tank Heritage Foundation is advocating for evidence-based decisions in the Trump administration. Their recommendations include resurrecting PART (the Program Assessment Rating Tool) from the George W. Bush era, which ranked federal programs according to effectiveness. "Blueprint for a New Administration offers specific steps that the new President and the top officers of all 15 cabinet-level departments and six key executive agencies can take to implement the long-term policy visions reflected in Blueprint for Reform." Read a nice summary here by Patrick Lester at the Social Innovation Research Center (@SIRC_tweets).


3. Pioneer drugs → Investment value
"Why do pharma firms sometimes prioritize 'me-too' R&D projects over high-risk, high-reward 'pioneer' programs?" asks Frank David at Pharmagellan (@Frank_S_David). "[M]any pharma financial models assume first-in-class drugs will gain commercial traction more slowly than 'followers.' The problem is that when a drug’s projected revenues are delayed in a financial forecast, this lowers its net present value – which can torpedo the already tenuous investment case for a risky, innovative R&D program." Their research suggests that pioneer drugs see peak sales around 6 years, similar to followers: "Our finding that pioneer drugs are adopted no more slowly than me-too ones could help level the economic playing field and make riskier, but often higher-impact, R&D programs more attractive to executives and investors."

Details appear in the Nature Reviews article, Drug launch curves in the modern era. Pharmagellan will soon release a book on biotech financial modeling.
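
To see the mechanics behind that argument, here is a toy net-present-value sketch (Python, with made-up cash flows - not Pharmagellan's model): pushing the same revenues out by two years shrinks the NPV, which is exactly what penalizes "slow ramp" assumptions for pioneer drugs.

```python
def npv(cash_flows, discount_rate=0.10):
    """Net present value of yearly cash flows, discounted back to year 0."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

# Hypothetical program (figures in $M): three years of R&D spend, then a revenue ramp.
rnd = [-100, -150, -250]
fast_ramp = rnd + [100, 300, 600, 800, 800, 800]        # follower-style uptake
slow_ramp = rnd + [0, 0, 100, 300, 600, 800, 800, 800]  # same revenues, two years later

print(f"NPV with fast ramp: {npv(fast_ramp):,.0f}")
print(f"NPV with slow ramp: {npv(slow_ramp):,.0f}")  # lower, purely because of the delay
```

If a forecast assumes the slower ramp only for first-in-class drugs, the model stacks the deck against them - which is Pharmagellan's point.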

4. Unrealistic expectations → Questioning 'evidence-based medicine'
As we've noted before, @EvidenceLive has a manifesto addressing how to make healthcare decisions, and how to communicate evidence. The online comments are telling: Evidence-based medicine is perhaps more of a concept than a practical thing. The spot-on @trishgreenhalgh says "The world is messy. There is no view from nowhere, no perspective that is free from bias."

Evidence & Insights Calendar.

Jan 23-25, London: Advanced Pharma Analytics 2017. Spans topics from machine learning to drug discovery, real-world evidence, and commercial decision making.

Feb 1-2, San Francisco: Advanced Analytics for Clinical Data 2017. All about accelerating clinical R&D with data-driven decision making for drug development.

Tuesday, 15 November 2016

Building trust with evidence-based insights.


This week we examine how executives can more fully grasp complex evidence/analysis affecting their outcomes - and how analytics professionals can better communicate these findings to executives. Better performance and more trust are the payoffs.

1. Show how A → B. Our new guide to Promoting Evidence-Based Insights explains how to engage stakeholders with a data value story. Shape content around four essential elements: Top-line, evidence-based, bite-size, and reusable. It's a suitable approach whether you're in marketing, R&D, analytics, or advocacy.

No knowledge salad. To avoid tl;dr or MEGO (My Eyes Glaze Over), be sure to emphasize insights that matter to stakeholders. Explicitly connect specific actions with important outcomes, identify your methods, and provide a simple visual - this establishes trust and credibility. Be succinct; you can drill down into detailed evidence later. The guide is free from Ugly Research.

Guide to Insights by Ugly Research


2. Lack of analytics understanding → Lack of trust.
Great stuff from KPMG: Building trust in analytics: Breaking the cycle of mistrust in D&A. "We believe that organizations must think about trusted analytics as a strategic way to bridge the gap between decision-makers, data scientists and customers, and deliver sustainable business results. In this study, we define four ‘anchors of trust’ which underpin trusted analytics. And we offer seven key recommendations to help executives improve trust throughout the D&A value chain.... It is not a one-time communication exercise or a compliance tick-box. It is a continuous endeavor that should span the D&A lifecycle from data through to insights and ultimately to generating value."

Analytics professionals aren't feeling the C-Suite love. Information Week laments the lack of transparency around analytics: When non-data professionals don't know or understand how it is performed, it leads to a lack of trust. But that doesn't mean the data analytics efforts themselves are not worthy of trust. It means that the non-data pros don't know enough about these efforts to trust them.

KPMG Trust in data and analytics


3. Execs understand advanced analytics → See how to improve business
McKinsey has an interesting take on this. "Execs can't avoid understanding advanced analytics - can no longer just 'leave it to the experts' because they must understand the art of the possible for improving their business."

Analytics expertise is widespread in operational realms such as manufacturing and HR. Finance data science must be a priority for CFOs to secure a place at the planning table. Mary Driscoll explains that CFOs want analysts trained in finance data science. "To be blunt: When [line-of-business] decision makers are using advanced analytics to compare, say, new strategies for volume, pricing and packaging, finance looks silly talking only in terms of past accounting results."

4. Macroeconomics is a pseudoscience.
NYU professor Paul Romer's The Trouble With Macroeconomics is a widely discussed, skeptical analysis of macroeconomics. The opening to his abstract is excellent, making a strong point right out of the gate. Great writing, great questioning of tradition. "For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as 'tight monetary policy can cause a recession.'" Other critics also seek transparency: Alan Jay Levinovitz writes in @aeonmag The new astrology: By fetishising mathematical models, economists turned economics into a highly paid pseudoscience.

5. Better health evidence to a wider audience.
From the Evidence Live Manifesto: Improving the development, dissemination, and implementation of research evidence for better health.

"7. Evidence Communication.... 7.2 Better communication of research: High quality, important research that matters has to be understandable and informative to a wide audience. Yet, much of what is currently produced is not directed to a lay audience, is often poorly constructed and is underpinned by a lack of training and guidance in this area." Thanks to Daniel Barth-Jones (@dbarthjones).

Photo credit: Steve Lav - Trust on Flickr

Tuesday, 11 October 2016

When nudging fails, defensive baseball stats, and cognitive bias cheat sheet.

What works reading list


1. When nudging fails, what else can be done?
Bravo to @CassSunstein, co-author of the popular book Nudge, for a journal abstract that is understandable and clearly identifies recommended actions. This from his upcoming article Nudges that Fail:

"Why are some nudges ineffective, or at least less effective than choice architects hope and expect? Focusing primarily on default rules, this essay emphasizes two reasons. The first involves strong antecedent preferences on the part of choosers. The second involves successful “counternudges,” which persuade people to choose in a way that confounds the efforts of choice architects. Nudges might also be ineffective, and less effective than expected, for five other reasons. (1) Some nudges produce confusion on the part of the target audience. (2) Some nudges have only short-term effects. (3) Some nudges produce “reactance” (though this appears to be rare) (4) Some nudges are based on an inaccurate (though initially plausible) understanding on the part of choice architects of what kinds of choice architecture will move people in particular contexts. (5) Some nudges produce compensating behavior, resulting in no net effect. When a nudge turns out to be insufficiently effective, choice architects have three potential responses: (1) Do nothing; (2) nudge better (or different); and (3) fortify the effects of the nudge, perhaps through counter-counternudges, perhaps through incentives, mandates, or bans."

This work will appear in a promising new journal, behavioral science & policy, "an international, peer-reviewed journal that features short, accessible articles describing actionable policy applications of behavioral scientific research that serves the public interest. articles submitted to bsp undergo a dual-review process. leading scholars from specific disciplinary areas review articles to assess their scientific rigor; at the same time, experts in relevant policy areas evaluate them for relevance and feasibility of implementation.... bsp is a publication of the behavioral science & policy association and the brookings institution press."

Slice of the week @ PepperSlice.

Author: Cass Sunstein

Analytical method: Behavioral economics

Relationship: Counter-nudges → interfere with → behavioral public policy initiatives


2. There will be defensive baseball stats!
Highly recommended: Bruce Schoenfeld's writeup about Statcast, and how it will support development of meaningful statistics for baseball fielding. Cool insight into the work done by insiders like Daren Willman (@darenw). Finally, it won't just be about the slash line.


3. Cognitive bias cheat sheet.
Buster Benson (@buster) posted a cognitive bias cheat sheet that's worth a look. (Thanks @brentrt.)


4. CATO says Donald Trump is wrong.
Conservative think tank @CatoInstitute shares evidence that immigrants don’t commit more crimes. "No matter how researchers slice the data, the numbers show that immigrants commit fewer crimes than native-born Americans.... What the anti-immigration crowd needs to understand is not only are immigrants less likely to commit crimes than native-born Americans, but they also protect us from crimes in several ways."


5. The What Works reading list.
Don't miss the #WhatWorks Reading List: Good Reads That Can Help Make Evidence-Based Policy-Making The New Normal. The group @Results4America has assembled a thought-provoking list of "resources from current and former government officials, university professors, economists and other thought-leaders committed to making evidence-based policy-making the new normal in government."


Evidence & Insights Calendar

Oct 18, online: How Nonprofits Can Attract Corporate Funding: What Goes On Behind Closed Doors. Presented by the Stanford Social Innovation Review (@SSIReview).

Nov 25, Oxford: Intro to Evidence-Based Medicine presented by CEBM. Note: In 2017 CEBM will offer a course on teaching evidence-based medicine.

Dec 13, San Francisco: The all-new Systems We Love, inspired by the excellent Papers We Love meetup series. Background here.

Oct 19-22, Copenhagen: ISOQOL 23rd annual conference on quality of life research. Pro tip: The Wall Street Journal says Copenhagen is hot.

Nov 9-10, Philadelphia: Real-World Evidence & Market Access Summit 2016. "No more scandals! Access for Patients. Value for Pharma."

Tuesday, 20 September 2016

Social program science, gut-bias decision test, and enough evidence already.


"The driving force behind MDRC is a conviction that reliable evidence, well communicated, can make an important difference in social policy." -Gordon L. Berlin, President, MDRC

1. Slice of the week: Can behavioral science improve the delivery of child support programs? Yes. Understanding how people respond to communications has improved outcomes. State programs shifted from heavy packets of detailed requirements to simple emails and postcard reminders. (Really, did this require behavioral science? Not to discount the excellent work by @CABS_MDRC, but it seems pretty obvious. Still, a promising outcome.)

Applying Behavioral Science to Child Support: Building a Body of Evidence comes to us from MDRC, a New York-based institute that builds knowledge around social policy.

Data: Collected using random assignment and analyzed with descriptive statistics.

Evidence: Support payments increased with reminders. Simple notices (email or postcards) sent to people not previously receiving them increased the number of parents making at least one payment by 3%.

Relationship: behaviorally informed interventions → solve → child support problems


“A commitment to using best evidence to support decision making in any field is an ethical commitment.”
-Dónal O’Mathuna @DublinCityUni

2. How to test your decision-making instincts.
McKinsey's Andrew Campbell and Jo Whitehead have studied decision-making for execs. They suggest asking yourself these four questions to ensure you're drawing on appropriate experiences and emotions. "Leaders cannot prevent gut instinct from influencing their judgments. What they can do is identify situations where it is likely to be biased and then strengthen the decision process to reduce the resulting risk."

Familiarity test: Have we frequently experienced identical or similar situations?
Feedback test: Did we get reliable feedback in past situations?
Measured-emotions test: Are the emotions we have experienced in similar or related situations measured?
Independence test: Are we likely to be influenced by any inappropriate personal interests or attachments?

Relationship: Test of instincts → reduces → decision bias


3. When is enough evidence enough?
At what point should we agree on the evidence, stop evaluating, and move on? Determining this is particularly difficult where public health is concerned. Despite all the available findings, investigators continue to study the costs and benefits of statin drugs. A new Lancet review takes a comprehensive look and makes a strong case for this important drug class. "Large-scale evidence from randomised trials shows that statin therapy reduces the risk of major vascular events" and "claims that statins commonly cause adverse effects reflect a failure to recognise the limitations of other sources of evidence about the effects of treatment".

The insightful Richard Lehman (@RichardLehman1) provides a straightforward summary: The treatment is so successful that the "main adverse effect of statins is to induce arrogance in their proponents." And Larry Husten explains that Statin Trialists Seek To Bury Debate With Evidence.


Photo credit: paperwork by Camilo Rueda López on Flickr.