Evidence Soup
How to find, use, and explain evidence.

79 posts categorized "presenting the evidence"

Tuesday, 03 January 2017

Valuing patient perspective, moneyball for tenure, visualizing education impacts.

Patient_value
1. Formalized decision process → Conflict about criteria

It's usually a good idea to establish a methodology for making repeatable, complex decisions. But inevitably you'll have to allow wiggle room for the unquantifiable or the unexpected; leaving this gray area exposes you to criticism that it's not a rigorous methodology after all. Other sources of criticism are the weighting and the calculations applied in your decision formulas - and the extent of transparency provided.

How do you set priorities? In healthcare, how do you decide who to treat, at what cost? To formalize the process of choosing among options, several groups have created so-called value frameworks for assessing medical treatments - though not without criticism. Recently Ugly Research co-authored a post summarizing industry reaction to the ICER value framework developed by the Institute for Clinical and Economic Review. Incorporation of patient preferences (or lack thereof) is a hot topic of discussion.

To address this proactively, Faster Cures has led the creation of the Patient Perspective Value Framework to inform other frameworks about what's important to patients (cost? impact on daily life? outcomes?). They're asking for comments on their draft report; comment using this questionnaire.

2. Analytics → Better tenure decisions
New analysis in the MIT Sloan Management Review observes "Using analytics to improve hiring decisions has transformed industries from baseball to investment banking. So why are tenure decisions for professors still made the old-fashioned way?"

Ironically, academia often proves to be one of the last fields to adopt change. Erik Brynjolfsson and John Silberholz explain that "Tenure decisions for the scholars of computer science, economics, and statistics — the very pioneers of quantitative metrics and predictive analytics — are often insulated from these tools." The authors say "data-driven models can significantly improve decisions for academic and financial committees. In fact, the scholars recommended for tenure by our model had better future research records, on average, than those who were actually granted tenure by the tenure committees at top institutions."

Education_evidence

3. Visuals of research findings → Useful evidence
The UK Sutton Trust-EEF Teaching and Learning Toolkit is an accessible summary of educational research. The purpose is to help teachers and schools more easily decide how to apply resources to improve outcomes for disadvantaged students. Research findings on selected topics are nicely visualized in terms of implementation cost, strength of supporting evidence, and the average impact on student attainment.

4. Absence of patterns → File-drawer problem
We're only human. We want to see patterns, and are often guilty of 'seeing' patterns that really aren't there. So it's no surprise we're uninterested in research that lacks significance, and disregard findings revealing no discernible pattern. When we stash away projects like this, it's called the file-drawer problem: those unpublished null results could be valuable to others who might otherwise pursue a similar line of investigation. But Data Colada says the file-drawer problem is unfixable, and that’s OK.

5. Optimal stopping algorithm → Practical advice?
In Algorithms to Live By, Stewart Brand describes an innovative way to help us make complex decisions. "Deciding when to stop your quest for the ideal apartment, or ideal spouse, depends entirely on how long you expect to be looking.... [Y]ou keep looking and keep finding new bests, though ever less frequently, and you start to wonder if maybe you refused the very best you’ll ever find. And the search is wearing you down. When should you take the leap and look no further?"

Optimal Stopping is a mathematical concept for optimizing a choice, such as making the right hire or landing the right job. Brand says "The answer from computer science is precise: 37% of the way through your search period." The question is, how can people translate this concept into practical steps guiding real decisions? And how can we apply it while we live with the consequences?
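For readers who want to see the rule in action, here's a minimal Python sketch (our own illustration, not from the book) of the secretary-problem strategy Brand describes: look at the first 37% of options without committing, then take the first one that beats everything seen so far.

    import random

    def secretary_search(candidates, explore_frac=0.37):
        """Skip the first explore_frac of candidates, then take the first
        one that beats everything seen during the exploration phase."""
        cutoff = int(len(candidates) * explore_frac)
        best_seen = max(candidates[:cutoff]) if cutoff else float("-inf")
        for value in candidates[cutoff:]:
            if value > best_seen:
                return value              # commit here
        return candidates[-1]             # ran out of options; take the last one

    random.seed(42)
    trials, hits = 10_000, 0
    for _ in range(trials):
        pool = random.sample(range(10_000), k=100)   # 100 distinct candidates
        if secretary_search(pool) == max(pool):
            hits += 1
    print(f"Found the single best candidate in {hits / trials:.1%} of searches")  # ~37%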

Tuesday, 20 December 2016

Choices, policy, and evidence-based investment.

Badarguments

1. Bad Arguments → Bad Choices
Great news. There will be a follow-on to the excellent Bad Arguments book by @alialmossawi. The book of Bad Choices will be released this April by major publishers. You can preorder now.

2. Evidence-based decisions → Effective policy outcomes
The conservative think tank Heritage Foundation is advocating for evidence-based decisions in the Trump administration. Their recommendations include resurrection of PART (the Program Assessment Rating Tool) from the George W. Bush era, which ranked federal programs according to effectiveness. "Blueprint for a New Administration offers specific steps that the new President and the top officers of all 15 cabinet-level departments and six key executive agencies can take to implement the long-term policy visions reflected in Blueprint for Reform." Read a nice summary here by Patrick Lester at the Social Innovation Research Center (@SIRC_tweets).

Pharmagellan

3. Pioneer drugs → Investment value
"Why do pharma firms sometimes prioritize 'me-too' R&D projects over high-risk, high-reward 'pioneer' programs?" asks Frank David at Pharmagellan (@Frank_S_David). "[M]any pharma financial models assume first-in-class drugs will gain commercial traction more slowly than 'followers.' The problem is that when a drug’s projected revenues are delayed in a financial forecast, this lowers its net present value – which can torpedo the already tenuous investment case for a risky, innovative R&D program." Their research suggests that pioneer drugs see peak sales around 6 years, similar to followers: "Our finding that pioneer drugs are adopted no more slowly than me-too ones could help level the economic playing field and make riskier, but often higher-impact, R&D programs more attractive to executives and investors."

Details appear in the Nature Reviews article, Drug launch curves in the modern era. Pharmagellan will soon release a book on biotech financial modeling.

4. Unrealistic expectations → Questioning 'evidence-based medicine'
As we've noted before, @EvidenceLive has a manifesto addressing how to make healthcare decisions, and how to communicate evidence. The online comments are telling: Evidence-based medicine is perhaps more of a concept than a practical thing. The spot-on @trishgreenhalgh says "The world is messy. There is no view from nowhere, no perspective that is free from bias."

Evidence & Insights Calendar.

Jan 23-25, London: Advanced Pharma Analytics 2017. Spans topics from machine learning to drug discovery, real-world evidence, and commercial decision making.

Feb 1-2, San Francisco: Advanced Analytics for Clinical Data 2017. All about accelerating clinical R&D with data-driven decision making for drug development.

Tuesday, 15 November 2016

Building trust with evidence-based insights.

Trust

This week we examine how executives can more fully grasp complex evidence/analysis affecting their outcomes - and how analytics professionals can better communicate these findings to executives. Better performance and more trust are the payoffs.

1. Show how A → B. Our new guide to Promoting Evidence-Based Insights explains how to engage stakeholders with a data value story. Shape content around four essential elements: Top-line, evidence-based, bite-size, and reusable. It's a suitable approach whether you're in marketing, R&D, analytics, or advocacy.

No knowledge salad. To avoid tl;dr or MEGO (My Eyes Glaze Over), be sure to emphasize insights that matter to stakeholders. Explicitly connect specific actions with important outcomes, identify your methods, and provide a simple visual - this establishes trust and credibility. Be succinct; you can drill down into detailed evidence later. The guide is free from Ugly Research.

Guide to Insights by Ugly Research


2. Lack of analytics understanding → Lack of trust.
Great stuff from KPMG: Building trust in analytics: Breaking the cycle of mistrust in D&A. "We believe that organizations must think about trusted analytics as a strategic way to bridge the gap between decision-makers, data scientists and customers, and deliver sustainable business results. In this study, we define four ‘anchors of trust’ which underpin trusted analytics. And we offer seven key recommendations to help executives improve trust throughout the D&A value chain.... It is not a one-time communication exercise or a compliance tick-box. It is a continuous endeavor that should span the D&A lifecycle from data through to insights and ultimately to generating value."

Analytics professionals aren't feeling the C-Suite love. Information Week laments the lack of transparency around analytics: When non-data professionals don't know or understand how the analysis is performed, it leads to a lack of trust. But that doesn't mean the data analytics efforts themselves are not worthy of trust. It means the non-data pros don't know enough about these efforts to trust them.

KPMG Trust in data and analytics


3. Execs understand advanced analytics → See how to improve business
McKinsey has an interesting take on this. "Execs can't avoid understanding advanced analytics - can no longer just 'leave it to the experts' because they must understand the art of the possible for improving their business."

Analytics expertise is widespread in operational realms such as manufacturing and HR. Finance data science must be a priority for CFOs to secure a place at the planning table. Mary Driscoll explains that CFOs want analysts trained in finance data science. "To be blunt: When [line-of-business] decision makers are using advanced analytics to compare, say, new strategies for volume, pricing and packaging, finance looks silly talking only in terms of past accounting results."

4. Macroeconomics is a pseudoscience.
NYU professor Paul Romer's The Trouble With Macroeconomics is a widely discussed, skeptical analysis of macroeconomics. The opening to his abstract is excellent, making a strong point right out of the gate. Great writing, great questioning of tradition. "For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as 'tight monetary policy can cause a recession.'" Other critics also seek transparency: Alan Jay Levinovitz writes in @aeonmag The new astrology: By fetishising mathematical models, economists turned economics into a highly paid pseudoscience.

5. Better health evidence to a wider audience.
From the Evidence Live Manifesto: Improving the development, dissemination, and implementation of research evidence for better health.

"7. Evidence Communication.... 7.2 Better communication of research: High quality, important research that matters has to be understandable and informative to a wide audience. Yet , much of what is currently produced is not directed to a lay audience, is often poorly constructed and is underpinned by a lack of training and guidance in this area." Thanks to Daniel Barth-Jones (@dbarthjones).

Photo credit: Steve Lav - Trust on Flickr

Tuesday, 27 September 2016

Improving vs. proving, plus bad evidence reporting.

Turtle slow down and learn something

If you view gathering evidence as simply a means of demonstrating outcomes, you’re missing a trick. It’s most valuable when part of a journey of iterative improvement. - Frances Flaxington

1. Immigrants to US don't disrupt employment.
There is little evidence that immigration significantly affects overall employment of native-born US workers. This according to an expert panel's 500-page report. We thought you might like this condensed version from PepperSlice.

Bad presentation alert: The report, The Economic and Fiscal Consequences of Immigration, offers no summary visuals and buries its conclusions deep within dense chapters. Perhaps the methodology is the problem: the report documents the "evidence-based consensus of an authoring committee of experts", but people need concise synthesis and actionable findings. What can policy makers do with this information?

Bad reporting alert: Perhaps unsatisfied with these findings, Julia Preston of the New York Times slipped her own claim into the coverage, saying the report "did not focus on American technology workers [true], many of whom have been displaced from their jobs in recent years by immigrants on temporary visas [unfounded claim]". Rather sloppy reporting, particularly when covering an extensive economic study of immigration impacts.


Immigration

Key evidence: "Empirical research in recent decades suggests that findings remain by and large consistent with those in The New Americans (National Research Council, 1997) in that, when measured over a period of 10 years or more, the impact of immigration on the wages of natives overall is very small." [page 204]

Immigration also contributes to the nation’s economic growth.... Perhaps even more important than the contribution to labor supply is the infusion by high-skilled immigration of human capital that has boosted the nation’s capacity for innovation and technological change. The contribution of immigrants to human and physical capital formation, entrepreneurship, and innovation are essential to long-run sustained economic growth. [page 243]

Author: @theNASEM, the National Academies of Sciences, Engineering, and Medicine.

Relationship: immigration → sustains → economic growth


2. Improving vs. proving.
On @A4UEvidence: "We often assume that generating evidence is a linear progression towards proving whether a service works. In reality the process is often two steps forward, one step back." Ugly Research supports the 'what works' concept, but wholeheartedly agrees that "The fact is that evidence rarely provides a clear-cut truth – that a service works or is cost-beneficial. Rather, evidence can support or challenge the beliefs that we, and others, have and it can point to ways in which a service might be improved."


3. Who should make sure policy is evidence-based and transparent?
Bad PR alert? Is it government's responsibility to make policy transparent and balanced? If so, some are accusing the FDA of not holding up their end on drug and medical device policy. A recent 'close-held embargo' of an FDA announcement made NPR squirm. Scientific American says the deal was this: "NPR, along with a select group of media outlets, would get a briefing about an upcoming announcement by the U.S. Food and Drug Administration a day before anyone else. But in exchange for the scoop, NPR would have to abandon its reportorial independence. The FDA would dictate whom NPR's reporter could and couldn't interview.

"'My editors are uncomfortable with the condition that we cannot seek reaction,' NPR reporter Rob Stein wrote back to the government officials offering the deal. Stein asked for a little bit of leeway to do some independent reporting but was turned down flat. Take the deal or leave it."


Evidence & Insights Calendar

November 9-10, Philadelphia: Real-World Evidence & Market Access Summit 2016. "No more scandals! Access for Patients. Value for Pharma."

29 Oct-2 Nov, Vienna, Austria: ISPOR 19th Annual European Congress. Plenary: "What Synergies Could Be Created Between Regulatory and Health Technology Assessments?"

October 3-6, National Harbor, Maryland: AMCP Nexus 2016. Special topic: "Behavioral Economics - What Does it All Mean?"


Photo credit: Turtle on Flickr.

Tuesday, 13 September 2016

Battling antimicrobial resistance, visualizing data, and value in health.

Dentist-antibiotic-board

PepperSlice Board of the Week: Dentists will slow down on antibiotics if you show them a chart of their prescribing numbers. 

Antimicrobial resistance is a serious public health concern. PLOS Medicine has published findings from an RCT studying whether giving dentists quantitative feedback on their prescribing patterns reduces antibiotic prescriptions. The intervention group prescribed substantially fewer antibiotics per 100 cases.

The Evidence. Peer-reviewed: An Audit and Feedback Intervention for Reducing Antibiotic Prescribing in General Dental Practice.

Data: Collected using RAPiD Cluster Randomised Controlled Trial, and analyzed with ANCOVA.

Relationship: historical data → influence → dentist antibiotic prescribing rates

This study evaluated the impact of providing general-practice dentists with individualised feedback consisting of a line graph of their monthly antibiotic prescribing rate. Rates in the intervention group were substantially lower than in the control group.

From the authors: "The feedback provided in this study is a relatively straightforward, low-cost public health and patient safety intervention that could potentially help the entire healthcare profession address the increasing challenge of antimicrobial resistance." Authors: Paula Elouafkaoui et al.
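For readers curious what an analysis of this shape looks like, here is a rough sketch using synthetic data (not the RAPiD trial data, and ignoring the trial's cluster design): an ANCOVA-style model of post-intervention prescribing rates by trial arm, adjusted for each dentist's baseline rate.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200  # hypothetical dentists per arm

    baseline = rng.normal(10, 2, size=2 * n)            # antibiotics per 100 cases, pre-trial
    group = np.repeat(["control", "feedback"], n)
    effect = np.where(group == "feedback", -1.0, 0.0)   # assume feedback cuts ~1 Rx per 100 cases
    post = 0.8 * baseline + effect + rng.normal(0, 1.5, size=2 * n)

    df = pd.DataFrame({"baseline": baseline, "group": group, "post": post})

    # ANCOVA: post-intervention rate modeled on trial arm, adjusting for baseline rate
    # (the real trial also accounts for cluster randomisation, omitted here)
    model = smf.ols("post ~ C(group) + baseline", data=df).fit()
    print(model.summary().tables[1])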

#: evidentista, antibiotics, evidence-based practice


Distribution-plots1

2. Visualizing data distributions.
Nathan Yau's fantastic blog, Flowing Data, offers a simple explanation of distributions - the spread of a dataset - and how to compare them. Highly recommended. "Single data points from a large dataset can make it more relatable, but those individual numbers don’t mean much without something to compare to. That’s where distributions come in."
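As a quick illustration of the point, a few lines of matplotlib (our own toy example, not from Flowing Data) compare two distributions that a single summary number would hide:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    group_a = rng.normal(50, 8, 1_000)    # e.g., test scores for one group
    group_b = rng.normal(58, 15, 1_000)   # higher mean, much wider spread

    fig, ax = plt.subplots()
    ax.hist(group_a, bins=40, alpha=0.5, density=True, label="Group A")
    ax.hist(group_b, bins=40, alpha=0.5, density=True, label="Group B")
    ax.set_xlabel("Value")
    ax.set_ylabel("Density")
    ax.set_title("Two groups with similar ranges, very different spreads")
    ax.legend()
    plt.show()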


3. Calculating 'expected value' of health interventions.
Frank David provides a useful reminder of the realities of computing 'expected value'. Sooner or later, we must make simplifying assumptions, and compare costs and benefits on similar terms (usually $). On Forbes he walks us through a straightforward calculation of the value of an EpiPen. (Frank's firm, Pharmagellan, is coming out with a book on biotech financial modeling, and we look forward to that.)
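The arithmetic itself is simple; the hard part is the simplifying assumptions. A toy sketch with made-up numbers (not Frank David's figures):

    # Expected value = sum over outcomes of (probability x dollar-valued consequence).
    # All numbers below are assumptions purely for illustration.
    p_needed = 0.05                # assumed chance the device is used this year
    value_if_needed = 100_000      # assumed $ value of averting a severe reaction
    cost = 600                     # assumed out-of-pocket cost of the device

    expected_benefit = p_needed * value_if_needed
    net_expected_value = expected_benefit - cost
    print(f"Expected benefit:            ${expected_benefit:,.0f}")
    print(f"Expected value, net of cost: ${net_expected_value:,.0f}")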


G20-bayes-johnoliver

4. What is Bayesian, really?
In the Three Faces of Bayes, @burrsettles beautifully describes three uses of the term Bayesian, and wonders "Why is it that Bayesian networks, for example, aren’t considered… y’know… Bayesian?" Recommended for readers wanting to know more about these algorithms for machine learning and decision analysis.
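For readers new to the vocabulary, the most basic sense of the term is Bayes' rule itself: updating a prior belief with evidence. A minimal sketch with hypothetical numbers:

    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), with assumed values
    prior = 0.01           # assumed base rate of a condition
    sensitivity = 0.95     # P(positive test | condition)
    false_positive = 0.10  # P(positive test | no condition)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive
    print(f"P(condition | positive test) = {posterior:.1%}")   # roughly 9%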


Fun Fact: Everyone can stop carrying around fake babies. Evidence tells us baby simulators don't deter teen pregnancy after all.


Evidence & Insights Calendar:

September 19-21; Boston. FierceBiotech Drug Development Forum. Evaluate challenges, trends, and innovation in drug discovery and R&D. Covering the entire drug development process, from basic research through clinical trials.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.

Tuesday, 30 August 2016

Social determinants of health, nonfinancial performance metrics, and satisficers.

Dear reader: Evidence Soup is starting a new chapter. Our spotlight topics are now accompanied by a 'newsletter' version of a PepperSlice, the capsule form of evidence-based analysis we've created at PepperSlice.com. Let me know what you think, and thanks for your continued readership. - Tracy Altman

1. Is social services spending associated with better health outcomes? Yes.
Evidence has revealed a significant association between healthcare outcomes and the ratio of social service to healthcare spending in various OECD countries. Now a new study, published in Health Affairs, finds this same pattern within the US. The health differences were substantial. For instance, a 20% change in the median social-to-health spending ratio was associated with 85,000 fewer adults with obesity and more than 950,000 fewer adults with mental illness. Elizabeth Bradley and Lauren Taylor explain on the RWJF Culture of Health blog.

This is great, but we wonder: Where/what is the cause-effect relationship?

The Evidence. Peer-reviewed: Variation In Health Outcomes: The Role Of Spending On Social Services, Public Health, And Health Care, 2000-09.

Data: Collected using longitudinal state-level spending data and analyzed with repeated measures multivariable modeling, retrospective.

Relationship: Social : medical spending → associated → better health outcomes

From the authors: "Reorienting attention and resources from the health care sector to upstream social factors is critical, but it’s also an uphill battle. More research is needed to characterize how the health effects of social determinants like education and poverty act over longer time horizons. Stakeholders need to use information about data on health—not just health care—to make resource allocation decisions."

#: statistical_modeling social_determinants population_health social_services health_policy

2. Are nonfinancial metrics good leading indicators of financial performance? Maybe.
During the '90s and early 2000s we heard a lot about Kaplan and Norton's Balanced Scorecard. A key concept was the use of nonfinancial management metrics such as customer satisfaction, employee engagement, and openness to innovation. This was thought to encourage actions that increased a company’s long-term value, rather than maximizing short-term financials.

The idea has taken hold, and nonfinancial metrics are often used in designing performance management systems and executive compensation plans. But not everyone is a fan: Some argue this can actually be harmful; for instance, execs might prioritize customer sat over other performance areas. Recent findings in the MIT Sloan Management Review look at whether these metrics truly are leading indicators of financial performance.

The Evidence. Business journal: Are Nonfinancial Metrics Good Leading Indicators of Future Financial Performance?

Data: Collected from American Customer Satisfaction Index, ExecuComp, and Compustat and analyzed with econometrics: panel data analysis.

Relationship: Nonfinancial metrics → predict → Financial performance

From the authors: "We found that there were notable variations in the lead indicator strength of customer satisfaction in a sample of companies drawn from different industries. For instance, for a chemical company in our sample, customer satisfaction’s lead indicator strength was negative; this finding is consistent with prior research suggesting that in many industries, the expense required to increase customer satisfaction can’t be justified. By contrast, for a telecommunications company we studied, customer satisfaction was a strong leading indicator; this finding is consistent with evidence showing that in many service industries, customer satisfaction reduces customer churn and price sensitivity. For a professional service firm in our sample, the lead indicator strength of customer satisfaction was weak; this is consistent with evidence showing that for such services, measures such as trust provide a clearer indication of the economic benefits than customer satisfaction.... Knowledge of whether a nonfinancial metric such as customer satisfaction is a strongly positive, weakly positive, or negative lead indicator of future financial performance can help companies avoid the pitfalls of using a nonfinancial metric to incentivize the wrong behavior."
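To make the 'lead indicator' idea concrete, here's a rough sketch (synthetic data, not the authors' ACSI/Compustat panel) of the general analysis shape: regress next-period performance on lagged customer satisfaction with firm fixed effects.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    firms, years = 40, 10
    rows = []
    for firm in range(firms):
        level = rng.normal(0, 1)                 # firm-specific baseline performance
        sat = rng.normal(75, 5, years)           # hypothetical satisfaction scores
        for t in range(1, years):
            # assume next-period performance partly reflects last year's satisfaction
            perf = level + 0.3 * sat[t - 1] + rng.normal(0, 2)
            rows.append({"firm": firm, "sat_lag1": sat[t - 1], "performance": perf})
    df = pd.DataFrame(rows)

    # Firm fixed effects via dummies; lagged satisfaction as the lead indicator
    model = smf.ols("performance ~ sat_lag1 + C(firm)", data=df).fit()
    print(f"Lead-indicator coefficient: {model.params['sat_lag1']:.3f} "
          f"(p = {model.pvalues['sat_lag1']:.3g})")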

#: customer_satisfaction nonfinancial balanced_scorecard CEO_compensation performance_management

3. Reliable evidence about p values.
Daniël Lakens (@lakens) puts it very well, saying "One of the most robust findings in psychology replicates again: Psychologists misinterpret p-values." This from Frontiers in Psychology.

4. Satisficers are happier.
Fast Company's article sounds at first just like clickbait, but they have a point. You can change how you see things, and reset your expectations. The Surprising Scientific Link Between Happiness And Decision Making.

Evidence & Insights Calendar:

September 19-21; Boston. FierceBiotech Drug Development Forum. Evaluate challenges, trends, and innovation in drug discovery and R&D. Covering the entire drug development process, from basic research through clinical trials.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.


Photo credit: Fat cat by brokinhrt2 on Flickr.

Tuesday, 23 August 2016

Science of CEO success?, drug valuation kerfuffle, and event attribution science.

  Fingerpointing


1. Management research: Alchemy → Chemistry?
McKinsey's Michael Birshan and Thomas Meakin set out to "take a data-driven look" at the strategic moves of newly appointed CEOs, and how those moves influenced company returns. The accompanying podcast (with transcript), CEO transitions: The science of success, says "A lot of the existing literature is quite qualitative, anecdotal, and we’ve been able to build a database of 599 CEO transitions and add a bunch of other sources to it and really try and mine that database hard for what we hope are new insights. We are really trying to move the conversation from alchemy to chemistry, if you like."

The research was first reported in How new CEOs can boost their odds of success. McKinsey's evidence says new CEOs make similar moves, with similar frequency, whether they're taking over a struggling company or a profitable one (see chart). For companies not performing well, Birshan says the data support his advice to be bold and make multiple moves at once. Depending on how you slice the numbers, both external and internal hires fared well in the CEO role (8).

  CEO-science-success

Is this science? CEO performance was associated with the metric excess total returns to shareholders, "which is the performance of one company over or beneath the average performance of its industry peers over the same time period". Bottom line, can you attribute CEO activities directly to excess TRS? Organizational redesign was correlated with significant excess TRS (+1.9 percent) for well-performing companies. The authors say "We recognize that excess TRS CAGR does not prove a causal link; too many other variables, some beyond a CEO’s control, have an influence. But we do find the differences that emerged quite plausible." Hmm, correlation still does not equal causation.
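For the curious, the metric itself is straightforward to compute; a toy sketch with hypothetical numbers (not McKinsey's data):

    def cagr(start, end, years):
        """Compound annual growth rate."""
        return (end / start) ** (1 / years) - 1

    # Hypothetical total-returns-to-shareholders indices over a five-year tenure
    company_trs_cagr = cagr(100, 160, 5)    # company's TRS index grew 100 -> 160
    peer_trs_cagr = cagr(100, 140, 5)       # industry peer average grew 100 -> 140

    excess_trs_cagr = company_trs_cagr - peer_trs_cagr
    print(f"Excess TRS CAGR: {excess_trs_cagr:+.1%} per year")   # about +2.9%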

Examine the evidence. The report's end notes answer some key questions: Can you observe or measure whether a CEO inspires the top team? Probably not (1). Where do you draw the line between a total re-org and a management change? They define 'management reshuffle' as 50+% turnover in first two years (5). But we have other questions: How were these data collected and analyzed? Some form of content analysis would likely be required to assign values to variables. How were the 599 CEOs chosen as the sample? Selection bias is a concern. Were some items self-reported? Interviews or survey results? Were findings validated by assigning a second team to check for internal reliability? External reliability?


2. ICER + pharma → Fingerpointing.
There's a kerfuffle between pharma companies and the nonprofit ICER (@ICER_review). The Institute for Clinical and Economic Review publishes reports on drug valuation, and studies comparative efficacy. Biopharma Dive explains that "Drugmakers have argued ICER's reviews are driven by the interests of insurers, and fail to take the patient perspective into account." The National Pharmaceutical Council (@npcnow) takes issue with how ICER characterizes its funding sources.

ICER has been doing some damage control, responding to a list of 'myths' about its purpose and methods. Its rebuttal, Addressing the Myths About ICER and Value Assessment, examines criticisms such as "ICER only cares about the short-term cost to insurers, and uses an arbitrary budget cap to suggest low-ball prices." Also, ICER's economic models "use the Quality-Adjusted Life Year (QALY) which discriminates against those with serious conditions and the disabled, 'devaluing' their lives in a way that diminishes the importance of treatments to help them."


Cupid-lesser-known-arrow

3. Immortal time bias → Overstated findings.
You can't get a heart transplant after you're dead. The must-read Hilda Bastian writes on Statistically Funny about immortal time bias, a/k/a event-free time or competing risk bias. This happens when an analysis mishandles events whose occurrence precludes the outcome of interest - such as heart transplant outcomes. Numerous published studies, particularly those including Kaplan-Meier analyses, may suffer from this bias.
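A small simulation makes the trap easy to see (our own toy example, not Bastian's): give a 'treatment' zero effect, require patients to survive long enough to receive it, then compare groups naively.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 50_000

    # Survival times with NO treatment effect at all (exponential, mean 5 years)
    survival = rng.exponential(scale=5.0, size=n)

    # Each patient is scheduled for a transplant at some random future time;
    # only patients who live long enough actually receive one
    scheduled = rng.exponential(scale=3.0, size=n)
    got_transplant = survival > scheduled

    # Naive 'ever vs. never transplanted' comparison ignores the time patients
    # had to survive (immortal time) just to reach their transplant
    print("Mean survival, transplanted:    ", round(survival[got_transplant].mean(), 2))
    print("Mean survival, not transplanted:", round(survival[~got_transplant].mean(), 2))
    # The transplanted group looks far better even though the transplant,
    # by construction, does nothing -- that's immortal time bias.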


4. Climate change → Weird weather?
This week the US is battling huge fires and disastrous floods: Climate change, right? Maybe. There's now a thing called event attribution science, where people apply probabilistic methods in an effort to determine whether an extreme weather event resulted from climate change. The idea is to establish, and eventually predict, adverse impacts.
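One common quantity in this literature is the fraction of attributable risk, which compares the probability of an extreme event with and without human influence. A toy calculation with made-up probabilities:

    # Fraction of attributable risk (FAR): FAR = 1 - P0 / P1, where P0 is the
    # probability of the extreme event under natural forcings only and P1 the
    # probability in the climate we actually have. Values below are assumptions.
    p0 = 0.01   # assumed probability without human influence
    p1 = 0.03   # assumed probability with anthropogenic climate change
    far = 1 - p0 / p1
    print(f"Fraction of attributable risk: {far:.0%}")   # ~67% of the risk attributable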


Evidence & Insights Calendar:

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.


Photo credit: Fingerpointing by Tom Hilton.

Thursday, 21 July 2016

Academic clickbait, FCC doesn't use economics, and tobacco surcharges don't work.

Brady

1. Academics use crazy tricks for clickbait.
Turn to @TheWinnower for an insightful analysis of academic article titles, and how their authors sometimes mimic techniques used for clickbait. Positively framed titles (those stating a specific finding) fare better than vague ones: For example, 'smoking causes lung cancer' vs. 'the relationship between smoking and lung cancer'. Nice use of altmetrics to perform the analysis.

2. FCC doesn't use cost-benefit analysis.
Critics claim Federal Communications Commission policymaking has swerved away from econometric evidence and economic theory. Federal agencies including the EPA must submit cost-benefit analyses to support new regulations, but the FCC is exempted, "free to embrace populism as its guiding principle". @CALinnovates has published a new paper, The Curious Absence of Economic Analysis at the Federal Communications Commission: An Agency In Search of a Mission. Former FCC Chief Economist Gerald Faulhaber, PhD and Hal Singer, PhD review the agency’s "proud history at the cutting edge of industrial economics and its recent divergence from policymaking grounded in facts and analysis".

3. No bias in US police shootings?
There's plenty of evidence showing bias in US police use of force, but not in shootings, says one researcher. But Data Colada, among others, explains that "an interesting empirical challenge for interpreting the shares of Whites vs Blacks shot by police while being arrested is that biased officers, those overestimating the threat posed by a Black civilian, will arrest less dangerous Blacks on average. They will arrest those posing a real threat, but also some not posing a real threat, resulting in lower average threat among those arrested by biased officers."
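A quick simulation (our own sketch, not Data Colada's analysis) shows the selection effect: if biased officers arrest at a lower true-threat threshold, their arrestees are less dangerous on average, so comparing outcomes conditional on arrest can understate the underlying bias.

    import numpy as np

    rng = np.random.default_rng(11)
    true_threat = rng.normal(0, 1, 200_000)     # latent threat of encountered civilians

    arrest_unbiased = true_threat > 1.0          # arrests only when threat is truly high
    arrest_biased = (true_threat + 0.5) > 1.0    # overestimates threat by 0.5, so arrests
                                                 # at a lower true-threat bar

    print("Avg true threat among arrestees, unbiased officers:",
          round(true_threat[arrest_unbiased].mean(), 2))
    print("Avg true threat among arrestees, biased officers:  ",
          round(true_threat[arrest_biased].mean(), 2))
    # Biased officers' arrestees are less dangerous on average, so equal shooting
    # rates conditional on arrest do not rule out bias in who gets arrested.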

4. Tobacco surcharges don't work.
The Affordable Care Act allows insurers to impose tobacco surcharges on smokers. But findings suggest the surcharges have not led more people to stop smoking.

5. CEOs lose faith in forecasts.
Some CEOs say big-data predictions are failing. “The so-called experts and global economists are proven as often to be wrong as right these days,” claims a WSJ piece, In Uncertain Times, CEOs Lose Faith in Forecasts. One consultant advises executives to "rely less on forecasts and instead road-test ideas with customers and make fast adjustments when needed," and urges them to supplement big-data predictions with close observation of their customers.

6. Is fMRI evidence flawed?
Motherboard's Why Two Decades of Brain Research Could Be Seriously Flawed recaps research by Anders Eklund. Cost is one reason, he argues: fMRI scans are notoriously expensive. "That makes it hard for researchers to perform large-scale studies with lots of patients". Eklund has written elsewhere about this (Can parametric statistical methods be trusted for fMRI based group studies?), and the issue is being noticed by Neuroskeptic and Science-Based Medicine ("It’s tempting to think that the new idea or technology is going to revolutionize science or medicine, but history has taught us to be cautious. For instance, antioxidants, it turns out, are not going to cure a long list of diseases").

Evidence & Insights Calendar:

August 24-25; San Francisco. Sports Analytics Innovation Summit.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 21 June 2016

Free beer! and the "Science of X".

Chanteuse_flickr_Christian_Hornick

1. Free beer for a year for anyone who can work perfume, velvety voice, and 'Q1 revenue goals were met' into an appropriate C-Suite presentation.
Prezi is a very nice tool enabling you to structure a visual story, without forcing a linear, slide-by-slide presentation format. The best part is you can center an entire talk around one graphic or model, and then dive into details depending on audience response. (Learn more in our writeup on How to Present Like a Boss.)

Now there's a new marketing campaign, the Science of Presentations. Prezi made a darn nice web page. And the ebook offers several useful insights into how to craft and deliver a memorable presentation (e.g., enough with the bullet points already).

But in their pursuit of click-throughs, they've gone too far. It's tempting to claim you're following the "Science of X". To some extent, Prezi provides citations to support its recommendations: The ebook links to a few studies on audience response and so forth. But that's not a "science" - they don't always connect what they're citing with what they're suggesting to business professionals. Example: "Numerous studies have found that metaphors and descriptive words or phrases — things like 'perfume' and 'she had a velvety voice' - trigger the sensory cortex.... On the other hand, when presented with nondescriptive information — for example, 'The marketing team reached all of its revenue goals in Q1' — the only parts of our brain that are activated are the ones responsible for understanding language. Instead of experiencing the content with which we are being presented, we are simply processing it."

Perhaps in this case "simply processing" the good news is enough experience for a busy executive. But our free beer offer still stands.

2. How should medical guidelines be communicated to patients?

And now for the 'Science of Explaining Guidelines'. It's hard enough to get healthcare professionals to agree on a medical guideline - and then follow it. But it's also hard to decide whether/how those recommendations should be communicated to patients. Many of the specifics are intended for providers' consumption, to improve their practice of medicine. Although it's essential that patients understand relevant evidence, translating a set of recommendations into lay terms is quite problematic.

Groups publish medical guidelines to capture evidence-based recommendations for addressing a particular disease. Sometimes these are widely accepted - and other times not. The poster-child example of breast cancer screening illustrates why patients, and not just providers, must be able to understand guidelines. Implementation Science recently published the first systematic review of methods for disseminating guidelines to patients.

Not surprisingly, the study found weak evidence of methods that are consistently feasible. "Key factors of success were a dissemination plan, written at the start of the recommendation development process, involvement of patients in this development process, and the use of a combination of traditional and innovative dissemination tools." (Schipper et al.)

3. Telling a story with data.
In the Stanford Social Innovation Review (SSIR), @JakePorway explains three things great data storytellers do differently [possible paywall]. Jake is with @DataKind, "harnessing the power of data science in service of humanity".

 

Photo credit: Christian Hornick on Flickr.

Wednesday, 11 May 2016

How Integrative Propositional Analysis shapes evidence into a graph.

  IPA beers

I admire any effort to create a simple presentation of complex evidence. Having developed some models of my own, I know I’m on the right track when someone’s initial response is “That’s too simplistic; it’s much more complicated.” I believe you’ll really struggle if you don’t begin with a top-down perspective.

Now we can choose from useful frameworks for synthesizing and rating the quality and relevance of evidence: GRADE for medical evidence, and the U.S. Dept. of Education's evidence guidelines are just two examples. Integrative Propositional Analysis (IPA) is a method of integrating and analyzing the propositions (theories) stated in a study, strategic plan, or other document.

Bernadette Wright and Steven Wallis write in Sage Open that IPA structures relationships and quantitatively measures the inter-connectedness among concepts found within theories. I think this is a promising idea. IPA is briefly introduced in Three Ways of Getting to Policy-Based Evidence: Why researchers and practitioners are shifting away from expensive new studies toward the effective synthesis of existing research (Stanford Social Innovation Review). The ‘three ways’ are Randomista (requiring randomized experiments to generate evidence), Explainista (requiring strong data with synthesized explanation), and Mapista (preferring a holistic knowledge map of a policy, program, or issue).

Graphista? IPA falls under Mapista, but we might instead say Graphista, since it constructs a graph and applies straightforward analytics. Six steps are involved (a toy calculation of the key steps follows the list):

  1. Find the logical statements/propositions in a theory (found in a publication).
  2. Diagram the propositions (a box for each concept/term, an arrow for each causal link).
  3. Combine those smaller diagrams where they overlap to create a larger diagram.
  4. Count the number of concepts with two or more causes (“concatenated” concepts).
  5. Count the total number of concepts in the theory (“Complexity”).
  6. Divide concatenated concepts by total concepts to assess “Systemicity.”
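
Here's what steps 4-6 look like on a made-up causal map (hypothetical propositions, not from Wright and Wallis):

    # Toy causal map: each concept lists the concepts that cause it
    causes = {
        "student attainment": ["feedback quality", "study time"],
        "study time": ["motivation"],
        "feedback quality": ["teacher training"],
        "motivation": ["feedback quality", "peer effects"],
        "teacher training": [],
        "peer effects": [],
    }

    complexity = len(causes)                                                   # step 5
    concatenated = sum(1 for parents in causes.values() if len(parents) >= 2)  # step 4
    systemicity = concatenated / complexity                                    # step 6

    print(f"Complexity: {complexity}, Concatenated: {concatenated}, "
          f"Systemicity: {systemicity:.2f}")   # 2 of 6 concepts -> 0.33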

Quantifying complexity and systemicity. Wright and Wallis explain that “The systemicity score computed in the final step is a key measure of causal inter-relatedness in IPA. The greater the proportion of concepts in a theory that are concatenated, the more the theory’s concepts are causally interrelated (Wallis, 2013). In previous studies across diverse fields in the physical and social sciences, paradigm-changing scientific theories have shown greater systemicity (inter-connectedness among concepts) than earlier, less successful scientific theories (Wallis 2010a).” Returning to the graph comparison, this brings to mind graph connectivity.

Integrative Propositional Analysis map

Talking this week with Bernadette Wright, she added that “Policy research is an applied science. It has the same problem of all applied sciences. It’s not enough to make new discoveries. We also need to apply existing knowledge to real-world problems. Integrative Propositional Analysis and related mapping techniques provide a rigorous way to connect existing studies into a larger pattern. This lets managers quickly see what’s known on a topic. So they can use that information to make a bigger difference for the people they serve.”

I have some questions, such as: Do different people construct the same IPA maps for the same theories? The authors also raise the question of inter-rater reliability in their discussion (page 7).

Cool idea. Wright and Wallis have developed a gamified version of IPA, where people co-create knowledge maps for experiential learning. For more on that and some other insights, go to AEA365 - A Tip-a-Day for Evaluators.