Evidence Soup
How to find, use, and explain evidence.

11 posts categorized "Data4Good: Evidence-based nonprofits"

Monday, 31 July 2017

Resistance to algorithms, evidence for home visits, and problems with wearables.

Kitty with laptop

I'm back, after time away from the keyboard. Yikes! Evidence is facing an uphill battle. Decision makers still resist handing control to others, even when new methods or machines make better predictions. And government agencies continue to, ahem, struggle with making evidence-based policy.  — Tracy Altman


1. Evidence-based home visit program loses funding.
The evidence base has developed over 30+ years: Advocates for home visit programs - where professionals visit at-risk families - cite immediate and long-term benefits for parents and for children. Things like positive health-related behavior, fewer arrests, community ties, lower substance abuse [Long-term Effects of Nurse Home Visitation on Children's Criminal and Antisocial Behavior: 15-Year Follow-up of a Randomized Controlled Trial (JAMA, 1998)]. Or Nobel Laureate-led findings that "Every dollar spent on high-quality, birth-to-five programs for disadvantaged children delivers a 13% per annum return on investment" [Research Summary: The Lifecycle Benefits of an Influential Early Childhood Program (2016)].

The Nurse-Family Partnership (@NFP_nursefamily), a well-known provider of home visit programs, is getting the word out in the New York Times and on NPR.

AEI funnel chart (27 July 2017)

Yet this bipartisan, evidence-based policy is now defunded. @Jyebreck explains that advocates are “staring down a Sept. 30 deadline.... The Maternal, Infant and Early Childhood Home Visiting program, or MIECHV, supports paying for trained counselors or medical professionals” who build long-term relationships with at-risk families.

It’s worth noting that the evidence on childhood programs is often conflated. AEI’s Katharine Stevens and Elizabeth English break it down in their excellent, deep-dive report Does Pre-K Work? They illustrate the dangers of drawing sweeping conclusions about research findings, especially when mixing studies about infants with studies of three- or four-year olds. And home visit advocates emphasize that disadvantage begins in utero and infancy, making a standard pre-K program inherently inadequate. This issue is complex, and Congress’ defunding decision will only hurt efforts to gather evidence about how best to level the playing field for children.

AEI Does Pre-K Work

2. Why do people reject algorithms?
Researchers want to understand our ‘irrational’ responses to algorithmic findings. Why do we resist change, despite evidence that a machine can reliably beat human judgment? Berkeley J. Dietvorst (great name, wasn’t he in Hunger Games?) comments in the MIT Sloan Management Review that “What I find so interesting is that it’s not limited to comparing human and algorithmic judgment; it’s my current method versus a new method, irrelevant of whether that new method is human or technology.”

Job-security concerns might help explain this reluctance. And Dietvorst has studied another cause: We lose trust in an algorithm when we see its imperfections. This hesitation extends to cases where an ‘imperfect’ algorithm remains demonstrably capable of outpredicting us. On the bright side, he found that “people were substantially more willing to use algorithms when they could tweak them, even if just a tiny amount”. Dietvorst is inspired by the work of Robyn Dawes, a pioneering behavioral decision scientist who investigated the Man vs. Machine dilemma. Dawes famously developed a simple model for predicting how students will rank against one another, which significantly outperformed admissions officers. Yet both then and now, humans don’t like to let go of the wheel.
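Dawes called these 'improper linear models': standardize the cues and just add them up. Here's a minimal sketch of the effect, using synthetic data and invented weights (not Dawes' actual study), showing a unit-weighted sum tracking an outcome more reliably than an inconsistent human judge:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical admissions data: three standardized (z-scored) cues,
    # e.g. GPA, test score, essay rating, for 200 applicants.
    X = rng.normal(size=(200, 3))
    true_weights = np.array([0.5, 0.3, 0.2])
    outcome = X @ true_weights + rng.normal(scale=0.8, size=200)  # later class rank

    # Dawes-style unit-weighted model: simply sum the z-scores.
    unit_weighted = X.sum(axis=1)

    # A 'human judge' who weighs the same cues, but inconsistently and noisily.
    judge = X @ (true_weights + rng.normal(scale=0.4, size=3)) \
            + rng.normal(scale=1.5, size=200)

    print("unit-weighted model vs outcome:", np.corrcoef(unit_weighted, outcome)[0, 1])
    print("human judge vs outcome:       ", np.corrcoef(judge, outcome)[0, 1])

The unit-weighted sum typically correlates far better with the outcome, which is the unsettling part: no tuning, no expertise, and it still wins.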

Wearables Graveyard by Aaron Parecki

3. Massive data still does not equal evidence.
For those who doubted the viability of consumer health wearables and the notion of the quantified self, there’s plenty of validation: Jawbone liquidated, Intel dropped out, and Fitbit struggles. People need a compelling reason to wear one (such as fitness coaching, or condition diagnosis and treatment).

Rather than a data stream, we need hard evidence about something actionable: Evidence is “the available body of facts or information indicating whether a belief or proposition is true or valid” (Google: define evidence). To be sure, some consumers enjoy wearing a device that tracks sleep patterns or spots out-of-normal-range values - but that market is proving to be limited.

But Rock Health points to positive developments, too. Some wearables demonstrate specific value: Clinical use cases are emerging, including assistance for the blind.

Photo credit: Kitty on Laptop by Ryan Forsythe, CC BY-SA 2.0 via Wikimedia Commons.
Photo credit: Wearables Graveyard by Aaron Parecki on Flickr.

Tuesday, 09 August 2016

Health innovation, foster teens, NBA, Gwyneth Paltrow.

Foster care youth

1. Behavioral economics → Healthcare innovation.
Jaan Sidorov (@DisMgtCareBlog) writes on the @Health_Affairs blog about roadblocks to healthcare innovation. Behavioral economics can help us truly understand resistance to change, including unconscious bias, so valuable improvements will gain more traction. Sidorov offers concise explanations of hyperbolic discounting, experience weighting, social utility, predictive value, and other relevant economic concepts. He also recommends specific tactics when presenting a technology-based innovation to the C-Suite.

2. Laptops → Foster teen success.
Nobody should have to type their high school essays on their phone. A coalition including Silicon Valley leaders and public sector agencies will ensure all California foster teens can own a laptop computer. Foster Care Counts reports evidence that "providing laptop computers to transition age youth shows measurable improvement in self-esteem and academic performance". KQED's California Report ran a fine story.

For a year, researchers at USC's School of Social Work surveyed 730 foster youth who received laptops, finding that "not only do grades and class attendance improve, but self-esteem and life satisfaction increase, while depression drops precipitously."

3. Analytical meritocracy → Better NBA outcomes.
The Innovation Enterprise Sports Channel explains how the NBA draft is becoming an analytical meritocracy. Predictive models help teams evaluate potential picks, including some they might have overlooked. Example: Andre Roberson, who played very little college ball, was drafted successfully by Oklahoma City based on analytics. It's tricky combining projections for active NBA teams with prospects who may never take the court. One decision aid is ESPN’s Draft Projection model, using Statistical Plus/Minus to predict how someone would perform through season five of a hypothetical NBA career. ESPN designates each player as a Superstar, Starter, Role Player, or Bust, to facilitate risk-reward assessments.
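To make the risk-reward idea concrete, here's a toy comparison in the spirit of those projections - the tier values and probabilities below are invented, not ESPN's:

    # Each prospect gets a probability for four outcome tiers; an expected
    # value makes a boom-or-bust pick comparable to a safe one.
    tier_value = {"Superstar": 10.0, "Starter": 5.0, "Role Player": 2.0, "Bust": 0.0}

    prospects = {
        "boom-or-bust wing": {"Superstar": 0.20, "Starter": 0.20, "Role Player": 0.20, "Bust": 0.40},
        "safe role player":  {"Superstar": 0.02, "Starter": 0.48, "Role Player": 0.45, "Bust": 0.05},
    }

    for name, probs in prospects.items():
        ev = sum(probs[tier] * value for tier, value in tier_value.items())
        print(f"{name}: expected value {ev:.2f}")

Both prospects land near the same expected value; the difference a team actually weighs is the variance around it.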

4. Celebrity culture → Clash with scientific evidence.
Health law and policy professor Timothy Caulfield (@CaulfieldTim) examines the impact of celebrity culture on people's choices of diet and healthcare. His new book asks Is Gwyneth Paltrow Wrong About Everything?: How the Famous Sell Us Elixirs of Health, Beauty & Happiness. Caulfield cites many, many peer-reviewed sources of evidence.

Evidence & Insights Calendar:

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

February 22-23; London, UK. Evidence Europe 2017. How pharma, payers, and patients use real-world evidence to understand and demonstrate drug value and improve care.

Photo credit: Foster Care Counts.

Tuesday, 28 June 2016

Open innovation, the value of pharmaceuticals, and liberal-vs-conservative stalemates.

Evidence from open innovation

1. Open Innovation can up your game.
Open Innovation → Better Evidence. Scientists with an agricultural company tell a fascinating story about open innovation success. Improving Analytics Capabilities Through Crowdsourcing (Sloan Review) describes a years-long effort to tap into expertise outside the organization. Over eight years, Syngenta used open-innovation platforms to develop a dozen data-analytics tools, which ultimately revolutionized the way it breeds soybean plants. "By replacing guesswork with science, we are able to grow more with less."

Many open innovation platforms run contests between individuals (think Kaggle), and some facilitate teams. One of these platforms, InnoCentive, hosts mathematicians, physicists, and computer scientists eager to put their problem-solving skills to the test. There was a learning curve, to be sure (example: divide big problems into smaller pieces). Articulating the research question was challenging, to say the least.

Several of the associated projects could be tackled by people without subject matter expertise; other steps required knowledge of the biological science, complicating the task of finding team members. But eventually Syngenta "harnessed outside talent to come up with a tool that manages the genetic component of the breeding process — figuring out which soybean varieties to cross with one another and which breeding technique will most likely lead to success." The company reports substantial results from this collaboration: The average rate of improvement of its portfolio grew from 0.8 to 2.5 bushels per acre per year.

 

Value frameworks (Context Matters)

 

2. How do you tie drug prices to value?
Systematic Analysis → Better Value for Patients. It's the age-old question: How do you put a dollar value on intangibles - particularly human health and wellbeing? As sophisticated pharmaceuticals succeed in curing more diseases, their prices are climbing. Healthcare groups have developed 'value frameworks' to guide decision-making about these molecules. It's still a touchy subject to weigh the cost of a prescription against potential benefits to a human life.

These frameworks address classic problems, and are useful examples for anyone formalizing the steps of complex decision-making - inside or outside of healthcare. For example, one cancer treatment may be likely to extend a patient's life by 30 to 45 days compared to another, but at much higher cost, or with unacceptable side effects. Value frameworks help people consider these factors.
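The arithmetic underneath most value frameworks is an incremental cost-effectiveness ratio: extra cost divided by extra benefit. A minimal sketch, with invented figures:

    # Incremental cost-effectiveness ratio (ICER) for a new treatment vs.
    # the standard one. All figures are invented for illustration.
    cost_new, cost_standard = 150_000.0, 60_000.0   # $ per course of treatment
    days_gained_new, days_gained_standard = 45, 0   # added survival vs. baseline

    incremental_cost = cost_new - cost_standard
    incremental_life_years = (days_gained_new - days_gained_standard) / 365.0

    icer = incremental_cost / incremental_life_years
    print(f"incremental cost per life-year gained: ${icer:,.0f}")

Real frameworks layer on side effects, quality-of-life adjustments, and affordability thresholds, but this ratio is the starting point.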

@ContextMatters studies processes for drug evaluation and regulatory approval. In Creating a Global Context for Value, they compare the different methods of determining whether patients are getting high value. Their Value Framework Comparison Table highlights key evaluation elements from three value frameworks (ASCO, NCCN, ICER) and three health technology assessments (CADTH, G-BA, NICE).

 

Evidence-based poverty programs

3. Evidence overcomes the liberal-vs-conservative stalemate.
Evidence-based Programs → Lower Poverty. Veterans of the Bloomberg mayoral administration describe a data-driven strategy to reduce poverty in New York. Results for America Senior Fellows Robert Doar and Linda Gibbs share an insider's perspective in "New York City's Turnaround on Poverty: Why poverty in New York – unlike in other major cities – is dropping."

Experimentation was combined with careful attention to which programs succeeded (Paycheck Plus) and which didn't (Family Rewards). A key factor, common to any successful decision analysis effort: When a program didn't produce the intended results, advocates weren't cast aside as failures. Instead, that evidence was blended with the rest to continuously improve. The authors found that "Solid evidence can trump the liberal-versus-conservative stalemate when the welfare of the country’s most vulnerable people is at stake."

Wednesday, 08 June 2016

Grit isn't the answer, plus Scrabble and golf analytics.

Scrabble

1. Poor kids already have grit: Educational Controversy, 2016 edition.
All too often, we run with a sophisticated, research-based idea, oversimplify it, and run it into the ground. 2016 seems to be the year for grit. Jean Rhodes, who heads up the Chronicle of Evidence-Based Mentoring (@UMBmentoring), explains that grit is not a panacea for the problems facing disadvantaged youth. "Grit: The Power of Passion and Perseverance, Professor Angela Duckworth’s new bestseller on the topic, has fueled both enthusiasm for such efforts and debate among those of us who worry that it locates the problem (a lack of grit) and solution (training) in the child. Further, by focusing on exemplars of tenacity and success, the book romanticizes difficult circumstances. The forces of inequality that, for the most part, undermine children’s success are cast as contexts for developing grit. Moreover, when applied to low-income students, such self-regulation may privilege conformity over creative expression and leadership. Thus, it was a pleasure to come across a piece by Stanford doctoral student Ethan Ris on the history and application of the concept." Ris first published his critique in the Journal of Educational Controversy and recently wrote a piece for the Washington Post, The problem with teaching ‘grit’ to poor kids? They already have it.

2. Does Scrabble have its own Billy Beane?
It had to happen: Analytics for Scrabble. But it might not be what you expected. WSJ explains why For World’s Newest Scrabble Stars, SHORT Tops SHORTER.

Wellington Jighere and other players from Nigeria are shaking up the game, using analytics to support a winning strategy favoring five-letter words. Most champions follow a “long word” strategy, making as many seven- and eight-letter plays as possible. But analytics have brought that sacred Scrabble shibboleth into question, "exposing the hidden risks of big words."

Jighere has been called the Rachmaninoff of rack management, often saving good letters for a future play rather than scoring an available bingo. (For a pre-Jighere take on the world of Scrabble, see Word Wars.)

3. Golf may have a Billy Beane, too.
This also had to happen. Mark Broadie (@MarkBroadie) is disrupting golf analytics with his 'strokes gained' system. In his 2014 book, Every Shot Counts, Broadie rips apart assumptions long regarded as sacrosanct - maxims like 'drive for show, putt for dough'. "The long game explains about two-thirds of scoring differences and the short game and putting about one-third. This is true for amateurs as well as pros." To capture and analyze data, Broadie developed the GolfMetrics program. He is the Carson Family Professor of Business at Columbia Business School, and has a PhD in operations research from Stanford. He has presented at the Sloan Sports Analytics Conference.
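The core of strokes gained is one subtraction against a baseline table of expected strokes-to-hole. A minimal sketch, with illustrative baseline numbers rather than Broadie's actual data:

    # Strokes gained by a shot = E[strokes before] - E[strokes after] - 1.
    # Baseline values below are invented for illustration.
    baseline = {
        ("tee", 400): 3.99,      # expected strokes from 400 yards on the tee
        ("fairway", 150): 2.80,  # expected strokes from 150 yards in the fairway
        ("green", 20): 1.87,     # expected strokes from a 20-foot putt
    }

    def strokes_gained(before, after):
        return baseline[before] - baseline[after] - 1

    # A drive leaving 150 yards in the fairway gains strokes on the field:
    print(round(strokes_gained(("tee", 400), ("fairway", 150)), 2))   # 0.19
    # An approach to 20 feet loses a little:
    print(round(strokes_gained(("fairway", 150), ("green", 20)), 2))  # -0.07

Scoring every shot this way is what lets Broadie decompose the long game versus the short game and putting.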

Pros have begun benefiting from golf analytics, including Danny Willett, winner of this year's Masters. He has thanked @15thClub, a new analytics firm, for helping him prep better course strategy. 15th Club provided insight for Augusta’s par-5 holes. As WSJ explained, the numbers show that when players lay up, leaving their ball short of the green to avoid a water hazard, they fare better when doing so as close to the green as possible, rather than the more distant spots where players typically take their third shots.

4. Evidence-based government on the rise.
In the US, "The still small, but growing, field of pay for success made significant strides this week, with Congress readying pay for success legislation and the Obama administration announcing a second round of grants through the Social Innovation Fund (@SIFund)."

5. Man + Machine = Success.
Only Humans Need Apply is a new book by Tom Davenport (@tdav) and @JuliaKirby. Cognitive computing combined with human decision making is what will succeed in the future. @DeloitteBA led a recent Twitter chat: Man-machine: The dichotomy blurs, which included @RajeevRonanki, the lead for their cognitive consulting practice.

Tuesday, 03 May 2016

Bitcoin for learning, helping youth with evidence, and everyday health evidence.

College diploma

1. Bitcoin tech records people's learning.
Ten years from now, what if you could evaluate a job candidate by reviewing their learning ledger, a blockchain-administered record of their learning transactions - from courses they took, books they read, or work projects they completed? And what if you could see their work product (papers etc.) rather than just their transcript and grades? Would that be more relevant and useful than knowing what college degree they had?

This is the idea behind Learning is Earning 2026, a future system that would reward any kind of learning. The EduBlocks Ledger would use the same blockchain technology that runs Bitcoin. Anyone could award these blocks to anyone else. As explained by Marketplace Morning Report, the Institute for the Future is developing the EduBlocks concept.
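For the technically curious, here's a toy hash-chained ledger in the spirit of the EduBlocks idea. Field names and addresses are invented, and a real blockchain adds signatures and consensus on top; this only shows why recorded achievements are hard to alter quietly:

    import hashlib, json, time

    def block_hash(core):
        return hashlib.sha256(json.dumps(core, sort_keys=True).encode()).hexdigest()

    def make_block(prev_hash, awarder, learner, achievement):
        core = {"prev_hash": prev_hash, "timestamp": time.time(),
                "awarder": awarder, "learner": learner, "achievement": achievement}
        return {**core, "hash": block_hash(core)}

    ledger = [make_block("0" * 64, "genesis", "genesis", "ledger created")]
    ledger.append(make_block(ledger[-1]["hash"], "mentor@example.org",
                             "student@example.org", "completed statistics course"))

    # Each entry embeds the previous entry's hash, so tampering with any
    # recorded achievement breaks every later link in the chain.
    for prev, block in zip(ledger, ledger[1:]):
        core = {k: v for k, v in prev.items() if k != "hash"}
        assert block["prev_hash"] == prev["hash"] == block_hash(core)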

 

Market share MIT-Sloan

2. Is market share a valuable metric?
Only in certain cases is market share an important metric for figuring out how to make more profits. Neil T. Bendle and Charan K. Bagga explain in the MIT Sloan Management Review that popular marketing metrics, including market share, are regularly misunderstood and misused.

Well-known research in the 1970s suggested a link between market share and ROI. But now most evidence shows it's a correlational relationship, not causal.
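A quick simulation shows how that happens: give market share and ROI a shared driver (say, product quality) and they correlate even though neither causes the other. All numbers are invented:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000

    quality = rng.normal(size=n)                          # hidden confounder
    share = 0.7 * quality + rng.normal(scale=0.7, size=n)
    roi = 0.7 * quality + rng.normal(scale=0.7, size=n)   # ROI never looks at share

    print("raw correlation(share, ROI):", np.corrcoef(share, roi)[0, 1])

    # Remove the quality signal and the share-ROI association collapses.
    print("after removing quality:",
          np.corrcoef(share - 0.7 * quality, roi - 0.7 * quality)[0, 1])

On this toy model, chasing share for its own sake would do nothing for ROI - which is the kind of misreading Bendle and Bagga warn about.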

 

Adolescent crime

3. Evidence-based ways to close gaps in crime, poverty, education.
The Laura and John Arnold Foundation launched a $15 million Moving the Needle Competition, which will fund state and local governments and nonprofits implementing highly effective ways to address poverty, education, and crime. The competition is recognized as a key evidence-based initiative in White House communications about My Brother’s Keeper, a federal effort to address persistent opportunity gaps.

Around 250 communities have responded to the My Brother’s Keeper Community Challenge with $600+ million in private sector and philanthropic grants, plus $1 billion in low-interest financing. Efforts include registering 90% of Detroit's 4-year-olds in preschool, private-sector “MBK STEM + Entrepreneurship” commitments, and a Summit on Preventing Youth Violence.

Here's hoping these initiatives are evaluated rigorously, and the ones demonstrating evidence of good or promising outcomes are continued.

 

Eddie Izzard

4. Everyday health evidence.
Evidence for Everyday Health Choices is a new series by @UKCochraneCentr, offering quick rundowns of the systematic reviews on a pertinent topic. @SarahChapman30 leads the effort. Nice recent example, inspired by Eddie Izzard: Running marathons Izzard enough: what can help?, a look at the evidence on stretching and other techniques to improve marathon performance and recovery. [Photo credit: Evidence for Everyday Health Choices.]

5. Short Science = Understandable Science.
Short Science allows people to publish summaries of research papers; they're voted on and ranked until the best/most accessible summary has been identified. The goal is to make seminal ideas in science accessible to the people who want to understand them. Anyone can write a summary of any paper in the Short Science database. Thanks to Carl Anderson (@LeapingLlamas).

Tuesday, 05 April 2016

$15 minimum wage, evidence-based HR, and manmade earthquakes.

Fightfor15.org

Photo by Fightfor15.org

1. SPOTLIGHT: Will $15 wages destroy California jobs?
California is moving toward a $15/hour minimum wage (slowly, stepping up through 2023). Will employers be forced to eliminate jobs under the added financial pressure? As with all things economic, it depends who you ask. Lots of numbers have been thrown around during the recent push for higher pay. Fightfor15.org says 6.5 million workers are getting raises in California, and that 2/3 of New Yorkers support a similar increase. But small businesses, restaurants in particular, are concerned they'll have to trim menus and staff - they can charge only so much for a sandwich.

Moody's Analytics economist Adam Ozimek says it's not just about food service or home healthcare. Writing on The Dismal Scientist blog, he notes: "[I]n past work I showed that California has 600,000 manufacturing workers who currently make $15 an hour or less. The massive job losses in manufacturing over the last few decades have shown that it is an intensely globally competitive industry where uncompetitive wages are not sustainable."

It's not all so grim. Ozimek shows that early reports of steep job losses after Seattle's minimum-wage hike have since been revised strongly upward - employment fared better than first reported. However, finding "the right comparison group is getting complicated."


Yellow Map Chance of Earthquake

2. Manmade events sharply increase earthquake risk.
Holy smokes. New USGS maps show north-central Oklahoma at high earthquake risk. The United States Geological Survey now includes potential ground-shaking hazards from both 'human-induced' and natural earthquakes, substantially changing their risk assessment for several areas. Oklahoma recorded 907 earthquakes last year at magnitude 3 or higher. Disposal of industrial wastewater has emerged as a substantial factor.

3. Evidence-based HR redefines leadership roles.
Applying evidence-based principles to talent management can boost strategic impact, but requires a different approach to leadership. The book Transformative HR: How Great Companies Use Evidence-Based Change for Sustainable Advantage (Jossey-Bass) describes practical uses of evidence to improve people management. John Boudreau and Ravin Jesuthasan suggest principles for evidence-based change, including logic-driven analytics. For instance, establishing appropriate metrics for each sphere of your business, rather than blanket adoption of measures like employee engagement and turnover.

4. Why we're not better at investing.
Gary Belsky does a great job of explaining why we think we're better investors than we are. By now our decision biases have been well-documented by behavioral economists. Plus we really hate to lose - yet we're overconfident, somehow thinking we can compete with Warren Buffett.

Tuesday, 29 March 2016

Rapid is the new black, how to ask for money, and should research articles be free?

Digital Health Network

1. #rapidisthenewblack

The need for speed is paramount, so it's crucial that we test ideas and synthesize evidence quickly without losing necessary rigor. Examples of people working hard to get it right:

  • The Digital Health Breakthrough Network is a very cool idea, supported by an A-list team. They (@AskDHBN) seek New York City-based startups who want to test technology in rigorous pilot studies. The goal is rapid validation of early-stage startups with real end users. Apply here.
  • The UK's fantastic Alliance for Useful Evidence (@A4UEvidence) asks Rapid Evidence Assessments: A bright idea or a false dawn? "Research synthesis will be at the heart of the government’s new What Works centres" - equally true in the US. The idea is "seductive: the rigour of a systematic review, but one that is cheaper and quicker to complete." Much depends on whether the review maps easily onto an existing field of study.
  • Jon Brassey of the Trip database is exploring methods for rapid reviews of health evidence. See Rapid-Reviews.info or @rapidreviews_i.
  • Miles McNall and Pennie G. Foster-Fishman of Michigan State (ouch, still can't get over that bracket-busting March Madness loss) present methods and case studies for rapid evaluations and assessments. In the American Journal of Evaluation, they caution that the central issue is balancing speed and trustworthiness.

2. The science of asking for donations: Unit asking method.
How much would you give to help one person in need? How much would you give to help 20 people? This is the concept behind the unit asking method, a way to make philanthropic fund-raising more successful.

3. Should all research papers be free? 
Good stuff from the New York Times on the conflict between scholarly journal paywalls and Sci-Hub.

4. Now your spreadsheet can tell you what's going on.
Savvy generates a narrative for business intelligence charts in Qlik or Excel.

Monday, 14 December 2015

'Evidence-based' is a thing. It was a very good year.

2015 was kind to the 'evidence-based' movement. Leaders in important sectors - ranging from healthcare to education policy - are adopting standardized, rigorous methods for data gathering, analytics, and decision making. Evaluation of interventions will never be the same.

With so much data available, it's a non-stop effort to pinpoint which sources possess the validity, value, and power to identify, describe, or predict transformational changes to important outcomes. But this is the only path to sustaining executives' confidence in evidence-based methods.

Here are a few examples of evidence-based game-changers, followed by a brief summary of challenges for 2016.

What works: What Works Cities is using data and evidence to improve results for city residents. The Laura and John Arnold Foundation is expanding funding for low-cost, randomized controlled trials (RCTs) - part of its effort to expand the evidence base for “what works” in U.S. social spending.

Evidence-based HR: KPMG consulting practice leaders say "HR isn’t soft science, it’s about hard numbers, big data, evidence."

Comparative effectiveness research: Evidence-based medicine continues to thrive. Despite some challenges with over-generalizing the patient populations, CER provides great examples of systematic evidence synthesis. This AHRQ report illustrates a process for transparently identifying research questions and reviewing findings, supported by panels of experts.

Youth mentoring: Evidence-based programs are connecting research findings with practices and standards for mentoring distinct youth populations (such as children with incarcerated parents). Nothing could be more important. #MentoringSummit2016

Nonprofit management: The UK-based Alliance for Useful Evidence (@A4UEvidence) is sponsoring The Science of Using Science Evidence: A systematic review, policy report, and conference to explore what approaches best enable research use in decision-making for policy and practice. 

Education: The U.S. House passed the Every Student Succeeds Act, outlining provisions for evidence collection, analysis, and use in education policy. The act is intended to improve outcomes by shifting $2 billion in annual funding toward evidence-based solutions.

Issues for 2016.

Red tape. Explicitly recognizing tiers of acceptable evidence, and how they're collected, is an essential part of evidence-based decision making. But with standardizing also comes bureaucracy, particularly for government programs. The U.S. Social Innovation Fund raises awareness for rigorous social program evidence - but runs the risk of slowing progress with exhaustive recognition of various sanctioned study designs (we're at 72 and counting).

Meta-evidence. We'll need lots more evidence about the evidence, to answer questions like: Which forms of evidence are most valuable, useful, and reliable - and which ones are actually applied to important decisions? When should we standardize decision making, and when should we allow a more fluid process?

Tuesday, 17 November 2015

ROI from evidence-based government, milking data for cows, and flu shot benefits diminishing.

This week's 5 links on evidence-based decision making.

1. Evidence standards → Knowing what works → Pay for success
Susan Urahn says we've reached a Tipping Point on Evidence-Based Policymaking. She explains in @Governing that 24 US governments have directed $152M to programs with an estimated $521M ROI: "an innovative and rigorous approach to policymaking: Create an inventory of currently funded programs; review which ones work based on research; use a customized benefit-cost model to compare programs based on their return on investment; and use the results to inform budget and policy decisions."
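The benefit-cost step is easy to sketch. Program names and figures below are invented, not from the actual state models:

    # Rank funded programs by benefit-cost ratio to inform budget decisions.
    programs = [
        {"name": "home visiting",    "cost": 2.0, "benefit": 6.2},   # $M, invented
        {"name": "job training",     "cost": 3.5, "benefit": 4.9},
        {"name": "reentry services", "cost": 1.2, "benefit": 4.6},
    ]

    for p in sorted(programs, key=lambda p: p["benefit"] / p["cost"], reverse=True):
        print(f"{p['name']}: ${p['benefit']:.1f}M return on ${p['cost']:.1f}M "
              f"(ratio {p['benefit'] / p['cost']:.1f})")

The hard part isn't the division; it's the program inventory and the research review that produce defensible benefit estimates in the first place.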

2. Sensors → Analytics → Farming profits
Precision dairy farming uses RFID tags, sensors, and analytics to track the health of cows. Brian T. Horowitz (@bthorowitz) writes on TechCrunch about how farmers are milking big data for insight. Literally. Thanks to @ShellySwanback.

3. Public acceptance → Annual flu shots → Weaker response?
Yikes. Now that flu shot programs are gaining acceptance, there's preliminary evidence suggesting that repeated annual shots can gradually reduce their effectiveness under some circumstances. Scientists at the Marshfield Clinic Research Foundation recently reported that "children who had been vaccinated annually over a number of years were more likely to contract the flu than kids who were only vaccinated in the season in which they were studied." Helen Branswell explains on STAT.

4. PCSK9 → Cholesterol control → Premium increases
Ezekiel J. Emanuel says in a New York Times Op-Ed I Am Paying for Your Expensive Medicine. PCSK9 inhibitors newly approved by the US FDA can effectively lower bad cholesterol, though data aren't definitive on whether this actually reduces heart attacks, strokes, and deaths from heart disease. This new drug category comes at a high cost. Based on projected usage levels, some analysts predict insurance premiums could rise by more than $100 for everyone in a given plan.

5. Opportunistic experiments → Efficient evidence → Informed family policy
New guidance details how researchers and program administrators can recognize opportunities for experiments and carry them out. This allows people to discover the effects of already-planned initiatives, rather than designing interventions specifically for research studies. Advancing Evidence-Based Decision Making: A Toolkit on Recognizing and Conducting Opportunistic Experiments in the Family Self-Sufficiency and Stability Policy Area.

Wednesday, 21 October 2015

5 practical ways to build an evidence-based social program.

Notes from my recent presentation on how social programs can become evidence-based - in our lifetime. Get the slides: How Can Social Programs Become Evidence-Based? 5 Practical Steps. #data4good

Highlights: Recent developments in evidence-based decision making in the nonprofit/social sector. Practical ways to discover and exchange evidence-based insights. References, resources, and links to organizations with innovative programs.

Social Innovation Fund Evidence Evaluation

Data-Driven is No Longer Optional

Whether you're the funder or the funded, data-driven management is now mandatory. Evaluations and decisions must incorporate rigorous methods, and evidence review is becoming standardized. Many current concepts are modeled after evidence-based medicine, where research-based findings are slotted into categories depending on their quality and generalizability.

SIF: Simple or Bewildering? The Social Innovation Fund (US) recognizes three levels of evidence: preliminary, moderate, and strong. Efforts are being made to standardize evaluation, but they're recognizing 72 evaluation designs (!).

What is an evidence-based decision? There's a long answer and a short answer. The short answer is it's a decision reflecting current, best evidence: Internal and external sources for findings; high-quality methods of data collection and analysis; and a feedback loop to bring in new evidence.

On one end of the spectrum, evidence-based decisions bring needed rigor to processes and programs with questionable outcomes. At the other end, we risk creating a cookie-cutter, rubber-stamp approach that sustains bureaucracy and sacrifices innovation.

What's a 'good' decision? A 'good' decision should follow a 'good' process: Transparent and repeatable. This doesn't necessarily guarantee a good result - one must judge the quality of a decision process separately from its outcomes. That said, when a decision process continues to deliver suboptimal results, adjustments are needed.

Where does the evidence come from? Many organizations have relied on gathering their own evidence, but are now overwhelmed by requirements to support decision processes with data. Marketplaces for evidence are emerging, as the Social Innovation Research Center's Patrick Lester recently explained. There's a supply and a demand for rigorous evidence on the performance of social programs. PepperSlice is a marketplace where nonprofits can share, buy, and sell evidence-based insights using a standard format.

Avoid the GPOC (Giant PDF of Crap). Standardized evidence is already happening, but standardized dissemination of findings - communicating results - is still mostly a free-for-all. Traditional reports, articles, and papers, combined with PowerPoints and other free-form presentations, make it difficult to exchange evidence systematically and quickly.

Practical ways to get there. So how can a nonprofit or publicly financed social program compete?

  1. Focus on what deciders need. Before launching efforts to gather evidence, examine how decisions are being made. What evidence do they want? Social Impact Bonds, a/k/a Pay for Success Bonds, are a perfect example because they specify desired outcomes and explicit success measures.
  2. Use insider vocabulary. Recognize and follow the terminology for desired categories of evidence. Be explicit about how data were collected (randomized trial, quasi-experimental design, etc.) and how they were analyzed (statistics, complex modeling, ...).
  3. Live better through OPE. Whenever possible, use Other People's Evidence. Get research findings from peer organizations, academia, NGOs, and government agencies. Translate their evidence to your program and avoid rolling your own.
  4. Manage and exchange. Once valuable insights are discovered, be sure to manage and reuse them. Trade/exchange them with other organizations.
  5. Share systematically. Follow a method for exchanging insights, reflecting key evidence categories. Use a common vocabulary and a common format.

Resources and References

Don’t end the Social Innovation Fund (yet). Angela Rachidi, American Enterprise Institute (@AngelaRachidi).

Why Evidence-Based Policymaking Is Just the Beginning. Susan Urahn, Pew Charitable Trusts.

Alliance for Useful Evidence (UK). How do charities use research evidence? Seeking case studies (@A4UEvidence). http://www.surveygizmo.com/s3/2226076/bab129060657

Social Innovation Fund: Early Results Are Promising. Patrick Lester, Social Innovation Research Center, 30-June-2015. "One of its primary missions is to build evidence of what works in three areas: economic opportunity, health, and youth development." Also, SIF "could nurture a supply/demand evidence marketplace when grantees need to demonstrate success" (page 27).

What Works Cities supports US cities that are using evidence to improve results for their residents (@WhatWorksCities).

Urban Institute Pay for Success Initiative (@UrbanInstitute). "Once strategic planning is complete, jurisdictions should follow a five step process that uses cost-benefit analysis to price the transaction and a randomized control trial to evaluate impact." Ultimately, evidence will support standardized pricing and defined program models.

Results for America works to drive resources to results-driven solutions that improve the lives of young people & their families (@Results4America).

How to Evaluate Evidence: Evaluation Guidance for Social Innovation Fund.

Evidence Exchange within the US federal network. Some formats are still traditional papers, free-form, big PDFs.

Social Innovation Fund evidence categories: Preliminary, moderate, strong. "This framework is very similar to those used by other federal evidence-based programs such as the Investing in Innovation (i3) program at the Department of Education. Preliminary evidence means the model has evidence based on a reasonable hypothesis and supported by credible research findings. Examples of research that meet the standards include: 1) outcome studies that track participants through a program and measure participants’ responses at the end of the program.... Moderate evidence means... designs of which can support causal conclusions (i.e., studies with high internal validity)... or studies that only support moderate causal conclusions but have broad general applicability.... Strong evidence means... designs of which can support causal conclusions (i.e., studies with high internal validity)" and generalizability (i.e., studies with high external validity).
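Read as a decision rule, that framework might be sketched like this - a rough simplification of the actual SIF guidance, with invented field names:

    # Map study characteristics to SIF-style evidence tiers (approximate).
    def evidence_tier(causal_design, high_external_validity, credible_findings):
        if causal_design and high_external_validity:
            return "strong"       # high internal AND external validity
        if causal_design or (credible_findings and high_external_validity):
            return "moderate"     # causal conclusions, or broad applicability
        if credible_findings:
            return "preliminary"  # reasonable hypothesis, credible research
        return "not yet evidence-based"

    print(evidence_tier(causal_design=True, high_external_validity=True,
                        credible_findings=True))    # strong
    print(evidence_tier(causal_design=False, high_external_validity=False,
                        credible_findings=True))    # preliminary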