Evidence Soup
How to find, use, and explain evidence.


Monday, 14 May 2018

Building a repeatable, evidence-based decision process.

Decision-making matrix by Tomasz Tunguz
How we decide is no less important than the evidence we use to decide. People are recognizing this and creating innovative ways to blend what, why, and how into decision processes.

1. Quality decision process → Predictable outcomes
After the Golden Rule, perhaps the most important management lesson is learning to evaluate the quality of a decision process separately from the outcome. Tomasz Tunguz (@ttunguz) reminds us in a great post about Annie Duke, a professional poker player: “Don’t be so hard on yourself when things go badly and don’t be so proud of yourself when they go well.... The wisdom in Duke’s advice is to focus on the process, because eventually the right process will lead to great outcomes.”

Example of a misguided "I'm feeling lucky" response: running a crummy oil company and thinking you're a genius, even though profits arise from unexpected $80/barrel oil prices.
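Duke's process-versus-outcome point lends itself to a quick simulation. Here's a minimal sketch (all probabilities and payoffs are invented for illustration): a sound process with positive expected value can still lose over a handful of decisions, and a lousy process can get lucky; only over many repetitions does process quality show through.

```python
import random

def simulate(win_prob, payoff, loss, n_decisions):
    """Total outcome of n_decisions made with a fixed decision process."""
    return sum(payoff if random.random() < win_prob else -loss
               for _ in range(n_decisions))

random.seed(42)
# Good process: 60% chance of winning 1 unit, else lose 1 (positive EV).
# Bad process:  40% chance of winning 1 unit, else lose 1 (negative EV).
for n in (10, 100, 10_000):
    good = simulate(0.60, 1, 1, n)
    bad = simulate(0.40, 1, 1, n)
    print(f"n={n:>6}: good process {good:+}, bad process {bad:+}")
```

Run it a few times: over 10 decisions the bad process sometimes comes out ahead, but over 10,000 it essentially never does. Which is Duke's point: judge the process, not any single outcome.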

2. Reinvent the meeting → Better decisions
Step back and examine your meeting style. Are you giving all the evidence a voice, or relying on the same old presentation theater? Kahlil Smith writes in strategy+business (@stratandbiz) “If I catch myself agreeing with everything a dominant, charismatic person is saying in a meeting, then I will privately ask a third person (not the presenter or the loudest person) to repeat the information, shortly after the meeting, to see if I still agree.” Other techniques include submitting ideas anonymously, considering multiple solutions and scenarios, and a decision pre-mortem with a diverse group of thinkers. More in Why Our Brains Fall for False Expertise, and How to Stop It.

3. How to Teach and Apply Evidence-Based Management. The Center for Evidence-Based Management (CEBMa) Annual Meeting is scheduled for August 9 in Chicago. There's no fee to attend.

Tuesday, 13 March 2018

Biased instructor response → Students shut out

Photo credit: Benjamin Dada on Unsplash.

Definitely not awesome. Stanford’s Center for Education Policy Analysis reports Bias in Online Classes: Evidence from a Field Experiment. “We find that instructors are 94% more likely to respond to forum posts by white male students. In contrast, we do not find general evidence of biases in student responses…. We discuss the implications of our findings for our understanding of social identity dynamics in classrooms and the design of equitable online learning environments.”

“Genius is evenly distributed by zip code. Opportunity and access are not.” -Mitch Kapor

One simple solution – sometimes deployed for decision debiasing – is to make interactions anonymous. However, applying nudge concepts, a “more sophisticated approach would be to structure online environments that guide instructors to engage with students in more equitable ways (e.g., dashboards that provide real-time feedback on the characteristics of their course engagement).”
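That dashboard idea can start very small. Here's a minimal sketch (the post data structure and group labels are hypothetical, not from the Stanford study): compute an instructor's response rate by student group and surface the gap.

```python
from collections import defaultdict

def response_rates(posts):
    """posts: iterable of (student_group, instructor_responded) pairs."""
    totals, responses = defaultdict(int), defaultdict(int)
    for group, responded in posts:
        totals[group] += 1
        responses[group] += int(responded)
    return {g: responses[g] / totals[g] for g in totals}

# Toy forum data for one instructor's course.
posts = [("white_male", True), ("white_male", True), ("white_male", False),
         ("other", False), ("other", False), ("other", True)]
rates = response_rates(posts)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.0%}")  # a real-time nudge could fire when gap exceeds a threshold
```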

Tuesday, 06 March 2018

Biased evidence skews poverty policy.

Decision bias: food-desert map

In Biased Ways We Look at Poverty, Adam Ozimek reviews new evidence suggesting that food deserts aren’t the problem, behavior is. His Modeled Behavior (Forbes) piece asks why the food desert theory got so much play, claiming “I would argue it reflects liberal bias when it comes to understanding poverty.”

So it seems this poverty-diet debate is about linking cause with effect - always dangerous, bias-prone territory. Citizen data scientists, academics, and everyone in between risk mapping objective data (food store availability vs. income) and then subjectively attributing a cause for poor eating habits.

The study shows very convincingly that the difference in healthy eating is about behavior and demand, not supply.

Ozimek looks at the study The Geography of Poverty and Nutrition: Food Deserts and Food Choices Across the United States, published by the National Bureau of Economic Research. The authors found that differences in healthy eating aren’t explained by prices, concluding that “after excluding fresh produce, healthy foods are actually about eight percent less expensive than unhealthy foods.” Also, people who moved from food deserts to locations with better options continued to make similar dietary choices.

Food for thought, indeed. Rather than following behavioral explanations, Ozimek believes liberal thinking supported the food desert concept “because supply-side differences are more complimentary to poor people, and liberals are biased towards theories of poverty that are complimentary to those in poverty.” Meanwhile, conservatives “are biased towards viewing the behavioral and cultural factors that cause poverty as something that we can’t do anything about.”

Tuesday, 09 August 2016

Health innovation, foster teens, NBA, Gwyneth Paltrow.


1. Behavioral economics → Healthcare innovation.
Jaan Sidorov (@DisMgtCareBlog) writes on the @Health_Affairs blog about roadblocks to healthcare innovation. Behavioral economics can help us truly understand resistance to change, including unconscious bias, so valuable improvements gain more traction. Sidorov offers concise explanations of hyperbolic discounting, experience weighting, social utility, predictive value, and other relevant economic concepts. He also recommends specific tactics for presenting a technology-based innovation to the C-Suite.

2. Laptops → Foster teen success.
Nobody should have to type their high school essays on their phone. A coalition including Silicon Valley leaders and public sector agencies will ensure all California foster teens can own a laptop computer. Foster Care Counts reports evidence that "providing laptop computers to transition age youth shows measurable improvement in self-esteem and academic performance". KQED's California Report ran a fine story.

For a year, researchers at USC's School of Social Work surveyed 730 foster youth who received laptops, finding that "not only do grades and class attendance improve, but self-esteem and life satisfaction increase, while depression drops precipitously."

3. Analytical meritocracy → Better NBA outcomes.
The Innovation Enterprise Sports Channel explains how the NBA draft is becoming an analytical meritocracy. Predictive models help teams evaluate potential picks, including some they might otherwise have overlooked. Example: Andre Roberson, who played very little college ball, was drafted successfully by Oklahoma City based on analytics. It's tricky combining projections for active NBA players with prospects who may never take the court. One decision aid is ESPN's Draft Projection model, which uses Statistical Plus/Minus to predict how someone would perform through season five of a hypothetical NBA career. ESPN designates each player as a Superstar, Starter, Role Player, or Bust to facilitate risk-reward assessments.
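ESPN hasn't published its cutoffs, so here's a minimal sketch of the tiering step only (the thresholds and projected values below are invented for illustration, not ESPN's actual model):

```python
def draft_tier(projected_spm: float) -> str:
    """Map a year-five projected Statistical Plus/Minus to a risk-reward tier.
    Thresholds are hypothetical stand-ins, not ESPN's published values."""
    if projected_spm >= 3.0:
        return "Superstar"
    if projected_spm >= 1.0:
        return "Starter"
    if projected_spm >= -1.0:
        return "Role Player"
    return "Bust"

for name, spm in [("Prospect A", 3.4), ("Prospect B", 1.2), ("Prospect C", -2.1)]:
    print(name, "->", draft_tier(spm))
```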

4. Celebrity culture → Clash with scientific evidence.
Health law and policy professor Timothy Caulfield (@CaulfieldTim) examines the impact of celebrity culture on people's choices of diet and healthcare. His new book asks Is Gwyneth Paltrow Wrong About Everything?: How the Famous Sell Us Elixirs of Health, Beauty & Happiness. Caulfield cites many, many peer-reviewed sources of evidence.

Evidence & Insights Calendar:

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

February 22-23; London, UK. Evidence Europe 2017. How pharma, payers, and patients use real-world evidence to understand and demonstrate drug value and improve care.

Photo credit: Foster Care Counts.

Tuesday, 02 August 2016

Business coaching, manipulating memory for market research, and female VCs.


1. Systematic review: Does business coaching make a difference?
In PLOSOne, Grover and Furnham present findings of their systematic review of coaching impacts within organizations. They found glimmers of hope for positive results from coaching, but also spotted numerous holes in research designs and data quality.

Over the years, outcome measures have included job satisfaction, performance, self-awareness, anxiety, resilience, hope, autonomy, and goal attainment. Some have measured ROI, although this one seems particularly subjective. In terms of organizational impacts, researchers have measured transformational leadership and performance as rated by others. This systematic review included only professional coaches, whether internal or external to the organization. Thanks @Rob_Briner and @IOPractitioners.

2. Memory bias pollutes market research.
David Paull of Dialsmith hosted a series about how flawed recall and memory bias affect market research. (Thanks to @kristinluck.)

All data is not necessarily good data. “We were consistently seeing a 13–20% misattribution rate on surveys due in large part to recall problems. Resultantly, you get this chaos in your data and have to wonder what you can trust.... Rather than just trying to mitigate memory bias, can we actually use it to our advantage to offset issues with our brands?”

The ethics of manipulating memory. “We can actually affect people’s nutrition and the types of foods they prefer eating.... But should we deliberately plant memories in the minds of people so they can live healthier or happier lives, or should we be banning the use of these techniques?”

Mitigating researchers' memory bias. “We’ve been talking about memory biases for respondents, but we, as researchers, are also very prone to memory biases.... There’s a huge opportunity in qual research to apply an impartial technique that can mitigate (researcher) biases too....[I]n the next few years, it’s going to be absolutely required that anytime you do something that is qualitative in nature that the analysis is not totally reliant on humans.”

3. Female VC → No gender gap for startup funding.
New evidence suggests female entrepreneurs should choose venture capital firms with female partners (SF Business Times). Michigan's Sahil Raina analyzed data to compare the gender gap in successful exits from VC financing between two sets of startups: those initially financed by VCs with only male general partners (GPs), and those initially financed by VCs that include female GPs. “I find a large performance gender gap among startups financed by VCs with only male GPs, but no such gap among startups financed by VCs that include female GPs.”

4. Sharing evidence about student outcomes.
Results for America is launching an Evidence in Education Lab to help states, school districts, and individual schools build and use evidence of 'what works' to improve student outcomes. A handful of states and districts will work closely with RFA to tackle specific data challenges.

Background: The bipartisan Every Student Succeeds Act (ESSA) became law in December 2015. ESSA requires, allows, and encourages the use of evidence-based approaches that can help improve student outcomes. Results for America estimates that ESSA's evidence provisions could help shift more than $2B US of federal education funds in each of the next four years toward evidence-based, results-driven solutions.

Evidence & Insights Calendar:

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 26 July 2016

Evidence relativism, innovation as a process, and decision analysis pioneer.


1. Panning for gold in the evidence stream.
Patrick Lester introduces his new SSIR article by saying "With evidence-based policy, we need to acknowledge that some evidence is more valid than others. Pretending all evidence is equal will only preserve the status quo." In Defining Evidence Down, the director of the Social Innovation Research Center responds to analysts skeptical of evidence hierarchies developed to steer funding toward programs that fit the "what works" concept.

Are levels valid? Hierarchies recognize different levels of evidence according to their rigor and certainty. These rankings are well established in healthcare, and are becoming the standard for evidence evaluation within the Department of Education and other US government agencies. Critics of this prevailing thinking (Gopal & Schorr, Friends of Evidence) want to ensure decision-makers embrace an inclusive definition of evidence that values qualitative research, case studies, insights from experience, and professional judgment. Lester contends that "Unfortunately, to reject evidence hierarchies is to promote a form of evidence relativism, where everyone is entitled to his or her own views about what constitutes good evidence in his or her own local or individualized context."

Ideology vs. evidence. "By resisting the notion that some evidence is more valid than others, they are defining evidence down. Such relativism would risk a return to the past, where change has too often been driven by fads, ideology, and politics, and where entrenched interests have often preserved the status quo." Other highlights: "...supporting a broad definition of evidence is not the same thing as saying that all evidence is equally valid." And "...randomized evaluations are not the only rigorous way to examine systems-level change. Researchers can often use quasi-experimental evaluations to examine policy changes...."
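A hierarchy like the ones Lester defends is easy to make concrete. Here's a minimal sketch, assuming a typical healthcare-style ordering (agencies differ on the exact levels):

```python
# Higher number = more rigorous design. A typical ordering; agencies vary.
EVIDENCE_LEVELS = {
    "expert opinion": 1,
    "case study": 2,
    "observational study": 3,
    "quasi-experimental evaluation": 4,
    "randomized controlled trial": 5,
    "systematic review of RCTs": 6,
}

def stronger(design_a: str, design_b: str) -> str:
    """Return whichever study design sits higher on the hierarchy."""
    return max(design_a, design_b, key=EVIDENCE_LEVELS.__getitem__)

print(stronger("case study", "quasi-experimental evaluation"))
```

Note that the hierarchy ranks rigor; as Lester says, it doesn't declare the lower rungs worthless.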

2. Can innovation be systematic?
Everyone wants innovation nowadays, but how do you make it happen? @HighVizAbility reviews a method called Systematic Inventive Thinking (SIT), an approach to creativity, innovation, and problem solving. The idea is to treat innovation as a repeatable process rather than relying on random flashes of inspiration. Advocates say SIT doesn't replace unbridled creativity, but instead complements it.

3. Remembering decision analysis pioneer Howard Raiffa.
Howard Raiffa, co-founder of the Harvard Kennedy School of Government and decision analysis pioneer, passed away recently. He was also a Bayesian decision theorist and well-known author on negotiation strategies. Raiffa considered negotiation analysis an opportunity for both sides to get value, describing it as The Science and Art of Collaborative Decision Making.

4. Journal impact factor redux?
In the wake of news that Thomson Reuters sold its formula, Stat says changes may finally be coming to the "hated" journal impact factor. Ivan Oransky (@ivanoransky) and Adam Marcus (@armarcus) explain that some evidence suggests science articles don't receive the high citation counts the IF supposedly predicts. The American Society for Microbiology has announced that it will abandon the metric completely. Meanwhile, top editors from Nature, a journal that has long taken pride in its IF, have coauthored a paper widely seen as critical of the factor.

Photo credit: Poke of Gold by Mike Beauregard

Tuesday, 05 July 2016

Brain training isn't smart, physician peer pressure, and #AskforEvidence.


1. Spending $ on brain training isn't so smart.
It seems impossible to listen to NPR without hearing from their sponsor, Lumosity, the brain-training company. The target demo is spot on: NPR will be the first to tell you its listeners are the "nation's best and brightest". And bright people don't want to slow down. Alas, spending hard-earned money on brain training isn't looking like a smart investment. New evidence seems to confirm suspicions that this $1 billion industry is built on hope, sampling bias, and the placebo effect. Ars Technica says researchers have concluded that earlier, mildly positive "findings suggest that recruitment methods used in past studies created self-selected groups of participants who believed the training would improve cognition and thus were susceptible to the placebo effect." The study, Placebo Effects in Cognitive Training, was published in the Proceedings of the National Academy of Sciences.

It's not a new theme: In 2014, 70 cognitive scientists signed a statement saying "The strong consensus of this group is that the scientific literature does not support claims that the use of software-based 'brain games' alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease."
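The self-selection mechanism the PNAS authors describe is easy to simulate. Here's a minimal sketch (all magnitudes are invented): if recruitment ads attract believers, and believers get a placebo bump on the post-test, a "training effect" appears even though training changes nothing.

```python
import random

def post_test_score(expects_benefit: bool) -> float:
    """True cognitive ability is unchanged by training; believers get a
    placebo bump on the post-test. All magnitudes are illustrative."""
    baseline = random.gauss(100, 10)
    placebo_bump = 5 if expects_benefit else 0
    return baseline + placebo_bump

random.seed(1)
# Ad-recruited sample: mostly people who expect brain training to work.
ad_sample = [post_test_score(random.random() < 0.9) for _ in range(1000)]
# Neutral recruitment: expectations are mixed.
neutral_sample = [post_test_score(random.random() < 0.5) for _ in range(1000)]
print(sum(ad_sample) / 1000 - sum(neutral_sample) / 1000)  # ~2 points of pure placebo
```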



2. Ioannidis speaks out on usefulness of research.
After famously claiming that most published research findings are false, John Ioannidis now tells us Why Most Clinical Research Is Not Useful (PLOS Medicine). So, what are the key features of 'useful' research? The problem needs to be important enough to fix. Prior evidence must be evaluated to place the problem into context. Plus, we should expect pragmatism, patient-centeredness, monetary value, and transparency.



3. To nudge physicians, compare them to peers.
Doctors are overwhelmed with alerts and guidance. So how do you intervene when a physician prescribes antibiotics for a virus, despite boatloads of evidence showing they're ineffective? Comparing a doc's records to peers is one promising strategy. Laura Landro recaps research by Jeffrey Linder (Brigham and Women's, Harvard): "Peer comparison helped reduce prescriptions that weren’t warranted from 20% to 4% as doctors got monthly individual feedback about their own prescribing habits for 18 months.

"Doctors with the lower rates were told they were top performers, while the rest were pointedly told they weren’t, in an email that included the number and proportion of antibiotic prescriptions they wrote compared with the top performers." Linder says “You can imagine a bunch of doctors at Harvard being told ‘You aren’t a top performer.’ We expected and got a lot of pushback, but it was the most effective intervention.” Perhaps this same approach would work outside the medical field.

4. Sports analytics taxonomy.
INFORMS is a professional society focused on Operations Research and Management Science. The June issue of their ORMS Today magazine presents v1.0 of a sports analytics taxonomy (page 40). This work, by Gary Cokins et al., demonstrates how classification techniques can be applied to better understand sports analytics. Naturally this includes analytics for players and managers in the major leagues. But it also includes individual sports, amateur sports, franchise management, and venue management.

5. Who writes the Internet, anyway? #AskforEvidence
Ask for Evidence is a public campaign that helps people request for themselves the evidence behind news stories, marketing claims, and policies. Sponsored by @senseaboutsci, the campaign has new animations on YouTube, Twitter, and Facebook. Definitely worth a like or a retweet.

Calendar:
September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 21 June 2016

Free beer! and the "Science of X".


1. Free beer for a year for anyone who can work perfume, velvety voice, and 'Q1 revenue goals were met' into an appropriate C-Suite presentation.
Prezi is a very nice tool enabling you to structure a visual story, without forcing a linear, slide-by-slide presentation format. The best part is you can center an entire talk around one graphic or model, and then dive into details depending on audience response. (Learn more in our writeup on How to Present Like a Boss.)

Now there's a new marketing campaign, the Science of Presentations. Prezi made a darn nice web page. And the ebook offers several useful insights into how to craft and deliver a memorable presentation (e.g., enough with the bullet points already).

But in their pursuit of click-throughs, they've gone too far. It's tempting to claim you're following the "Science of X". To some extent, Prezi provides citations to support its recommendations: The ebook links to a few studies on audience response and so forth. But that's not a "science" - the citations don't always connect with what Prezi is suggesting to business professionals. Example: "Numerous studies have found that metaphors and descriptive words or phrases — things like 'perfume' and 'she had a velvety voice' - trigger the sensory cortex.... On the other hand, when presented with nondescriptive information — for example, 'The marketing team reached all of its revenue goals in Q1' — the only parts of our brain that are activated are the ones responsible for understanding language. Instead of experiencing the content with which we are being presented, we are simply processing it."

Perhaps in this case "simply processing" the good news is enough experience for a busy executive. But our free beer offer still stands.

2. How should medical guidelines be communicated to patients?

And now for the 'Science of Explaining Guidelines'. It's hard enough to get healthcare professionals to agree on a medical guideline - and then follow it. But it's also hard to decide whether/how those recommendations should be communicated to patients. Many of the specifics are intended for providers' consumption, to improve their practice of medicine. Although it's essential that patients understand relevant evidence, translating a set of recommendations into lay terms is quite problematic.

Groups publish medical guidelines to capture evidence-based recommendations for addressing a particular disease. Sometimes these are widely accepted - and other times not. The poster-child example of breast cancer screening illustrates why patients, and not just providers, must be able to understand guidelines. Implementation Science recently published the first systematic review of methods for disseminating guidelines to patients.

Not surprisingly, the study found weak evidence of methods that are consistently feasible. "Key factors of success were a dissemination plan, written at the start of the recommendation development process, involvement of patients in this development process, and the use of a combination of traditional and innovative dissemination tools." (Schipper et al.)

3. Telling a story with data.
In the Stanford Social Innovation Review (SSIR), @JakePorway explains three things great data storytellers do differently [possible paywall]. Jake is with @DataKind, "harnessing the power of data science in service of humanity".

 

Photo credit: Christian Hornick on Flickr.

Tuesday, 14 June 2016

Mistakes we make, Evidence Index, and Naturals vs Strivers.


1. Mistakes we make when sharing insights.
We've all done this: Hurried to share valuable new information and neglected to frame it meaningfully, slowing its impact and possibly alienating our audience. Michael Schrage describes a perfect example, taken from The Only Rule Is It Has to Work, a fantastic book about analytics innovation.

The cool thing about the book is that it's a Moneyball for the rest of us. Ben Lindbergh and Sam Miller had the rare opportunity to experiment and apply statistics to improve the performance of the Sonoma Stompers, a minor league baseball team in California wine country. But they had to do it with few resources, and learn leadership skills along the way.

The biggest lesson they learned was the importance of making their findings easy to understand. As Schrage points out in his excellent Harvard Business Review piece, the authors were frustrated at the lack of uptake: They didn't know how to make the information meaningful and accessible to managers and coaches. Some people were threatened, others merely annoyed: "Predictive analytics create organizational winners and losers, not just insights."

2. Naturals vs. Strivers: Why we lie about our efforts.
Since I live in Oakland, I'd be remiss without a Steph Curry story this week. But there's lots more to it: LeBron James is a natural basketball player, and Steph is a striver; both are enormously popular, of course. But Ben Cohen explains that people tend to prefer naturals, whether we recognize it or not: We favor those who just show up and do things really well. So strivers lie about their efforts.

Overachievers launch into bad behavior, such as claiming to sleep only four hours a night. Competitive pianists practice in secret. Social psychology research has found that we like people described as naturals, even when we're being fooled.

3. How do government agencies apply evidence?
Results for America has evaluated how U.S. agencies apply evidence to decisions and developed an index synthesizing their findings. It's not easily done. @Results4America studied factors such as "Did the agency use evidence of effectiveness when allocating funds from its five largest competitive grant programs in FY16?" The Departments of Housing and Urban Development and Labor scored fairly high. See the details behind the index [pdf here].
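At its core, an index like this is a weighted checklist. A minimal sketch (the criteria and weights below are invented for illustration, not Results for America's actual rubric):

```python
# Hypothetical criteria: did the agency do X? Weights are illustrative.
CRITERIA_WEIGHTS = {
    "evidence_used_in_grant_allocation": 3,
    "has_chief_evaluation_officer": 2,
    "publishes_program_evaluations": 1,
}

def agency_score(answers: dict[str, bool]) -> float:
    """Weighted share of criteria met, scaled to 0-100."""
    earned = sum(w for c, w in CRITERIA_WEIGHTS.items() if answers.get(c))
    return 100 * earned / sum(CRITERIA_WEIGHTS.values())

print(agency_score({"evidence_used_in_grant_allocation": True,
                    "has_chief_evaluation_officer": True,
                    "publishes_program_evaluations": False}))  # 83.3
```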

Photo credit: Putin classical pianist on Flickr.

 

Wednesday, 08 June 2016

Grit isn't the answer, plus Scrabble and golf analytics.


1. Poor kids already have grit: Educational Controversy, 2016 edition.
All too often, we run with a sophisticated, research-based idea, oversimplify it, and run it into the ground. 2016 seems to be the year for grit. Jean Rhodes, who heads up the Chronicle of Evidence-Based Mentoring (@UMBmentoring), explains that grit is not a panacea for the problems facing disadvantaged youth. "Grit: The Power of Passion and Perseverance, Professor Angela Duckworth's new bestseller on the topic, has fueled enthusiasm for such efforts as well as debate among those of us who worry that it locates both the problem (a lack of grit) and the solution (training) in the child. Further, by focusing on exemplars of tenacity and success, the book romanticizes difficult circumstances. The forces of inequality that, for the most part, undermine children's success are cast as contexts for developing grit. Moreover, when applied to low-income students, such self-regulation may privilege conformity over creative expression and leadership. Thus, it was a pleasure to come across a piece by Stanford doctoral student Ethan Ris on the history and application of the concept." Ris first published his critique in the Journal of Educational Controversy and recently wrote a piece for the Washington Post, The problem with teaching 'grit' to poor kids? They already have it.

2. Does Scrabble have its own Billy Beane?
It had to happen: Analytics for Scrabble. But it might not be what you expected. WSJ explains why For World’s Newest Scrabble Stars, SHORT Tops SHORTER.

Wellington Jighere and other players from Nigeria are shaking up the game, using analytics to support a winning strategy favoring five-letter words. Most champions follow a "long word" strategy, making as many seven- and eight-letter plays as possible. But analytics have brought that sacred Scrabble shibboleth into question, "exposing the hidden risks of big words."

Jighere has been called the Rachmaninoff of rack management, often saving good letters for a future play rather than scoring an available bingo. (For a pre-Jighere take on the world of Scrabble, see Word Wars.)

3. Golf may have a Billy Beane, too.
This also had to happen. Mark Broadie (@MarkBroadie) is disrupting golf analytics with his 'strokes gained' system. In his 2014 book, Every Shot Counts, Broadie rips apart assumptions long regarded as sacrosanct - maxims like 'drive for show, putt for dough'. "The long game explains about two-thirds of scoring differences and the short game and putting about one-third. This is true for amateurs as well as pros." To capture and analyze data, Broadie developed a GolfMetrics program. He is the Carson Family Professor of Business at Columbia Business School, and has a PhD in operations research from Stanford. He has presented at the Sloan Sports Analytics Conference.
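The heart of strokes gained is a simple subtraction against a baseline: each shot's value is the drop in expected strokes to hole out, minus the one stroke spent. Here's a minimal sketch (the baseline numbers are illustrative stand-ins, not Broadie's actual tour averages):

```python
# Expected strokes to hole out from a given (lie, distance in yards);
# values are illustrative stand-ins for tour-average baselines.
BASELINE = {
    ("tee", 446): 4.10,
    ("fairway", 116): 2.85,
    ("green", 18): 1.95,
    ("holed", 0): 0.0,
}

def strokes_gained(start, end):
    """Strokes gained for one shot: baseline(start) - baseline(end) - 1."""
    return BASELINE[start] - BASELINE[end] - 1

shots = [(("tee", 446), ("fairway", 116)),
         (("fairway", 116), ("green", 18)),
         (("green", 18), ("holed", 0))]
for start, end in shots:
    print(start, "->", end, f"SG {strokes_gained(start, end):+.2f}")
```

The three shots sum to +1.10: the player holed out in 3 from a position where the baseline expectation was 4.10. Summing by shot category is what lets Broadie show the long game driving about two-thirds of scoring differences.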

Pros have begun benefiting from golf analytics, including Danny Willett, winner of this year's Masters. He has thanked @15thClub, a new analytics firm, for helping him prep better course strategy. 15th Club provided insight for Augusta’s par-5 holes. As WSJ explained, the numbers show that when players lay up, leaving their ball short of the green to avoid a water hazard, they fare better when doing so as close to the green as possible, rather than the more distant spots where players typically take their third shots.

4. Evidence-based government on the rise.
In the US, "The still small, but growing, field of pay for success made significant strides this week, with Congress readying pay for success legislation and the Obama administration announcing a second round of grants through the Social Innovation Fund (@SIFund)."

5. Man + Machine = Success.
Only Humans Need Apply is a new book by Tom Davenport (@tdav) and @JuliaKirby. Cognitive computing combined with human decision making is what will succeed in the future. @DeloitteBA led a recent Twitter chat: Man-machine: The dichotomy blurs, which included @RajeevRonanki, the lead for their cognitive consulting practice.