Evidence Soup
How to find, use, and explain evidence.


Tuesday, 26 July 2016

Evidence relativism, innovation as a process, and a decision analysis pioneer.


1. Panning for gold in the evidence stream.
Patrick Lester introduces his new SSIR article by saying "With evidence-based policy, we need to acknowledge that some evidence is more valid than others. Pretending all evidence is equal will only preserve the status quo." In Defining Evidence Down, the director of the Social Innovation Research Center responds to analysts skeptical of evidence hierarchies developed to steer funding toward programs that fit the "what works" concept.

Are levels valid? Hierarchies rank evidence according to its rigor and certainty. These rankings are well established in healthcare, and are becoming the standard for evidence evaluation within the Department of Education and other US government agencies. Critics of this prevailing thinking (Gopal & Schorr, Friends of Evidence) want to ensure decision-makers embrace an inclusive definition of evidence that values qualitative research, case studies, insights from experience, and professional judgment. Lester contends that "Unfortunately, to reject evidence hierarchies is to promote a form of evidence relativism, where everyone is entitled to his or her own views about what constitutes good evidence in his or her own local or individualized context."

Ideology vs. evidence. "By resisting the notion that some evidence is more valid than others, they are defining evidence down. Such relativism would risk a return to the past, where change has too often been driven by fads, ideology, and politics, and where entrenched interests have often preserved the status quo." Other highlights: "...supporting a broad definition of evidence is not the same thing as saying that all evidence is equally valid." And "...randomized evaluations are not the only rigorous way to examine systems-level change. Researchers can often use quasi-experimental evaluations to examine policy changes...."

2. Can innovation be systematic?
Everyone wants innovation nowadays, but how do you make it happen? @HighVizAbility reviews a method called Systematic Inventive Thinking (SIT), an approach to creativity, innovation, and problem solving. The idea is to treat innovation as a repeatable process rather than waiting for random flashes of inspiration. Advocates say SIT doesn't replace unbridled creativity, but instead complements it.

3. Remembering decision analysis pioneer Howard Raiffa.
Howard Raiffa, co-founder of the Harvard Kennedy School of Government and decision analysis pioneer, passed away recently. He was also a Bayesian decision theorist and well-known author on negotiation strategies. Raiffa considered negotiation analysis an opportunity for both sides to get value, describing it as The Science and Art of Collaborative Decision Making.

4. Journal impact factor redux?
In the wake of news that Thomson Reuters sold its formula, Stat says changes may finally be coming to the "hated" journal impact factor. Ivan Oransky (@ivanoransky) and Adam Marcus (@armarcus) explain that some evidence suggests science articles don't receive the high number of citations supposedly predicted by the IF. The American Society for Microbiology has announced that it will abandon the metric completely. Meanwhile, top editors from Nature — which in the past has taken pride in its IF — have coauthored a paper widely seen as critical of the factor.

Photo credit: Poke of Gold by Mike Beauregard

Tuesday, 19 July 2016

Are you causing a ripple? How to assess research impact.


People are recognizing the critical need for meta-research, or the 'science of science'. One focus area is understanding whether research produces desired outcomes, and identifying how to ensure that truly happens going forward. Research impact assessment (RIA) is particularly important when holding organizations accountable for their management of public and donor funding. An RIA community of practice is emerging.

Are you causing a ripple? For those wanting to lead an RIA effort, the International School on Research Impact Assessment was developed to "empower participants on how to assess, measure and optimise research impact with a focus on biomedical and health sciences." ISRIA is a partnership between Alberta Innovates Health Solutions, the Agency for Health Quality and Assessment of Catalonia, and RAND Europe. They're presenting their fourth annual program Sept 19-23 in Melbourne, Australia, hosted by the Commonwealth Scientific and Industrial Research Organisation, Australia's national research agency.

ISRIA participants are typically in program management, evaluation, knowledge translation, or policy roles. They learn a range of frameworks, tools, and approaches for assessing research impact, and how to develop evidence about 'what works'.

Make an impact with your impact assessment. Management strategies are also part of the curriculum: Embedding RIA systemically into organizational practice, reaching agreement on effective methods and reporting, understanding the audience for RIAs, and knowing how to effectively communicate results to various stakeholders.

The 2016 program will cover qualitative, quantitative, and mixed-methods analytical approaches. It will include sessions on evaluating economic, environmental, and social impacts. The aim is to expose participants to as many options as possible, including newer methods such as altmetrics. (Plus, there's a black-tie event on the first evening.)

 

Photo credit: Raindrops in a Bucket by Smabs Sputzer.

 

Tuesday, 05 July 2016

Brain training isn't smart, physician peer pressure, and #AskforEvidence.


1. Spending $ on brain training isn't so smart.
It seems impossible to listen to NPR without hearing from their sponsor, Lumosity, the brain-training company. The target demo is spot on: NPR will be the first to tell you its listeners are the "nation's best and brightest". And bright people don't want to slow down. Alas, spending hard-earned money on brain training isn't looking like a smart investment. New evidence seems to confirm suspicions that this $1 billion industry is built on hope, sampling bias, and the placebo effect. Ars Technica reports that researchers have concluded that earlier, mildly positive "findings suggest that recruitment methods used in past studies created self-selected groups of participants who believed the training would improve cognition and thus were susceptible to the placebo effect." The study, Placebo Effects in Cognitive Training, was published in the Proceedings of the National Academy of Sciences.

It's not a new theme: In 2014, 70 cognitive scientists signed a statement saying "The strong consensus of this group is that the scientific literature does not support claims that the use of software-based 'brain games' alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease."



2. Ioannidis speaks out on usefulness of research.
After famously claiming that most published research findings are false, John Ioannidis now tells us Why Most Clinical Research Is Not Useful (PLOS Medicine). So, what are the key features of 'useful' research? The problem needs to be important enough to fix. Prior evidence must be evaluated to place the problem into context. Plus, we should expect pragmatism, patient-centeredness, value for money, and transparency.



3. To nudge physicians, compare them to peers.
Doctors are overwhelmed with alerts and guidance. So how do you intervene when a physician prescribes antibiotics for a virus, despite boatloads of evidence showing they're ineffective? Comparing a doc's records to peers is one promising strategy. Laura Landro recaps research by Jeffrey Linder (Brigham and Women's, Harvard): "Peer comparison helped reduce prescriptions that weren’t warranted from 20% to 4% as doctors got monthly individual feedback about their own prescribing habits for 18 months.

"Doctors with the lower rates were told they were top performers, while the rest were pointedly told they weren’t, in an email that included the number and proportion of antibiotic prescriptions they wrote compared with the top performers." Linder says “You can imagine a bunch of doctors at Harvard being told ‘You aren’t a top performer.’ We expected and got a lot of pushback, but it was the most effective intervention.” Perhaps this same approach would work outside the medical field.

4. Sports analytics taxonomy.
INFORMS is a professional society focused on Operations Research and Management Science. The June issue of their ORMS Today magazine presents v1.0 of a sports analytics taxonomy (page 40). This work, by Gary Cokins et al., demonstrates how classification techniques can be applied to better understand sports analytics. Naturally this includes analytics for players and managers in the major leagues. But it also includes individual sports, amateur sports, franchise management, and venue management.

5. Who writes the Internet, anyway? #AskforEvidence
Ask for Evidence is a public campaign that helps people request for themselves the evidence behind news stories, marketing claims, and policies. Sponsored by @senseaboutsci, the campaign has new animations on YouTube, Twitter, and Facebook. Definitely worth a like or a retweet.

Calendar:
September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 28 June 2016

Open innovation, the value of pharmaceuticals, and liberal-vs-conservative stalemates.


1. Open Innovation can up your game.
Open Innovation → Better Evidence. Scientists with an agricultural company tell a fascinating story about open innovation success. Improving Analytics Capabilities Through Crowdsourcing (Sloan Review) describes a years-long effort to tap into expertise outside the organization. Over eight years, Syngenta used open-innovation platforms to develop a dozen data-analytics tools, which ultimately revolutionized the way it breeds soybean plants. "By replacing guesswork with science, we are able to grow more with less."

Many open innovation platforms run contests between individuals (think Kaggle), and some facilitate teams. One of these platforms, InnoCentive, hosts mathematicians, physicists, and computer scientists eager to put their problem-solving skills to the test. There was a learning curve, to be sure (for example, dividing big problems into smaller pieces), and articulating the research question was challenging, to say the least.

Several of the associated projects could be tackled by people without subject matter expertise; other steps required knowledge of the biological science, complicating the task of finding team members. But eventually Syngenta "harnessed outside talent to come up with a tool that manages the genetic component of the breeding process — figuring out which soybean varieties to cross with one another and which breeding technique will most likely lead to success." The company reports substantial results from this collaboration: The average rate of improvement of its portfolio grew from 0.8 to 2.5 bushels per acre per year.

 


 

2. How do you tie drug prices to value?
Systematic Analysis → Better Value for Patients. It's the age-old question: How do you put a dollar value on intangibles - particularly human health and wellbeing? As sophisticated pharmaceuticals succeed in curing more diseases, their prices are climbing. Healthcare groups have developed 'value frameworks' to guide decision-making about these molecules. It's still a touchy subject to weigh the cost of a prescription against potential benefits to a human life.

These frameworks address classic problems, and are useful examples for anyone formalizing the steps of complex decision-making - inside or outside of healthcare. For example, one cancer treatment may be likely to extend a patient's life by 30 to 45 days compared to another, but at much higher cost, or with unacceptable side effects. Value frameworks help people consider these factors.
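
To make the arithmetic concrete, here is a minimal sketch of one calculation that sits behind many of these frameworks: an incremental cost-effectiveness ratio (cost per quality-adjusted life year gained). Every number below is hypothetical, and real value frameworks weigh many more factors than this.

```python
# Hypothetical numbers comparing a new cancer treatment to the standard of care.
# An incremental cost-effectiveness ratio (ICER) is one common way to express
# "extra benefit per extra dollar."

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per quality-adjusted life year (QALY) gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Suppose Treatment B extends life by ~40 days over Treatment A, weighted here
# by a quality-of-life factor of 0.7 during that period.
qaly_gain = (40 / 365) * 0.7   # roughly 0.077 QALYs

ratio = icer(cost_new=150_000, cost_old=60_000,
             qaly_new=1.0 + qaly_gain, qaly_old=1.0)

print(f"Incremental cost per QALY gained: ${ratio:,.0f}")
# Payers often compare this figure against a willingness-to-pay threshold,
# e.g. $50,000-$150,000 per QALY.
```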

@ContextMatters studies processes for drug evaluation and regulatory approval. In Creating a Global Context for Value, they compare the different methods of determining whether patients are getting high value. Their Value Framework Comparison Table highlights key evaluation elements from three value frameworks (ASCO, NCCN, ICER) and three health technology assessments (CADTH, G-BA, NICE).

 


3. Evidence overcomes the liberal-vs-conservative stalemate.
Evidence-based Programs → Lower Poverty. Veterans of the Bloomberg mayoral administration describe a data-driven strategy to reduce poverty in New York. Results for America Senior Fellows Robert Doar and Linda Gibbs share an insider's perspective in "New York City's Turnaround on Poverty: Why poverty in New York – unlike in other major cities – is dropping."

Experimentation was combined with careful attention to which programs succeeded (Paycheck Plus) and which didn't (Family Rewards). A key factor, common to any successful decision analysis effort: When a program didn't produce the intended results, advocates weren't cast aside as failures. Instead, that evidence was blended with the rest to continuously improve. The authors found that "Solid evidence can trump the liberal-versus-conservative stalemate when the welfare of the country’s most vulnerable people is at stake."

Tuesday, 12 April 2016

Better evidence for patients, and geeking out on baseball.


1. SPOTLIGHT: Redefining how patients get health evidence.

How can people truly understand evidence and the tradeoffs associated with health treatments? How can the medical community lead them through decision-making that's shared - but also evidence-based?

Hoping for cures, patients and their families anxiously Google medical research. Meanwhile, the quantified-self crowd is gathering data at breakneck speed. Neither will solve the problem on its own. However, this month's entire Health Affairs issue (April 2016) focuses on consumer uses of evidence and highlights promising ideas.

  • Translating medical evidence. Lots of synthesis and many guidelines are targeted at healthcare professionals, not civilians. Knowledge translation has become an essential piece, although it doesn't always involve patients at early stages. The Boot Camp Translation process is changing that. The method enables leaders to engage patients and develop healthcare language that is accessible and understandable. Topics include colon cancer, asthma, and blood pressure management.
  • Truly patient-centered medicine. Patient engagement is a buzzword, but capturing patient-reported outcomes in the clinical environment is a real thing that might make a big difference. Danielle Lavallee led an investigation into how patients and providers can find more common ground for communicating.
  • Meaningful insight from wearables. These are early days, so it's probably not fair to take shots at the gizmos out there. It will be a beautiful thing when sensors and other devices can deliver more than alerts and reports - and make valuable recommendations in a consumable way. And of course these wearables can play a role in routine collection of patient-reported outcomes.



2. Roll your own analytics for fantasy baseball.
For some of us, it's that special time of year when we come to the realization that our favorite baseball team is likely going home early again this season. There's always fantasy baseball, and it's getting easier to geek out with analytics to improve your results.
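
For the do-it-yourself crowd, the core of it is just weighted counting stats. Here is a minimal sketch; the scoring weights and the stat line are invented, so substitute your league's rules and a real data feed.

```python
# Roll-your-own fantasy scoring: weighted counting stats. The weights and the
# stat line below are hypothetical - substitute your league's rules and data.

SCORING = {"single": 1, "double": 2, "triple": 3, "home_run": 4,
           "run": 1, "rbi": 1, "walk": 1, "stolen_base": 2, "strikeout": -0.5}

def fantasy_points(stat_line):
    """Sum weighted counting stats for one player-week."""
    return sum(SCORING.get(stat, 0) * count for stat, count in stat_line.items())

week = {"single": 5, "double": 2, "home_run": 1, "run": 4,
        "rbi": 6, "walk": 3, "strikeout": 7}

print(fantasy_points(week))   # 5 + 4 + 4 + 4 + 6 + 3 - 3.5 = 22.5
```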

3. AI engine emerges after 30 years.
No one ever said machine learning was easy. Cyc is an AI engine that reflects 30 years of building a knowledge base. Now its creator, Doug Lenat, says it's ready for prime time. Lucid is commercializing the technology. Personal assistants and healthcare applications are in the works.

Photo credit: fitbit one by Tatsuo Yamashita on Flickr.

Tuesday, 05 April 2016

$15 minimum wage, evidence-based HR, and manmade earthquakes.


Photo by Fightfor15.org

1. SPOTLIGHT: Will $15 wages destroy California jobs?
California is moving toward a $15/hour minimum wage (slowly, stepping up through 2023). Will employers be forced to eliminate jobs under the added financial pressure? As with all things economic, it depends who you ask. Lots of numbers have been thrown around during the recent push for higher pay. Fightfor15.org says 6.5 million workers are getting raises in California, and that 2/3 of New Yorkers support a similar increase. But small businesses, restaurants in particular, are concerned they'll have to trim menus and staff - they can charge only so much for a sandwich.

Moody's Analytics economist Adam Ozimek says it's not just about food service or home healthcare. Writing on The Dismal Scientist blog, he notes: "[I]n past work I showed that California has 600,000 manufacturing workers who currently make $15 an hour or less. The massive job losses in manufacturing over the last few decades has shown that it is an intensely globally competitive industry where uncompetitive wages are not sustainable."

It's not all so grim. Ozimek shows that early reports of steep job losses after Seattle's minimum-wage hike have been revised strongly upward. However, finding "the right comparison group is getting complicated."



2. Manmade events sharply increase earthquake risk.
Holy smokes. New USGS maps show north-central Oklahoma at high earthquake risk. The United States Geological Survey now includes potential ground-shaking hazards from both 'human-induced' and natural earthquakes, substantially changing their risk assessment for several areas. Oklahoma recorded 907 earthquakes last year at magnitude 3 or higher. Disposal of industrial wastewater has emerged as a substantial factor.

3. Evidence-based HR redefines leadership roles.
Applying evidence-based principles to talent management can boost strategic impact, but it requires a different approach to leadership. The book Transformative HR: How Great Companies Use Evidence-Based Change for Sustainable Advantage (Jossey-Bass) describes practical uses of evidence to improve people management. John Boudreau and Ravin Jesuthasan suggest principles for evidence-based change, including logic-driven analytics: for instance, establishing metrics appropriate to each sphere of your business rather than blanket adoption of measures like employee engagement and turnover.

4. Why we're not better at investing.
Gary Belsky does a great job of explaining why we think we're better investors than we are. By now our decision biases have been well documented by behavioral economists. Plus, we really hate to lose - yet we're overconfident, somehow thinking we can compete with Warren Buffett.

Wednesday, 24 February 2016

How to show your evidence is reliable, repeatable.


When presenting findings, it’s essential to show their reliability and relevance. This post explains how to demonstrate that evidence is reproducible; next week in Part 2, we’ll cover how to show it’s relevant.

Show your evidence is reproducible. With complexity on the rise, there’s no shortage of quality problems with traditional research: People are finding it impossible to replicate everything from peer-reviewed, published findings to Amy Cuddy's power pose study. A recent examination of psychology evidence was particularly painful.

In a corporate setting, the problem is no less difficult. How do you know a data scientist’s results can be replicated?* How can you be sure an analyst’s Excel model is flawless? Much confusion could be avoided if people produced documentation to add transparency.

Demystify, demystify, demystify. To establish credibility, the audience needs to believe your numbers and your methods are reliable and reproducible. Numerous efforts are bringing transparency to academic research (@figshare, #openscience). Technologies such as self-serve business intelligence and data visualization have added traceability to corporate analyses. Data scientists are coming to grips with the need for replication, evidenced by the Johns Hopkins/Coursera class on reproducible research. At presentation time, include highlights of data collection and analysis so the audience clearly understands the source of your evidence.

Make a list: What would you need to know? Imagine a colleague will be auditing or replicating your work - whether it’s a straightforward business analysis, data science, or scientific research. Put together a list of the things they would need to do, and the data they would access, to arrive at your result. Work with your team to set expectations for how projects are completed and documented. No doubt this can be a burdensome task, but the more good habits people develop (e.g., no one-off spreadsheet tweaking), the less pain they’ll experience when defending their insights.
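
One lightweight way to start: have every analysis write out a small provenance record alongside its results. The sketch below is illustrative only - the file names and fields are hypothetical - but it captures the kinds of details (environment, data version, random seed) a colleague would need to reproduce your work.

```python
# A minimal provenance record for an analysis run. The file names and fields
# are hypothetical - adapt them to your own workflow and tooling.
import hashlib
import json
import platform
import random
import sys
from datetime import datetime, timezone

SEED = 42
random.seed(SEED)  # fix randomness so a rerun produces the same result

def sha256(path):
    """Fingerprint an input file so reviewers know exactly which data was used."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

provenance = {
    "run_at": datetime.now(timezone.utc).isoformat(),
    "python": sys.version.split()[0],
    "platform": platform.platform(),
    "random_seed": SEED,
    "input_data": {"file": "sales_2016q1.csv", "sha256": sha256("sales_2016q1.csv")},
    "analysis_script": "margin_model.py",
}

with open("provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)
```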

*What is a “reproducible” finding, anyway? Does this mean literally replicated, as in producing essentially the exact same result? Or does it mean a concept or research theory is supported? Is a finding replicated if effect size is different, but direction is the same? Sanjay Srivastava has an excellent explanation of the differences as they apply to psychology in What counts as a successful or failed replication?

Photo credit: Barney Moss (Creative Commons)

Tuesday, 02 February 2016

The new ISPOR #pharma health decision guidance: How it's like HouseHunters.


'Multiple criteria decision analysis' is a crummy name for a great concept (aren't all big decisions analyzed using multiple criteria?). MCDA means assessing alternatives while simultaneously considering several objectives. It's a useful way to look at difficult choices in healthcare, oil production, or real estate. But oftentimes, results of these analyses aren't communicated clearly, limiting their usefulness (more about that below).

The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) has developed new MCDA guidance, available in the latest issue of Value in Health (paywall). To be sure, healthcare decision makers have always weighed medical, social, and economic factors: MCDA helps stakeholders bring concrete choices and transparency to the process of evaluating outcomes research - where, as we know, controversy is always a possibility.

Anyone can use MCDA. To put it mildly, it’s difficult to balance saving lives with saving money. Fundamentally, MCDA means listing options, defining decision criteria, weighting those criteria, and then scoring each option. Some experts build complex economic models, but anyone can apply this decision technique in effective, less rigorous ways.
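
Here's a back-of-the-envelope sketch of that arithmetic - weights, scores, and a weighted total. The criteria, weights, and scores are invented for illustration; formal MCDA work adds careful criteria definition, preference elicitation, and uncertainty analysis.

```python
# Back-of-the-envelope MCDA: weight the criteria, score each option on a
# 0-10 scale (higher is better), and compute a weighted total.
# All weights and scores below are invented for illustration.

WEIGHTS = {"clinical_benefit": 0.5, "side_effects": 0.3, "cost": 0.2}

OPTIONS = {
    "Treatment A": {"clinical_benefit": 8, "side_effects": 4, "cost": 3},
    "Treatment B": {"clinical_benefit": 6, "side_effects": 8, "cost": 7},
}

def weighted_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in OPTIONS.items():
    print(f"{name}: {weighted_score(scores, WEIGHTS):.1f}")
# Treatment A: 0.5*8 + 0.3*4 + 0.2*3 = 5.8
# Treatment B: 0.5*6 + 0.3*8 + 0.2*7 = 6.8
```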

You know those checklists at the end of every HouseHunters episode where buyers weigh location and size against budget? That's essentially it: People making important decisions, applying judgment, and weighing multiple goals (raise the kids in the city or the burbs?) - and even though they start out by ranking priorities, once buyers see their actual options, deciding on a house becomes substantially more complex.

MCDA gains traction in health economics. As shown in the diagram (source: ISPOR), the analysis hinges on assigning relative weights to individual decision criteria. While this brings rationality and transparency to complex decisions, it also invites passionate discussions. Some might expect these techniques to remove human judgment from the process, but MCDA leaves it front and center.


Looking for new ways to communicate health economics research and other medical evidence? Join me and other speakers at the 2nd annual HEOR Writing workshop in March.                                      


Pros and cons. Let’s not kid ourselves: You have to optimize on something. MCDA is both beautiful and terrifying because it forces us to identify tradeoffs: Quality, quick improvement, long-term health benefits? Uncertain outcomes only complicate things further.

MCDA is a great way to bring interdisciplinary groups into a conversation. It's essential to communicate the analysis effectively, so stakeholders understand the data and why they matter - without burying them in so much detail that the audience is lost.

One of the downsides is that, upon seeing elaborate projections and models, people can become over-confident in the numbers. Uncertainty is never fully recognized or quantified. (Recall the Rumsfeldian unknown unknown.) Sensitivity analysis is essential, to illustrate which predicted outcomes are strongly influenced by small adjustments.
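
Continuing the invented numbers from the sketch above, even a crude one-at-a-time check - nudge each weight, renormalize, and see whether the preferred option changes - goes a long way toward showing an audience how fragile (or robust) a recommendation really is.

```python
# One-at-a-time sensitivity check on the invented MCDA example above: nudge
# each weight, renormalize, and see whether the preferred option changes.

weights = {"clinical_benefit": 0.5, "side_effects": 0.3, "cost": 0.2}
options = {
    "Treatment A": {"clinical_benefit": 8, "side_effects": 4, "cost": 3},
    "Treatment B": {"clinical_benefit": 6, "side_effects": 8, "cost": 7},
}

def score(opt, w):
    return sum(w[c] * opt[c] for c in w)

for criterion in weights:
    for delta in (-0.1, 0.1):
        w = dict(weights)
        w[criterion] = max(0.0, w[criterion] + delta)
        total = sum(w.values())
        w = {c: v / total for c, v in w.items()}  # renormalize to sum to 1
        winner = max(options, key=lambda o: score(options[o], w))
        print(f"{criterion} {delta:+.1f} -> preferred option: {winner}")
```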

Resources to learn more. If you want to try MCDA, I strongly recommend picking up one of the classic texts, such as Smart Choices: A Practical Guide to Making Better Decisions. Additionally, ISPOR's members offer useful insights into the pluses and minuses of this methodology - see, for example, Does the Future Belong to MCDA? The level of discourse over this guidance illustrates how challenging healthcare decisions have become.  

I'm presenting at the HEOR Writing workshop on communicating value messages clearly with data. March 17-18 in Philadelphia.

Tuesday, 12 January 2016

Game theory for Jeopardy!, evidence for gun control, and causality.

This week's 5 links on evidence-based decision making.

1. Deep knowledge → Wagering strategy → Jeopardy! win
Some Jeopardy! contestants struggle with the strategic elements of the show. Rescuing us is Keith Williams (@TheFinalWager), with the definitive primer on Jeopardy! strategy, applying game theory to every episode and introducing "the fascinating world of determining the optimal approach to almost anything".
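
The canonical example is the Final Jeopardy! cover bet, where the leader wagers just enough to stay ahead if the runner-up doubles up. The sketch below uses hypothetical scores and only the simplest two-player heuristics; Williams's analyses go much deeper.

```python
# The classic Final Jeopardy! cover bet, with hypothetical scores. This covers
# only the simplest two-player case; real wagering strategy has many branches.

def leader_wager(leader, second):
    """Smallest bet that guarantees the win if the leader responds correctly:
    enough to finish one dollar ahead of the runner-up doubling up."""
    return max(0, 2 * second - leader + 1)

def runner_up_cap(leader, second):
    """Largest bet the runner-up can make and still finish ahead of a leader
    who makes the cover bet and misses (assuming the runner-up misses too).
    A common heuristic, not a universal rule."""
    leader_after_miss = leader - leader_wager(leader, second)
    return max(0, second - leader_after_miss - 1)

leader, second = 20_000, 14_000
print("Leader's cover bet:", leader_wager(leader, second))   # 8001
print("Runner-up's cap:", runner_up_cap(leader, second))     # 2000
```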

2. Gun controls → Less violence? → Less tragedy?
Does the evidence support new US gun control proposals? In the Pacific Standard, Francie Diep cites several supporting scientific studies.

3. New data sources → Transparent methods → Health evidence
Is 'real-world' health evidence closer to the truth than data from more traditional categories? FDA staff explain in What We Mean When We Talk About Data. Thanks to @MandiBPro.

4. Data model → Cause → Effect
In Why: A Guide to Finding and Using Causes, Samantha Kleinberg aims to explain why causality is often misunderstood and misused: What is it, why is it so hard to find, and how can we do better at interpreting it? The book excerpt explains that "Understanding when our inferences are likely to be wrong is particularly important for data science, where we’re often confronted with observational data that is large and messy (rather than well-curated for research)."

5. Empirical results → Verification → Scientific understanding
Independent verification is essential to scientific progress. But in academia, verifying empirical results is difficult and not rewarded. This is the reason for Curate Science, a tool making it easier for researchers to independently verify each other’s evidence and award credit for doing so. Follow @CurateScience.

Join me at the HEOR writing workshop March 17 in Philadelphia. I'm speaking about communicating data, and leading an interactive session on data visualization. Save $300 before Jan 15.

Tuesday, 22 December 2015

Asthma heartbreak, cranky economists, and prediction markets.

This week's 5 links on evidence-based decision making.

1. Childhood stress → Cortisol → Asthma
Heartbreaking stories explain likely connections between difficult childhoods and asthma. Children in Detroit suffer a high incidence of attacks - regardless of allergens, air quality, and other factors. Peer-reviewed research shows excess cortisol may be to blame.

2. Prediction → Research heads up → Better evidence
Promising technique for meta-research. A prediction market was created to quantify the reproducibility of 44 studies published in prominent psychology journals, and to estimate the likelihood of hypothesis acceptance at different stages. The market outperformed individual forecasts, as described in PNAS (Proceedings of the National Academy of Sciences).

3. Fuzzy evidence → Wage debate → Policy fail
More fuel for the minimum-wage fire. Depending on who you ask, a high minimum wage either bolsters the security of hourly workers or destroys the jobs they depend on. Recent example: David Neumark's claims about unfavorable evidence.

4. Decision tools → Flexible analysis → Value-based medicine
Drug Abacus is an interactive tool for understanding drug pricing. This very interesting project, led by Peter Bach at Memorial Sloan Kettering, compares the price of a drug (US$) with its "worth", based on outcomes, toxicity, and other factors. Hopefully @drugabacus signals the future for health technology assessment and value-based medicine.

5. Cognitive therapy → Depression relief → Fewer side effects
A BMJ systematic review and meta-analysis show that depression can be treated with cognitive behavior therapy, possibly with outcomes equivalent to antidepressants. Consistent CBT treatment is a challenge, however. AHRQ reports similar findings from comparative effectiveness research; the CER study illustrates how to employ expert panels to transparently select research questions and parameters.