Evidence Soup
How to find, use, and explain evidence.

Tuesday, 27 September 2016

Improving vs. proving, plus bad evidence reporting.

Slow down and learn something

If you view gathering evidence as simply a means of demonstrating outcomes, you’re missing a trick. It’s most valuable when part of a journey of iterative improvement. - Frances Flaxington

1. Immigrants to US don't disrupt employment.
There is little evidence that immigration significantly affects overall employment of native-born US workers. This according to an expert panel's 500-page report. We thought you might like this condensed version from PepperSlice.

Bad presentation alert: The report, The Economic and Fiscal Consequences of Immigration, offers no summary visuals and buries its conclusions deep within dense chapters. Perhaps the methodology is the problem: the report documents the "evidence-based consensus of an authoring committee of experts". People need concise synthesis and actionable findings: What can policy makers do with this information?

Bad reporting alert: Perhaps unsatisfied with these findings, Julia Preston of the New York Times slipped her own claim into the coverage, saying the report "did not focus on American technology workers [true], many of whom have been displaced from their jobs in recent years by immigrants on temporary visas [unfounded claim]". Rather sloppy reporting, particularly when covering an extensive economic study of immigration impacts.



Key evidence: "Empirical research in recent decades suggests that findings remain by and large consistent with those in The New Americans (National Research Council, 1997) in that, when measured over a period of 10 years or more, the impact of immigration on the wages of natives overall is very small." [page 204]

Immigration also contributes to the nation’s economic growth.... Perhaps even more important than the contribution to labor supply is the infusion by high-skilled immigration of human capital that has boosted the nation’s capacity for innovation and technological change. The contribution of immigrants to human and physical capital formation, entrepreneurship, and innovation are essential to long-run sustained economic growth. [page 243]

Author: @theNASEM, the National Academies of Sciences, Engineering, and Medicine.

Relationship: immigration → sustains → economic growth


2. Improving vs. proving.
On @A4UEvidence: "We often assume that generating evidence is a linear progression towards proving whether a service works. In reality the process is often two steps forward, one step back." Ugly Research supports the 'what works' concept, but wholeheartedly agrees that "The fact is that evidence rarely provides a clear-cut truth – that a service works or is cost-beneficial. Rather, evidence can support or challenge the beliefs that we, and others, have and it can point to ways in which a service might be improved."


3. Who should make sure policy is evidence-based and transparent?
Bad PR alert? Is it government's responsibility to make policy transparent and balanced? If so, some are accusing the FDA of not holding up their end on drug and medical device policy. A recent 'close-held embargo' of an FDA announcement made NPR squirm. Scientific American says the deal was this: "NPR, along with a select group of media outlets, would get a briefing about an upcoming announcement by the U.S. Food and Drug Administration a day before anyone else. But in exchange for the scoop, NPR would have to abandon its reportorial independence. The FDA would dictate whom NPR's reporter could and couldn't interview.

"'My editors are uncomfortable with the condition that we cannot seek reaction,' NPR reporter Rob Stein wrote back to the government officials offering the deal. Stein asked for a little bit of leeway to do some independent reporting but was turned down flat. Take the deal or leave it."


Evidence & Insights Calendar

October 3-6, National Harbor, Maryland: AMCP Nexus 2016. Special topic: "Behavioral Economics - What Does it All Mean?"

October 29-November 2, Vienna, Austria: ISPOR 19th Annual European Congress. Plenary: "What Synergies Could Be Created Between Regulatory and Health Technology Assessments?"

November 9-10, Philadelphia: Real-World Evidence & Market Access Summit 2016. "No more scandals! Access for Patients. Value for Pharma."


Photo credit: Turtle on Flickr.

Tuesday, 20 September 2016

Social program science, gut-bias decision test, and enough evidence already.


"The driving force behind MDRC is a conviction that reliable evidence, well communicated, can make an important difference in social policy." -Gordon L. Berlin, President, MDRC

1. Slice of the week: Can behavioral science improve the delivery of child support programs? Yes. Understanding how people respond to communications has improved outcomes. State programs shifted from heavy packets of detailed requirements to simple emails and postcard reminders. (Really, did this require behavioral science? Not to discount the excellent work by @CABS_MDRC, but it seems pretty obvious. Still, a promising outcome.)

Applying Behavioral Science to Child Support: Building a Body of Evidence comes to us from MDRC, a New York-based institute that builds knowledge around social policy.

Data: Collected using random assignment and analyzed with descriptive statistics.

Evidence: Support payments increased with reminders. Simple notices (email or postcards) sent to parents who had not previously received them increased the number of parents making at least one payment by 3%.

Relationship: behaviorally informed interventions → solve → child support problems


“A commitment to using best evidence to support decision making in any field is an ethical commitment.”
-Dónal O’Mathuna @DublinCityUni

2. How to test your decision-making instincts.
McKinsey's Andrew Campbell and Jo Whitehead have studied decision-making for execs. They suggest asking yourself these four questions to ensure you're drawing on appropriate experiences and emotions. "Leaders cannot prevent gut instinct from influencing their judgments. What they can do is identify situations where it is likely to be biased and then strengthen the decision process to reduce the resulting risk."

Familiarity test: Have we frequently experienced identical or similar situations?
Feedback test: Did we get reliable feedback in past situations?
Measured-emotions test: Are the emotions we have experienced in similar or related situations measured?
Independence test: Are we likely to be influenced by any inappropriate personal interests or attachments?

Relationship: Test of instincts → reduces → decision bias


3. When is enough evidence enough?
At what point should we agree on the evidence, stop evaluating, and move on? Determining this is particularly difficult where public health is concerned. Despite all the available findings, investigators continue to study the costs and benefits of statin drugs. A new Lancet review takes a comprehensive look and makes a strong case for this important drug class. "Large-scale evidence from randomised trials shows that statin therapy reduces the risk of major vascular events" and "claims that statins commonly cause adverse effects reflect a failure to recognise the limitations of other sources of evidence about the effects of treatment".

The insightful Richard Lehman (@RichardLehman1) provides a straightforward summary: The treatment is so successful that the "main adverse effect of statins is to induce arrogance in their proponents." And Larry Husten explains that Statin Trialists Seek To Bury Debate With Evidence.


Photo credit: paperwork by Camilo Rueda López on Flickr.

Tuesday, 13 September 2016

Battling antimicrobial resistance, visualizing data, and value in health.


1. PepperSlice Board of the Week: Dentists will slow down on antibiotics if you show them a chart of their prescribing numbers.

Antimicrobial resistance is a serious public health concern. PLOS Medicine has published findings from an RCT studying whether quantitative feedback about prescribing patterns reduces dentists' antibiotic prescriptions. The intervention group prescribed substantially fewer antibiotics per 100 cases.

The Evidence. Peer-reviewed: An Audit and Feedback Intervention for Reducing Antibiotic Prescribing in General Dental Practice.

Data: Collected using the RAPiD Cluster Randomised Controlled Trial and analyzed with ANCOVA.

Relationship: historical data → influence → dentist antibiotic prescribing rates

This study evaluated the impact of providing general-practice dentists with individualised feedback consisting of a line graph of their monthly antibiotic prescribing rate. Rates in the intervention group were substantially lower than in the control group.
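For readers wondering what "analyzed with ANCOVA" looks like in practice, here is a minimal sketch: model each practice's post-intervention prescribing rate as a function of its baseline rate plus a treatment-group indicator, so the group coefficient estimates the feedback effect after adjusting for baseline differences. The data below are simulated and the variable names are ours, not the RAPiD trial's.

```python
# Minimal ANCOVA sketch with simulated data -- not the RAPiD trial's actual numbers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200                                            # pretend 200 dental practices

baseline = rng.normal(10, 2, n)                    # baseline antibiotic items per 100 cases
group = rng.integers(0, 2, n)                      # 0 = control, 1 = received feedback graph
assumed_effect = -0.6                              # assumed reduction attributable to feedback
post = 2 + 0.8 * baseline + assumed_effect * group + rng.normal(0, 1, n)

df = pd.DataFrame({"post": post, "baseline": baseline, "group": group})

# ANCOVA: compare groups on the post-intervention rate, adjusting for the baseline covariate.
model = smf.ols("post ~ baseline + C(group)", data=df).fit()
print(model.params)                                # coefficient on C(group)[T.1] ~ the feedback effect
```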

From the authors: "The feedback provided in this study is a relatively straightforward, low-cost public health and patient safety intervention that could potentially help the entire healthcare profession address the increasing challenge of antimicrobial resistance." Authors: Paula Elouafkaoui et al.

#: evidentista, antibiotics, evidence-based practice



2. Visualizing data distributions.
Nathan Yau's fantastic blog, Flowing Data, offers a simple explanation of distributions - the spread of a dataset - and how to compare them. Highly recommended. "Single data points from a large dataset can make it more relatable, but those individual numbers don’t mean much without something to compare to. That’s where distributions come in."
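To make that concrete, here is a small matplotlib sketch with toy data of our own (not Flowing Data's): overlaying two histograms shows how spread and shift can differ even when single summary numbers look similar.

```python
# Toy comparison of two distributions -- simulated data, for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
group_a = rng.normal(50, 5, 1000)     # tight distribution around 50
group_b = rng.normal(55, 12, 1000)    # wider distribution, shifted right

plt.hist(group_a, bins=40, density=True, alpha=0.5, label="Group A")
plt.hist(group_b, bins=40, density=True, alpha=0.5, label="Group B")
plt.axvline(np.median(group_a), linestyle="--")
plt.axvline(np.median(group_b), linestyle="--")
plt.xlabel("Value")
plt.ylabel("Density")
plt.legend()
plt.title("Two datasets, two very different spreads")
plt.show()
```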


3. Calculating 'expected value' of health interventions.
Frank David provides a useful reminder of the realities of computing 'expected value'. Sooner or later, we must make simplifying assumptions, and compare costs and benefits on similar terms (usually $). On Forbes he walks us through a straightforward calculation of the value of an EpiPen. (Frank's firm, Pharmagellan, is coming out with a book on biotech financial modeling, and we look forward to that.)
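To illustrate the mechanics (not Frank David's actual figures), an expected-value calculation simply weights each outcome's net benefit by its probability and sums; every number below is a made-up placeholder.

```python
# Back-of-the-envelope expected value of a hypothetical intervention.
# All probabilities and dollar amounts are invented placeholders.

scenarios = [
    # (probability, monetized benefit if it occurs, upfront cost)
    (0.01, 1_000_000, 600),   # the device is needed and averts a severe outcome
    (0.99, 0,         600),   # the device is never needed
]

expected_value = sum(p * (benefit - cost) for p, benefit, cost in scenarios)
print(f"Expected net value per person: ${expected_value:,.0f}")
# With these toy numbers: 0.01*(1,000,000-600) + 0.99*(0-600) = $9,400
```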



4. What is Bayesian, really?
In the Three Faces of Bayes, @burrsettles beautifully describes three uses of the term Bayesian, and wonders "Why is it that Bayesian networks, for example, aren’t considered… y’know… Bayesian?" Recommended for readers wanting to know more about these algorithms for machine learning and decision analysis.
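Whatever flavor one means, the common thread is Bayes' rule: update a prior belief with the likelihood of the observed evidence. A tiny sketch with made-up numbers:

```python
# Minimal Bayes' rule update -- all numbers are illustrative.
# P(H | D) = P(D | H) * P(H) / P(D)

prior = 0.01                 # P(H): prior probability the hypothesis is true
p_data_given_h = 0.95        # P(D|H): chance of seeing this evidence if H is true
p_data_given_not_h = 0.10    # P(D|~H): chance of seeing it anyway if H is false

p_data = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
posterior = p_data_given_h * prior / p_data
print(f"Posterior P(H | D): {posterior:.3f}")   # ~0.088 with these inputs
```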


Fun Fact: Everyone can stop carrying around fake babies. Evidence tells us baby simulators don't deter teen pregnancy after all.


Evidence & Insights Calendar:

September 19-21; Boston. FierceBiotech Drug Development Forum. Evaluate challenges, trends, and innovation in drug discovery and R&D. Covering the entire drug development process, from basic research through clinical trials.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.

Tuesday, 30 August 2016

Social determinants of health, nonfinancial performance metrics, and satisficers.

Dear reader: Evidence Soup is starting a new chapter. Our spotlight topics are now accompanied by a 'newsletter' version of a PepperSlice, the capsule form of evidence-based analysis we've created at PepperSlice.com. Let me know what you think, and thanks for your continued readership. - Tracy Altman

1. Is social services spending associated with better health outcomes? Yes.
Evidence has revealed a significant association between healthcare outcomes and the ratio of social service to healthcare spending in various OECD countries. Now a new study, published in Health Affairs, finds this same pattern within the US. The health differences were substantial. For instance, a 20% change in the median social-to-health spending ratio was associated with 85,000 fewer adults with obesity and more than 950,000 fewer adults with mental illness. Elizabeth Bradley and Lauren Taylor explain on the RWJF Culture of Health blog.

This is great, but we wonder: Where/what is the cause-effect relationship?

The Evidence. Peer-reviewed: Variation In Health Outcomes: The Role Of Spending On Social Services, Public Health, And Health Care, 2000-09.

Data: Collected using longitudinal state-level spending data and analyzed with retrospective repeated-measures multivariable modeling.

Relationship: social-to-health spending ratio → associated → better health outcomes

From the authors: "Reorienting attention and resources from the health care sector to upstream social factors is critical, but it’s also an uphill battle. More research is needed to characterize how the health effects of social determinants like education and poverty act over longer time horizons. Stakeholders need to use information about data on health—not just health care—to make resource allocation decisions."

#: statistical_modeling social_determinants population_health social_services health_policy

2. Are nonfinancial metrics good leading indicators of financial performance? Maybe.
During the '90s and early 2000s we heard a lot about Kaplan and Norton's Balanced Scorecard. A key concept was the use of nonfinancial management metrics such as customer satisfaction, employee engagement, and openness to innovation. This was thought to encourage actions that increased a company’s long-term value, rather than maximizing short-term financials.

The idea has taken hold, and nonfinancial metrics are often used in designing performance management systems and executive compensation plans. But not everyone is a fan: Some argue this can actually be harmful; for instance, executives might prioritize customer satisfaction over other performance areas. Recent findings in the MIT Sloan Management Review look at whether these metrics truly are leading indicators of financial performance.

The Evidence. Business journal: Are Nonfinancial Metrics Good Leading Indicators of Future Financial Performance?

Data: Collected from American Customer Satisfaction Index, ExecuComp, and Compustat and analyzed with econometrics: panel data analysis.

Relationship: Nonfinancial metrics → predict → Financial performance

From the authors: "We found that there were notable variations in the lead indicator strength of customer satisfaction in a sample of companies drawn from different industries. For instance, for a chemical company in our sample, customer satisfaction’s lead indicator strength was negative; this finding is consistent with prior research suggesting that in many industries, the expense required to increase customer satisfaction can’t be justified. By contrast, for a telecommunications company we studied, customer satisfaction was a strong leading indicator; this finding is consistent with evidence showing that in many service industries, customer satisfaction reduces customer churn and price sensitivity. For a professional service firm in our sample, the lead indicator strength of customer satisfaction was weak; this is consistent with evidence showing that for such services, measures such as trust provide a clearer indication of the economic benefits than customer satisfaction.... Knowledge of whether a nonfinancial metric such as customer satisfaction is a strongly positive, weakly positive, or negative lead indicator of future financial performance can help companies avoid the pitfalls of using a nonfinancial metric to incentivize the wrong behavior."

#: customer_satisfaction nonfinancial balanced_scorecard CEO_compensation performance_management

3. Reliable evidence about p values.
Daniël Lakens (@lakens) puts it very well, saying "One of the most robust findings in psychology replicates again: Psychologists misinterpret p-values." This from Frontiers in Psychology.

4. Satisficers are happier.
Fast Company's article sounds at first just like clickbait, but they have a point. You can change how you see things, and reset your expectations. The Surprising Scientific Link Between Happiness And Decision Making.

Evidence & Insights Calendar:

September 19-21; Boston. FierceBiotech Drug Development Forum. Evaluate challenges, trends, and innovation in drug discovery and R&D. Covering the entire drug development process, from basic research through clinical trials.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.


Photo credit: Fat cat by brokinhrt2 on Flickr.

Tuesday, 23 August 2016

Science of CEO success?, drug valuation kerfuffle, and event attribution science.



1. Management research: Alchemy → Chemistry?
McKinsey's Michael Birshan and Thomas Meakin set out to "take a data-driven look" at the strategic moves of newly appointed CEOs, and how those moves influenced company returns. The accompanying podcast (with transcript), CEO transitions: The science of success, says "A lot of the existing literature is quite qualitative, anecdotal, and we’ve been able to build a database of 599 CEO transitions and add a bunch of other sources to it and really try and mine that database hard for what we hope are new insights. We are really trying to move the conversation from alchemy to chemistry, if you like."

The research was first reported in How new CEOs can boost their odds of success. McKinsey's evidence says new CEOs make similar moves, with similar frequency, whether they're taking over a struggling company or a profitable one (see chart). For companies not performing well, Birshan says the data support his advice to be bold, and make multiple moves at once. Depending on how you slice the numbers, both external and internal hires fared well in the CEO role (8).

[Chart from the McKinsey report: frequency of strategic moves made by new CEOs]

Is this science? CEO performance was associated with the metric excess total returns to shareholders, "which is the performance of one company over or beneath the average performance of its industry peers over the same time period". Bottom line: Can you attribute CEO activities directly to excess TRS? Organizational redesign was correlated with significant excess TRS (+1.9 percent) for well-performing companies. The authors say "We recognize that excess TRS CAGR does not prove a causal link; too many other variables, some beyond a CEO’s control, have an influence. But we do find the differences that emerged quite plausible." Hmm, correlation still does not equal causation.
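To unpack the metric: excess TRS is simply a company's annualized total return to shareholders minus the average for its industry peers over the same window. A hedged sketch with invented numbers (not McKinsey's data):

```python
# Illustrative excess-TRS calculation -- invented values, not the McKinsey database.

def trs_cagr(start_value: float, end_value: float, years: float) -> float:
    """Annualized total return to shareholders, assuming dividends are reflected in end_value."""
    return (end_value / start_value) ** (1 / years) - 1

company = trs_cagr(100, 150, 5)                          # hypothetical company, 5-year horizon
peers = [trs_cagr(100, v, 5) for v in (120, 130, 140)]   # hypothetical industry peers
peer_avg = sum(peers) / len(peers)

excess_trs = company - peer_avg
print(f"Company TRS CAGR:   {company:.1%}")
print(f"Peer-average CAGR:  {peer_avg:.1%}")
print(f"Excess TRS CAGR:    {excess_trs:+.1%}")
```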

Examine the evidence. The report's end notes answer some key questions: Can you observe or measure whether a CEO inspires the top team? Probably not (1). Where do you draw the line between a total re-org and a management change? They define 'management reshuffle' as 50+% turnover in the first two years (5). But we have other questions: How were these data collected and analyzed? Some form of content analysis would likely be required to assign values to variables. How were the 599 CEOs chosen as the sample? Selection bias is a concern. Were some items self-reported? Interviews or survey results? Were findings validated by assigning a second team to check for internal reliability? External reliability?


2. ICER + pharma → Fingerpointing.
There's a kerfuffle between pharma companies and the nonprofit ICER (@ICER_review). The Institute for Clinical and Economic Review publishes reports on drug valuation, and studies comparative efficacy. Biopharma Dive explains that "Drugmakers have argued ICER's reviews are driven by the interests of insurers, and fail to take the patient perspective into account." The National Pharmaceutical Council (@npcnow) takes issue with how ICER characterizes its funding sources.

ICER has been doing some damage control, responding to a list of 'myths' about its purpose and methods. Its rebuttal, Addressing the Myths About ICER and Value Assessment, examines criticisms such as "ICER only cares about the short-term cost to insurers, and uses an arbitrary budget cap to suggest low-ball prices." Also, ICER's economic models "use the Quality-Adjusted Life Year (QALY) which discriminates against those with serious conditions and the disabled, 'devaluing' their lives in a way that diminishes the importance of treatments to help them."



3. Immortal time bias → Overstated findings.
You can't get a heart transplant after you're dead. The must-read Hilda Bastian writes on Statistically Funny about immortal time bias (related to event-free time and competing-risk bias). The bias arises when an analysis credits a treatment group with follow-up time during which, by definition, the outcome could not have occurred - for example, the waiting time patients must survive before receiving a heart transplant. Numerous published studies, particularly those including Kaplan-Meier analyses, may suffer from this bias.
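A small simulation shows how the trap works: if patients must survive a waiting period before they can be counted as 'treated', a naive comparison hands the treated group a block of guaranteed survival time. In the sketch below (our own toy numbers), the treatment has no effect at all, yet it appears to help.

```python
# Toy illustration of immortal time bias -- invented parameters, no real data.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

survival_months = rng.exponential(scale=24, size=n)    # survival with NO treatment effect
wait_months = rng.uniform(0, 12, size=n)               # time until a donor organ is available

# A patient is labeled "treated" only if they survive long enough to receive the transplant.
treated = survival_months > wait_months

print(f"Mean survival, labeled treated:   {survival_months[treated].mean():5.1f} months")
print(f"Mean survival, labeled untreated: {survival_months[~treated].mean():5.1f} months")
# The large gap is pure immortal time bias: by construction, treatment changed nothing.
```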


4. Climate change → Weird weather?
This week the US is battling huge fires and disastrous floods: Climate change, right? Maybe. There's now a thing called event attribution science, where people apply probabilistic methods in an effort to determine whether an extreme weather event resulted from climate change. The idea is to establish, and eventually predict, adverse impacts.


Evidence & Insights Calendar:

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.


Photo credit: Fingerpointing by Tom Hilton.

Tuesday, 09 August 2016

Health innovation, foster teens, NBA, Gwyneth Paltrow.


1. Behavioral economics → Healthcare innovation.
Jaan Sidorov (@DisMgtCareBlog) writes on the @Health_Affairs blog about roadblocks to healthcare innovation. Behavioral economics can help us truly understand resistance to change, including unconscious bias, so valuable improvements will gain more traction. Sidorov offers concise explanations of hyperbolic discounting, experience weighting, social utility, predictive value, and other relevant economic concepts. He also recommends specific tactics when presenting a technology-based innovation to the C-Suite.

2. Laptops → Foster teen success.
Nobody should have to type their high school essays on their phone. A coalition including Silicon Valley leaders and public sector agencies will ensure all California foster teens can own a laptop computer. Foster Care Counts reports evidence that "providing laptop computers to transition age youth shows measurable improvement in self-esteem and academic performance". KQED's California Report ran a fine story.

For a year, researchers at USC's School of Social Work surveyed 730 foster youth who received laptops, finding that "not only do grades and class attendance improve, but self-esteem and life satisfaction increase, while depression drops precipitously."

3. Analytical meritocracy → Better NBA outcomes.
The Innovation Enterprise Sports Channel explains how the NBA draft is becoming an analytical meritocracy. Predictive models help teams evaluate potential picks, including some they might have overlooked. Example: Andre Roberson, who played very little college ball, was drafted successfully by Oklahoma City based on analytics. It's tricky combining projections for active NBA teams with prospects who may never take the court. One decision aid is ESPN’s Draft Projection model, using Statistical Plus/Minus to predict how someone would perform through season five of a hypothetical NBA career. ESPN designates each player as a Superstar, Starter, Role Player, or Bust, to facilitate risk-reward assessments.

4. Celebrity culture → Clash with scientific evidence.
Health law and policy professor Timothy Caulfield (@CaulfieldTim) examines the impact of celebrity culture on people's choices of diet and healthcare. His new book asks Is Gwyneth Paltrow Wrong About Everything?: How the Famous Sell Us Elixirs of Health, Beauty & Happiness. Caulfield cites many, many peer-reviewed sources of evidence.

Evidence & Insights Calendar:

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

February 22-23; London UK. Evidence Europe 2017. How pharma, payers, and patients use real-world evidence to understand and demonstrate drug value and improve care.

Photo credit: Foster Care Counts.

Tuesday, 02 August 2016

Business coaching, manipulating memory for market research, and female VCs.


1. Systematic review: Does business coaching make a difference?
In PLOS ONE, Grover and Furnham present findings of their systematic review of coaching impacts within organizations. They found glimmers of hope for positive results from coaching, but also spotted numerous holes in research designs and data quality.

Over the years, outcome measures have included job satisfaction, performance, self-awareness, anxiety, resilience, hope, autonomy, and goal attainment. Some have measured ROI, although this one seems particularly subjective. In terms of organizational impacts, researchers have measured transformational leadership and performance as rated by others. This systematic review included only professional coaches, whether internal or external to the organization. Thanks @Rob_Briner and @IOPractitioners.

2. Memory bias pollutes market research.
David Paull of Dialsmith hosted a series about how flawed recall and memory bias affect market research. (Thanks to @kristinluck.)

All data is not necessarily good data. “We were consistently seeing a 13–20% misattribution rate on surveys due in large part to recall problems. Resultantly, you get this chaos in your data and have to wonder what you can trust.... Rather than just trying to mitigate memory bias, can we actually use it to our advantage to offset issues with our brands?”

The ethics of manipulating memory. “We can actually affect people’s nutrition and the types of foods they prefer eating.... But should we deliberately plant memories in the minds of people so they can live healthier or happier lives, or should we be banning the use of these techniques?”

Mitigating researchers' memory bias. “We’ve been talking about memory biases for respondents, but we, as researchers, are also very prone to memory biases.... There’s a huge opportunity in qual research to apply an impartial technique that can mitigate (researcher) biases too....[I]n the next few years, it’s going to be absolutely required that anytime you do something that is qualitative in nature that the analysis is not totally reliant on humans.”

3. Female VC → No gender gap for startup funding.
New evidence suggests female entrepreneurs should choose venture capital firms with female partners (SF Business Times). Michigan's Sahil Raina analyzed data to compare the gender gap in successful exits from VC financing between two sets of startups: those initially financed by VCs with only male general partners (GPs), and those initially financed by VCs that include female GPs. “I find a large performance gender gap among startups financed by VCs with only male GPs, but no such gap among startups financed by VCs that include female GPs.”

4. Sharing evidence about student outcomes.
Results for America is launching an Evidence in Education Lab to help states, school districts, and individual schools build and use evidence of 'what works' to improve student outcomes. A handful of states and districts will work closely with RFA to tackle specific data challenges.

Background: The bipartisan Every Student Succeeds Act (ESSA) became law in December 2015. ESSA requires, allows, and encourages the use of evidence-based approaches that can help improve student outcomes. Results for America estimates that ESSA's evidence provisions could help shift more than $2B US of federal education funds in each of the next four years toward evidence-based, results-driven solutions.

Evidence & Insights Calendar:

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 26 July 2016

Evidence relativism, innovation as a process, and decision analysis pioneer.


1. Panning for gold in the evidence stream.
Patrick Lester introduces his new SSIR article by saying "With evidence-based policy, we need to acknowledge that some evidence is more valid than others. Pretending all evidence is equal will only preserve the status quo." In Defining Evidence Down, the director of the Social Innovation Research Center responds to analysts skeptical of evidence hierarchies developed to steer funding toward programs that fit the "what works" concept.

Are levels valid? Hierarchies recognize different levels of evidence according to their rigor and certainty. These rankings are well-established in healthcare, and are becoming the standard for evidence evaluation within the US Department of Education and other government agencies. Critics of this prevailing thinking (Gopal & Schorr, Friends of Evidence) want to ensure decision-makers embrace an inclusive definition of evidence that values qualitative research, case studies, insights from experience, and professional judgment. Lester contends that "Unfortunately, to reject evidence hierarchies is to promote a form of evidence relativism, where everyone is entitled to his or her own views about what constitutes good evidence in his or her own local or individualized context."

Ideology vs. evidence. "By resisting the notion that some evidence is more valid than others, they are defining evidence down. Such relativism would risk a return to the past, where change has too often been driven by fads, ideology, and politics, and where entrenched interests have often preserved the status quo." Other highlights: "...supporting a broad definition of evidence is not the same thing as saying that all evidence is equally valid." And "...randomized evaluations are not the only rigorous way to examine systems-level change. Researchers can often use quasi-experimental evaluations to examine policy changes...."

2. Can innovation be systematic?
Everyone wants innovation nowadays, but how do you make it happen? @HighVizAbility reviews a method called Systematic Inventive Thinking, an approach to creativity, innovation, and problem solving. The idea is to execute as a process, rather than relying on random ideas. Advocates say SIT doesn't replace unbridled creativity, but instead complements it.

3. Remembering decision analysis pioneer Howard Raiffa.
Howard Raiffa, co-founder of the Harvard Kennedy School of Government and decision analysis pioneer, passed away recently. He was also a Bayesian decision theorist and well-known author on negotiation strategies. Raiffa considered negotiation analysis an opportunity for both sides to get value, describing it as The Science and Art of Collaborative Decision Making.

4. Journal impact factor redux?
In the wake of news that Thomson Reuters sold its formula, Stat says changes may finally be coming to the "hated" journal impact factor. Ivan Oransky (@ivanoransky) and Adam Marcus (@armarcus) explain that some evidence suggests science articles don't receive the high number of citations supposedly predicted by the IF. The American Society for Microbiology has announced that it will abandon the metric completely. Meanwhile, top editors from Nature — which in the past has taken pride in its IF — have coauthored a paper widely seen as critical of the factor.

Photo credit: Poke of Gold by Mike Beauregard

Thursday, 21 July 2016

Academic clickbait, FCC doesn't use economics, and tobacco surcharges don't work.


1. Academics use crazy tricks for clickbait.
Turn to @TheWinnower for an insightful analysis of academic article titles, and how their authors sometimes mimic techniques used for clickbait. Positively framed titles (those stating a specific finding) fare better than vague ones: For example, 'smoking causes lung cancer' vs. 'the relationship between smoking and lung cancer'. Nice use of altmetrics to perform the analysis.

2. FCC doesn't use cost-benefit analysis.
Critics claim Federal Communications Commission policymaking has swerved away from econometric evidence and economic theory. Federal agencies including the EPA must submit cost-benefit analyses to support new regulations, but the FCC is exempted, "free to embrace populism as its guiding principle". @CALinnovates has published a new paper, The Curious Absence of Economic Analysis at the Federal Communications Commission: An Agency In Search of a Mission. Former FCC Chief Economist Gerald Faulhaber, PhD and Hal Singer, PhD review the agency’s "proud history at the cutting edge of industrial economics and its recent divergence from policymaking grounded in facts and analysis".

3. No bias in US police shootings?
There's plenty of evidence showing bias in US police use of force, but not in shootings, says one researcher. But Data Colada, among others, notes that "an interesting empirical challenge for interpreting the shares of Whites vs Blacks shot by police while being arrested is that biased officers, those overestimating the threat posed by a Black civilian, will arrest less dangerous Blacks on average. They will arrest those posing a real threat, but also some not posing a real threat, resulting in lower average threat among those arrested by biased officers."

4. Tobacco surcharges don't work.
The Affordable Care Act allows insurers to impose tobacco surcharges on smokers. But findings suggest the surcharges have not led more people to stop smoking.

5. CEOs lose faith in forecasts.
Some CEOs say big-data predictions are failing. “The so-called experts and global economists are proven as often to be wrong as right these days,” claims a WSJ piece In Uncertain Times, CEOs Lose Faith in Forecasts. One consultant advises people to "rely less on forecasts and instead road-test ideas with customers and make fast adjustments when needed. He urges them to supplement big-data predictions with close observation of their customers."

6. Is fMRI evidence flawed?
Motherboard's Why Two Decades of Brain Research Could Be Seriously Flawed recaps research by Anders Eklund. Cost is one reason, he argues: fMRI scans are notoriously expensive. "That makes it hard for researchers to perform large-scale studies with lots of patients". Eklund has written elsewhere about this (Can parametric statistical methods be trusted for fMRI based group studies?), and the issue is being noticed by Neuroskeptic and Science-Based Medicine ("It’s tempting to think that the new idea or technology is going to revolutionize science or medicine, but history has taught us to be cautious. For instance, antioxidants, it turns out, are not going to cure a long list of diseases").

Evidence & Insights Calendar:

August 24-25; San Francisco. Sports Analytics Innovation Summit.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 19 July 2016

Are you causing a ripple? How to assess research impact.


People are recognizing the critical need for meta-research, or the 'science of science'. One focus area is understanding whether research produces desired outcomes, and identifying how to ensure that truly happens going forward. Research impact assessment (RIA) is particularly important when holding organizations accountable for their management of public and donor funding. An RIA community of practice is emerging.

Are you causing a ripple? For those wanting to lead an RIA effort, the International School on Research Impact Assessment was developed to "empower participants on how to assess, measure and optimise research impact with a focus on biomedical and health sciences." ISRIA is a partnership between Alberta Innovates, the Agency for Health Quality and Assessment of Catalonia, and RAND Europe. They're presenting their fourth annual program Sept 19-23 in Melbourne, Australia, hosted by the Commonwealth Scientific and Industrial Research Organisation, Australia’s national research agency.

ISRIA participants are typically in program management, evaluation, knowledge translation, or policy roles. They learn a range of frameworks, tools, and approaches for assessing research impact, and how to develop evidence about 'what works'.

Make an impact with your impact assessment. Management strategies are also part of the curriculum: Embedding RIA systemically into organizational practice, reaching agreement on effective methods and reporting, understanding the audience for RIAs, and knowing how to effectively communicate results to various stakeholders.

The 2016 program will cover both qualitative and quantitative analytical methods, along with a mixed design. It will include sessions on evaluating economic, environmental and social impacts. The aim is to expose participants to as many options as possible, including new methods, such as altmetrics. (Plus, there's a black tie event on the first evening.)

 

Photo credit: Raindrops in a Bucket by Smabs Sputzer.