Evidence Soup
How to find, use, and explain evidence.

22 posts categorized "evidence for the C-Suite"

Tuesday, 20 December 2016

Choices, policy, and evidence-based investment.


1. Bad Arguments → Bad Choices
Great news. There will be a follow-on to the excellent Bad Arguments book by @alialmossawi. The book of Bad Choices will be released this April by major publishers. You can preorder now.

2. Evidence-based decisions → Effective policy outcomes
The conservative think tank, Heritage Foundation, is advocating for evidence-based decisions in the Trump administration. Their recommendations include resurrecting PART (the Program Assessment Rating Tool) from the George W. Bush era, which ranked federal programs according to effectiveness. "Blueprint for a New Administration offers specific steps that the new President and the top officers of all 15 cabinet-level departments and six key executive agencies can take to implement the long-term policy visions reflected in Blueprint for Reform." Read a nice summary here by Patrick Lester at the Social Innovation Research Center (@SIRC_tweets).


3. Pioneer drugs → Investment value
"Why do pharma firms sometimes prioritize 'me-too' R&D projects over high-risk, high-reward 'pioneer' programs?" asks Frank David at Pharmagellan (@Frank_S_David). "[M]any pharma financial models assume first-in-class drugs will gain commercial traction more slowly than 'followers.' The problem is that when a drug’s projected revenues are delayed in a financial forecast, this lowers its net present value – which can torpedo the already tenuous investment case for a risky, innovative R&D program." Their research suggests that pioneer drugs see peak sales around 6 years, similar to followers: "Our finding that pioneer drugs are adopted no more slowly than me-too ones could help level the economic playing field and make riskier, but often higher-impact, R&D programs more attractive to executives and investors."

Details appear in the Nature Reviews article, Drug launch curves in the modern era. Pharmagellan will soon release a book on biotech financial modeling.

4. Unrealistic expectations → Questioning 'evidence-based medicine'
As we've noted before, @EvidenceLive has a manifesto addressing how to make healthcare decisions, and how to communicate evidence. The online comments are telling: Evidence-based medicine is perhaps more of a concept than a practical thing. The spot-on @trishgreenhalgh says "The world is messy. There is no view from nowhere, no perspective that is free from bias."

Evidence & Insights Calendar.

Jan 23-25, London: Advanced Pharma Analytics 2017. Spans topics from machine learning to drug discovery, real-world evidence, and commercial decision making.

Feb 1-2, San Francisco. Advanced Analytics for Clinical Data 2017. All about accelerating clinical R&D with data-driven decision making for drug development.

Tuesday, 15 November 2016

Building trust with evidence-based insights.


This week we examine how executives can more fully grasp complex evidence/analysis affecting their outcomes - and how analytics professionals can better communicate these findings to executives. Better performance and more trust are the payoffs.

1. Show how A → B. Our new guide to Promoting Evidence-Based Insights explains how to engage stakeholders with a data value story. Shape content around four essential elements: Top-line, evidence-based, bite-size, and reusable. It's a suitable approach whether you're in marketing, R&D, analytics, or advocacy.

No knowledge salad. To avoid tl;dr or MEGO (My Eyes Glaze Over), be sure to emphasize insights that matter to stakeholders. Explicitly connect specific actions with important outcomes, identify your methods, and provide a simple visual - this establishes trust and credibility. Be succinct; you can drill down into detailed evidence later. The guide is free from Ugly Research.



2. Lack of analytics understanding → Lack of trust.
Great stuff from KPMG: Building trust in analytics: Breaking the cycle of mistrust in D&A. "We believe that organizations must think about trusted analytics as a strategic way to bridge the gap between decision-makers, data scientists and customers, and deliver sustainable business results. In this study, we define four ‘anchors of trust’ which underpin trusted analytics. And we offer seven key recommendations to help executives improve trust throughout the D&A value chain.... It is not a one-time communication exercise or a compliance tick-box. It is a continuous endeavor that should span the D&A lifecycle from data through to insights and ultimately to generating value."

Analytics professionals aren't feeling the C-Suite love. Information Week laments the lack of transparency around analytics: When non-data professionals don't know or understand how analytics is performed, a lack of trust follows. That doesn't mean the analytics efforts themselves aren't worthy of trust - it means the non-data pros don't know enough about these efforts to trust them.



3. Execs understand advanced analytics → See how to improve business
McKinsey has an interesting take on this. "Execs can't avoid understanding advanced analytics - can no longer just 'leave it to the experts' because they must understand the art of the possible for improving their business."

Analytics expertise is widespread in operational realms such as manufacturing and HR. Finance data science must be a priority for CFOs to secure a place at the planning table. Mary Driscoll explains that CFOs want analysts trained in finance data science. "To be blunt: When [line-of-business] decision makers are using advanced analytics to compare, say, new strategies for volume, pricing and packaging, finance looks silly talking only in terms of past accounting results."

4. Macroeconomics is a pseudoscience.
NYU professor Paul Romer's The Trouble With Macroeconomics is a widely discussed, skeptical analysis of macroeconomics. The opening to his abstract is excellent, making a strong point right out of the gate. Great writing, great questioning of tradition. "For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as 'tight monetary policy can cause a recession.'" Other critics also seek transparency: Alan Jay Levinovitz writes in @aeonmag The new astrology: By fetishising mathematical models, economists turned economics into a highly paid pseudoscience.

5. Better health evidence to a wider audience.
From the Evidence Live Manifesto: Improving the development, dissemination, and implementation of research evidence for better health.

"7. Evidence Communication.... 7.2 Better communication of research: High quality, important research that matters has to be understandable and informative to a wide audience. Yet , much of what is currently produced is not directed to a lay audience, is often poorly constructed and is underpinned by a lack of training and guidance in this area." Thanks to Daniel Barth-Jones (@dbarthjones).

Photo credit: Steve Lav - Trust on Flickr

Tuesday, 20 September 2016

Social program science, gut-bias decision test, and enough evidence already.


"The driving force behind MDRC is a conviction that reliable evidence, well communicated, can make an important difference in social policy." -Gordon L. Berlin, President, MDRC

1. Slice of the week: Can behavioral science improve the delivery of child support programs? Yes. Understanding how people respond to communications has improved outcomes. State programs shifted from heavy packets of detailed requirements to simple emails and postcard reminders. (Really, did this require behavioral science? Not to discount the excellent work by @CABS_MDRC, but it seems pretty obvious. Still, a promising outcome.)

Applying Behavioral Science to Child Support: Building a Body of Evidence comes to us from MDRC, a New York-based institute that builds knowledge around social policy.

Data: Collected using random assignment and analyzed with descriptive statistics.

Evidence: Support payments increased with reminders. Simple notices (email or postcards) sent to parents not previously receiving them increased the number of parents making at least one payment by 3%.

Relationship: behaviorally informed interventions → solve → child support problems
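
For intuition, here's what the underlying descriptive comparison looks like in code - a minimal sketch where all counts are hypothetical, chosen only to mirror the reported ~3% lift:

```python
# Hypothetical sketch: compare payment rates between randomly assigned
# reminder and control groups. All counts are invented for illustration.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

made_payment = np.array([520, 460])   # parents making >= 1 payment
group_size = np.array([2000, 2000])   # reminder group, control group

rates = made_payment / group_size
stat, pval = proportions_ztest(made_payment, group_size)
print(f"reminder: {rates[0]:.1%}  control: {rates[1]:.1%}  "
      f"difference: {rates[0] - rates[1]:.1%}  (p = {pval:.3f})")
# difference: 3.0% -- the size of lift the MDRC study reports
```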


“A commitment to using best evidence to support decision making in any field is an ethical commitment.”
-Dónal O’Mathuna @DublinCityUni

2. How to test your decision-making instincts.
Writing for McKinsey, Andrew Campbell and Jo Whitehead have studied executive decision-making. They suggest asking yourself these four questions to ensure you're drawing on appropriate experiences and emotions. "Leaders cannot prevent gut instinct from influencing their judgments. What they can do is identify situations where it is likely to be biased and then strengthen the decision process to reduce the resulting risk."

Familiarity test: Have we frequently experienced identical or similar situations?
Feedback test: Did we get reliable feedback in past situations?
Measured-emotions test: Are the emotions we have experienced in similar or related situations measured?
Independence test: Are we likely to be influenced by any inappropriate personal interests or attachments?

Relationship: Test of instincts → reduces → decision bias
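
For fun, the four tests can be read as a simple checklist. Here's a toy sketch (the wording and scoring are our own invention, not Campbell and Whitehead's):

```python
def gut_check(familiarity: bool, feedback: bool,
              measured_emotions: bool, independence: bool) -> str:
    """Return a verdict based on which of the four tests a decision fails."""
    failed = [name for name, passed in [
        ("familiarity", familiarity),
        ("feedback", feedback),
        ("measured emotions", measured_emotions),
        ("independence", independence),
    ] if not passed]
    if not failed:
        return "Instincts are reasonably trustworthy here."
    return "Strengthen the decision process; failed tests: " + ", ".join(failed)

# Example: a familiar situation, but past feedback was unreliable.
print(gut_check(familiarity=True, feedback=False,
                measured_emotions=True, independence=True))
```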


3. When is enough evidence enough?
At what point should we agree on the evidence, stop evaluating, and move on? Determining this is particularly difficult where public health is concerned. Despite all the available findings, investigators continue to study the costs and benefits of statin drugs. A new Lancet review takes a comprehensive look and makes a strong case for this important drug class. "Large-scale evidence from randomised trials shows that statin therapy reduces the risk of major vascular events" and "claims that statins commonly cause adverse effects reflect a failure to recognise the limitations of other sources of evidence about the effects of treatment".

The insightful Richard Lehman (@RichardLehman1) provides a straightforward summary: The treatment is so successful that the "main adverse effect of statins is to induce arrogance in their proponents." And Larry Husten explains that Statin Trialists Seek To Bury Debate With Evidence.


Photo credit: paperwork by Camilo Rueda López on Flickr.

Tuesday, 30 August 2016

Social determinants of health, nonfinancial performance metrics, and satisficers.

Dear reader: Evidence Soup is starting a new chapter. Our spotlight topics are now accompanied by a 'newsletter' version of a PepperSlice, the capsule form of evidence-based analysis we've created at PepperSlice.com. Let me know what you think, and thanks for your continued readership. - Tracy Altman

1. Is social services spending associated with better health outcomes? Yes.
Evidence has revealed a significant association between healthcare outcomes and the ratio of social service to healthcare spending in various OECD countries. Now a new study, published in Health Affairs, finds this same pattern within the US. The health differences were substantial. For instance, a 20% change in the median social-to-health spending ratio was associated with 85,000 fewer adults with obesity and more than 950,000 fewer adults with mental illness. Elizabeth Bradley and Lauren Taylor explain on the RWJF Culture of Health blog.

This is great, but we wonder: Where/what is the cause-effect relationship?

The Evidence. Peer-reviewed: Variation In Health Outcomes: The Role Of Spending On Social Services, Public Health, And Health Care, 2000-09.

Data: Collected using longitudinal state-level spending data and analyzed with repeated measures multivariable modeling, retrospective.

Relationship: Social : medical spending → associated → better health outcomes

From the authors: "Reorienting attention and resources from the health care sector to upstream social factors is critical, but it’s also an uphill battle. More research is needed to characterize how the health effects of social determinants like education and poverty act over longer time horizons. Stakeholders need to use information about data on health—not just health care—to make resource allocation decisions."

#: statistical_modeling social_determinants population_health social_services health_policy

2. Are nonfinancial metrics good leading indicators of financial performance? Maybe.
During the '90s and early '00s we heard a lot about Kaplan and Norton's Balanced Scorecard. A key concept was the use of nonfinancial management metrics such as customer satisfaction, employee engagement, and openness to innovation. This was thought to encourage actions that increase a company’s long-term value, rather than maximizing short-term financials.

The idea has taken hold, and nonfinancial metrics are often used in designing performance management systems and executive compensation plans. But not everyone is a fan: Some argue this can actually be harmful; for instance, execs might prioritize customer sat over other performance areas. Recent findings in the MIT Sloan Management Review look at whether these metrics truly are leading indicators of financial performance.

The Evidence. Business journal: Are Nonfinancial Metrics Good Leading Indicators of Future Financial Performance?

Data: Collected from American Customer Satisfaction Index, ExecuComp, and Compustat and analyzed with econometrics: panel data analysis.

Relationship: Nonfinancial metrics → predict → Financial performance

From the authors: "We found that there were notable variations in the lead indicator strength of customer satisfaction in a sample of companies drawn from different industries. For instance, for a chemical company in our sample, customer satisfaction’s lead indicator strength was negative; this finding is consistent with prior research suggesting that in many industries, the expense required to increase customer satisfaction can’t be justified. By contrast, for a telecommunications company we studied, customer satisfaction was a strong leading indicator; this finding is consistent with evidence showing that in many service industries, customer satisfaction reduces customer churn and price sensitivity. For a professional service firm in our sample, the lead indicator strength of customer satisfaction was weak; this is consistent with evidence showing that for such services, measures such as trust provide a clearer indication of the economic benefits than customer satisfaction.... Knowledge of whether a nonfinancial metric such as customer satisfaction is a strongly positive, weakly positive, or negative lead indicator of future financial performance can help companies avoid the pitfalls of using a nonfinancial metric to incentivize the wrong behavior."

#: customer_satisfaction nonfinancial balanced_scorecard CEO_compensation performance_management

3. Reliable evidence about p values.
Daniël Lakens (@lakens) puts it very well, saying "One of the most robust findings in psychology replicates again: Psychologists misinterpret p-values." This from Frontiers in Psychology.

4. Satisficers are happier.
Fast Company's article sounds at first just like clickbait, but they have a point: you can change how you see things and reset your expectations. See The Surprising Scientific Link Between Happiness And Decision Making.

Evidence & Insights Calendar:

September 19-21; Boston. FierceBiotech Drug Development Forum. Evaluate challenges, trends, and innovation in drug discovery and R&D. Covering the entire drug development process, from basic research through clinical trials.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.


Photo credit: Fat cat by brokinhrt2 on Flickr.

Tuesday, 02 August 2016

Business coaching, manipulating memory for market research, and female VCs.


1. Systematic review: Does business coaching make a difference?
In PLOS ONE, Grover and Furnham present findings from their systematic review of coaching impacts within organizations. They found glimmers of hope for positive results from coaching, but also spotted numerous holes in research designs and data quality.

Over the years, outcome measures have included job satisfaction, performance, self-awareness, anxiety, resilience, hope, autonomy, and goal attainment. Some have measured ROI, although this one seems particularly subjective. In terms of organizational impacts, researchers have measured transformational leadership and performance as rated by others. This systematic review included only professional coaches, whether internal or external to the organization. Thanks @Rob_Briner and @IOPractitioners.

2. Memory bias pollutes market research.
David Paull of Dialsmith hosted a series about how flawed recall and memory bias affect market research. (Thanks to @kristinluck.)

All data is not necessarily good data. “We were consistently seeing a 13–20% misattribution rate on surveys due in large part to recall problems. Resultantly, you get this chaos in your data and have to wonder what you can trust.... Rather than just trying to mitigate memory bias, can we actually use it to our advantage to offset issues with our brands?”

The ethics of manipulating memory. “We can actually affect people’s nutrition and the types of foods they prefer eating.... But should we deliberately plant memories in the minds of people so they can live healthier or happier lives, or should we be banning the use of these techniques?”

Mitigating researchers' memory bias. “We’ve been talking about memory biases for respondents, but we, as researchers, are also very prone to memory biases.... There’s a huge opportunity in qual research to apply an impartial technique that can mitigate (researcher) biases too....[I]n the next few years, it’s going to be absolutely required that anytime you do something that is qualitative in nature that the analysis is not totally reliant on humans.”

3. Female VC → No gender gap for startup funding.
New evidence suggests female entrepreneurs should choose venture capital firms with female partners (SF Business Times). Michigan's Sahil Raina analyzed data to compare the gender gap in successful exits from VC financing between two sets of startups: those initially financed by VCs with only male general partners (GPs), and those initially financed by VCs that include female GPs. “I find a large performance gender gap among startups financed by VCs with only male GPs, but no such gap among startups financed by VCs that include female GPs.”

4. Sharing evidence about student outcomes.
Results for America is launching an Evidence in Education Lab to help states, school districts, and individual schools build and use evidence of 'what works' to improve student outcomes. A handful of states and districts will work closely with RFA to tackle specific data challenges.

Background: The bipartisan Every Student Succeeds Act (ESSA) became law in December 2015. ESSA requires, allows, and encourages the use of evidence-based approaches that can help improve student outcomes. Results for America estimates that ESSA's evidence provisions could help shift more than $2B US of federal education funds in each of the next four years toward evidence-based, results-driven solutions.

Evidence & Insights Calendar:

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 26 July 2016

Evidence relativism, innovation as a process, and decision analysis pioneer.


1. Panning for gold in the evidence stream.
Patrick Lester introduces his new SSIR article by saying "With evidence-based policy, we need to acknowledge that some evidence is more valid than others. Pretending all evidence is equal will only preserve the status quo." In Defining Evidence Down, the director of the Social Innovation Research Center responds to analysts skeptical of evidence hierarchies developed to steer funding toward programs that fit the "what works" concept.

Are levels valid? Hierarchies recognize different levels of evidence according to their rigor and certainty. These rankings are well-established in healthcare, and are becoming the standard for evidence evaluation within the Dept of Education and other US government agencies. Critics of this prevailing thinking (Gopal & Schorr, Friends of Evidence) want to ensure decision-makers embrace an inclusive definition of evidence that values qualitative research, case studies, insights from experience, and professional judgment. Lester contends that "Unfortunately, to reject evidence hierarchies is to promote a form of evidence relativism, where everyone is entitled to his or her own views about what constitutes good evidence in his or her own local or individualized context."

Ideology vs. evidence. "By resisting the notion that some evidence is more valid than others, they are defining evidence down. Such relativism would risk a return to the past, where change has too often been driven by fads, ideology, and politics, and where entrenched interests have often preserved the status quo." Other highlights: "...supporting a broad definition of evidence is not the same thing as saying that all evidence is equally valid." And "...randomized evaluations are not the only rigorous way to examine systems-level change. Researchers can often use quasi-experimental evaluations to examine policy changes...."

2. Can innovation be systematic?
Everyone wants innovation nowadays, but how do you make it happen? @HighVizAbility reviews a method called Systematic Inventive Thinking, an approach to creativity, innovation, and problem solving. The idea is to execute as a process, rather than relying on random ideas. Advocates say SIT doesn't replace unbridled creativity, but instead complements it.

3. Remembering decision analysis pioneer Howard Raiffa.
Howard Raiffa, co-founder of the Harvard Kennedy School of Government and decision analysis pioneer, passed away recently. He was also a Bayesian decision theorist and well-known author on negotiation strategies. Raiffa considered negotiation analysis an opportunity for both sides to get value, describing it as The Science and Art of Collaborative Decision Making.

4. Journal impact factor redux?
In the wake of news that Thomson Reuters sold its formula, Stat says changes may finally be coming to the "hated" journal impact factor. Ivan Oransky (@ivanoransky) and Adam Marcus (@armarcus) explain that some evidence suggests science articles don't receive the high number of citations supposedly predicted by the IF. The American Society for Microbiology has announced that it will abandon the metric completely. Meanwhile, top editors from Nature — which in the past has taken pride in its IF — have coauthored a paper widely seen as critical of the factor.

Photo credit: Poke of Gold by Mike Beauregard

Monday, 18 July 2016

Stand up for science, evidence for surgery, and labeling data for quality.


1. Know someone who effectively promotes evidence?
Nominations are open for the 2016 John Maddox Prize for Standing up for Science, recognizing an individual who promotes sound science and evidence on a matter of public interest, facing difficulty or hostility in doing so.

Researchers in any area of science or engineering, or those who work to address misleading information and bring evidence to the public, are eligible. Sense About Science (@senseaboutsci) explains that the winner will be someone who effectively promotes evidence despite challenge, difficulty, or adversity, and who takes responsibility for public discussion beyond what would be expected of someone in their position. Nominations are welcome until August 1.

2. Evidence to improve surgical outcomes.
Based in Oxford, UK, the IDEAL Collaboration is an initiative to improve the quality of research in surgery, radiotherapy, physiotherapy, and other complex interventions. The IDEAL model (@IDEALCollab) describes the stages of innovation in surgery: Idea, Development, Exploration, Assessment, and Long-term Study. Besides its annual conference, the collaboration also proposes and advocates for assessment frameworks, such as the recent IDEAL-D for assessing medical device safety and efficacy.

3. Can data be labeled for quality?
Jim Harris (@ocdqblog) describes must-haves for data quality. His SAS blog post compares consuming data without knowing its quality to purchasing unlabeled food. Possible solution: A data-quality 'label' could be implemented as a series of yes/no or pass/fail flags appended to all data structures. These could indicate whether all critical fields were completed, and whether specific fields were populated with a valid format and value.
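
Here's a minimal sketch of how such a label might be implemented - the critical fields and validity rules below are invented for illustration, not from Harris's post:

```python
from dataclasses import dataclass

import pandas as pd

@dataclass
class QualityLabel:
    critical_fields_complete: bool  # all critical fields populated?
    formats_valid: bool             # values parse in the expected format?
    values_in_range: bool           # values fall within valid bounds?

def label_quality(df: pd.DataFrame) -> QualityLabel:
    critical = ["patient_id", "visit_date"]  # invented critical fields
    complete = df[critical].notna().all().all()
    valid_fmt = pd.to_datetime(df["visit_date"], errors="coerce").notna().all()
    in_range = df["age"].between(0, 120).all()  # invented validity rule
    return QualityLabel(bool(complete), bool(valid_fmt), bool(in_range))
```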

4. Could artificial intelligence replace executives?
In the MIT Sloan Management Review, Sam Ransbotham asks Can Artificial Intelligence Replace Executive Decision Making? ***insert joke here*** Most problems faced by executives are unique, poorly documented, and lacking the structured data needed to train an artificial intelligence system. More useful than a search for concrete patterns would be analogies and examples of similar decisions. AI needs repetition, and most executive decisions don't lend themselves to A/B testing or other research methods. However, some routine, smaller issues could eventually be handled by cognitive computing.

Evidence & Insights Calendar:

August 24-25; San Francisco. Sports Analytics Innovation Summit.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded by the Agency of Health Quality and Assessment, RAND Europe, and Alberta Innovates.

Tuesday, 28 June 2016

Open innovation, the value of pharmaceuticals, and liberal-vs-conservative stalemates.


1. Open Innovation can up your game.
Open Innovation → Better Evidence. Scientists with an agricultural company tell a fascinating story about open innovation success. Improving Analytics Capabilities Through Crowdsourcing (Sloan Review) describes a years-long effort to tap into expertise outside the organization. Over eight years, Syngenta used open-innovation platforms to develop a dozen data-analytics tools, which ultimately revolutionized the way it breeds soybean plants. "By replacing guesswork with science, we are able to grow more with less."

Many open innovation platforms run contests between individuals (think Kaggle), and some facilitate teams. One of these platforms, InnoCentive, hosts mathematicians, physicists, and computer scientists eager to put their problem-solving skills to the test. There was a learning curve, to be sure (example: divide big problems into smaller pieces). Articulating the research question was challenging to say the least.

Several of the associated projects could be tackled by people without subject matter expertise; other steps required knowledge of the biological science, complicating the task of finding team members. But eventually Syngenta "harnessed outside talent to come up with a tool that manages the genetic component of the breeding process — figuring out which soybean varieties to cross with one another and which breeding technique will most likely lead to success." The company reports substantial results from this collaboration: The average rate of improvement of its portfolio grew from 0.8 to 2.5 bushels per acre per year.


2. How do you tie drug prices to value?
Systematic Analysis → Better Value for Patients. It's the age-old question: How do you put a dollar value on intangibles - particularly human health and wellbeing? As sophisticated pharmaceuticals succeed in curing more diseases, their prices are climbing. Healthcare groups have developed 'value frameworks' to guide decision-making about these molecules. It's still a touchy subject to weigh the cost of a prescription against potential benefits to a human life.

These frameworks address classic problems, and are useful examples for anyone formalizing the steps of complex decision-making - inside or outside of healthcare. For example, one cancer treatment may be likely to extend a patient's life by 30 to 45 days compared to another, but at much higher cost, or with unacceptable side effects. Value frameworks help people consider these factors.

@ContextMatters studies processes for drug evaluation and regulatory approval. In Creating a Global Context for Value, they compare the different methods of determining whether patients are getting high value. Their Value Framework Comparison Table highlights key evaluation elements from three value frameworks (ASCO, NCCN, ICER) and three health technology assessments (CADTH, G-BA, NICE).


3. Evidence overcomes the liberal-vs-conservative stalemate.
Evidence-based Programs → Lower Poverty. Veterans of the Bloomberg mayoral administration describe a data-driven strategy to reduce poverty in New York. Results for America Senior Fellows Robert Doar and Linda Gibbs share an insider's perspective in "New York City's Turnaround on Poverty: Why poverty in New York – unlike in other major cities – is dropping."

Experimentation was combined with careful attention to which programs succeeded (Paycheck Plus) and which didn't (Family Rewards). A key factor, common to any successful decision analysis effort: When a program didn't produce the intended results, advocates weren't cast aside as failures. Instead, that evidence was blended with the rest to continuously improve. The authors found that "Solid evidence can trump the liberal-versus-conservative stalemate when the welfare of the country’s most vulnerable people is at stake."

Tuesday, 21 June 2016

Free beer! and the "Science of X".


1. Free beer for a year for anyone who can work perfume, velvety voice, and 'Q1 revenue goals were met' into an appropriate C-Suite presentation.
Prezi is a very nice tool enabling you to structure a visual story, without forcing a linear, slide-by-slide presentation format. The best part is you can center an entire talk around one graphic or model, and then dive into details depending on audience response. (Learn more in our writeup on How to Present Like a Boss.)

Now there's a new marketing campaign, the Science of Presentations. Prezi made a darn nice web page. And the ebook offers several useful insights into how to craft and deliver a memorable presentation (e.g., enough with the bullet points already).

But in their pursuit of click-throughs, they've gone too far. It's tempting to claim you're following the "Science of X". To some extent, Prezi provides citations to support its recommendations: The ebook links to a few studies on audience response and so forth. But that's not a "science" - the citations don't always connect to what Prezi suggests business professionals do. Example: "Numerous studies have found that metaphors and descriptive words or phrases — things like 'perfume' and 'she had a velvety voice' - trigger the sensory cortex.... On the other hand, when presented with nondescriptive information — for example, 'The marketing team reached all of its revenue goals in Q1' — the only parts of our brain that are activated are the ones responsible for understanding language. Instead of experiencing the content with which we are being presented, we are simply processing it."

Perhaps in this case "simply processing" the good news is enough experience for a busy executive. But our free beer offer still stands.

2. How should medical guidelines be communicated to patients?

And now for the 'Science of Explaining Guidelines'. It's hard enough to get healthcare professionals to agree on a medical guideline - and then follow it. But it's also hard to decide whether/how those recommendations should be communicated to patients. Many of the specifics are intended for providers' consumption, to improve their practice of medicine. Although it's essential that patients understand relevant evidence, translating a set of recommendations into lay terms is quite problematic.

Groups publish medical guidelines to capture evidence-based recommendations for addressing a particular disease. Sometimes these are widely accepted - and other times not. The poster-child example of breast cancer screening illustrates why patients, and not just providers, must be able to understand guidelines. Implementation Science recently published the first systematic review of methods for disseminating guidelines to patients.

Not surprisingly, the study found weak evidence of methods that are consistently feasible. "Key factors of success were a dissemination plan, written at the start of the recommendation development process, involvement of patients in this development process, and the use of a combination of traditional and innovative dissemination tools." (Schipper et al.)

3. Telling a story with data.
In the Stanford Social Innovation Review (SSIR), @JakePorway explains three things great data storytellers do differently [possible paywall]. Jake is with @DataKind, "harnessing the power of data science in service of humanity".


Photo credit: Christian Hornick on Flickr.

Tuesday, 29 March 2016

Rapid is the new black, how to ask for money, and should research articles be free?


1. #rapidisthenewblack

The need for speed is paramount, so it's crucial that we test ideas and synthesize evidence quickly without losing necessary rigor. Examples of people working hard to get it right:

  • The Digital Health Breakthrough Network is a very cool idea, supported by an A-list team. They (@AskDHBN) seek New York City-based startups who want to test technology in rigorous pilot studies. The goal is rapid validation of early-stage startups with real end users. Apply here.
  • The UK's fantastic Alliance for Useful Evidence (@A4UEvidence) asks Rapid Evidence Assessments: A bright idea or a false dawn? "Research synthesis will be at the heart of the government’s new What Works centres" - equally true in the US. The idea is "seductive: the rigour of a systematic review, but one that is cheaper and quicker to complete." Much depends on whether the review maps easily onto an existing field of study.
  • Jon Brassey of the Trip database is exploring methods for rapid reviews of health evidence. See Rapid-Reviews.info or @rapidreviews_i.
  • Miles McNall and Pennie G. Foster-Fishman of Michigan State (ouch, still can't get over that bracket-busting March Madness loss) present methods and case studies for rapid evaluations and assessments. In the American Journal of Evaluation, they caution that the central issue is balancing speed and trustworthiness.

2. The science of asking for donations: Unit asking method.
How much would you give to help one person in need? How much would you give to help 20 people? This is the concept behind the unit asking method, a way to make philanthropic fund-raising more successful: asking donors first about a single person (the unit), and only then about the full group, raises how much they give for the many.

3. Should all research papers be free? 
Good stuff from the New York Times on the conflict between scholarly journal paywalls and Sci-Hub.

4. Now your spreadsheet can tell you what's going on.
Savvy generates a narrative for business intelligence charts in Qlik or Excel.