Evidence Soup
How to find, use, and explain evidence.

Tuesday, 26 July 2016

Evidence relativism, innovation as a process, and a decision analysis pioneer.


1. Panning for gold in the evidence stream.
Patrick Lester introduces his new SSIR article by saying "With evidence-based policy, we need to acknowledge that some evidence is more valid than others. Pretending all evidence is equal will only preserve the status quo." In Defining Evidence Down, the director of the Social Innovation Research Center responds to analysts skeptical of evidence hierarchies developed to steer funding toward programs that fit the "what works" concept.

Are levels valid? Hierarchies rank evidence according to its rigor and certainty. These rankings are well established in healthcare, and are becoming the standard for evidence evaluation within the Dept of Education and other US government agencies. Critics of this prevailing thinking (Gopal & Schorr, Friends of Evidence) want to ensure decision-makers embrace an inclusive definition of evidence that values qualitative research, case studies, insights from experience, and professional judgment. Lester contends that "Unfortunately, to reject evidence hierarchies is to promote a form of evidence relativism, where everyone is entitled to his or her own views about what constitutes good evidence in his or her own local or individualized context."

Ideology vs. evidence. "By resisting the notion that some evidence is more valid than others, they are defining evidence down. Such relativism would risk a return to the past, where change has too often been driven by fads, ideology, and politics, and where entrenched interests have often preserved the status quo." Other highlights: "...supporting a broad definition of evidence is not the same thing as saying that all evidence is equally valid." And "...randomized evaluations are not the only rigorous way to examine systems-level change. Researchers can often use quasi-experimental evaluations to examine policy changes...."

2. Can innovation be systematic?
Everyone wants innovation nowadays, but how do you make it happen? @HighVizAbility reviews a method called Systematic Inventive Thinking (SIT), an approach to creativity, innovation, and problem solving. The idea is to treat innovation as a repeatable process, rather than relying on random ideas. Advocates say SIT doesn't replace unbridled creativity, but instead complements it.

3. Remembering decision analysis pioneer Howard Raiffa.
Howard Raiffa, co-founder of the Harvard Kennedy School of Government and decision analysis pioneer, passed away recently. He was also a Bayesian decision theorist and well-known author on negotiation strategies. Raiffa considered negotiation analysis an opportunity for both sides to get value, describing it as The Science and Art of Collaborative Decision Making.

4. Journal impact factor redux?
In the wake of news that Thomson Reuters sold its formula, Stat says changes may finally be coming to the "hated" journal impact factor. Ivan Oransky (@ivanoransky) and Adam Marcus (@armarcus) explain that some evidence suggests science articles don't receive the high number of citations supposedly predicted by the IF. The American Society for Microbiology has announced that it will abandon the metric completely. Meanwhile, top editors from Nature — which in the past has taken pride in its IF — have coauthored a paper widely seen as critical of the factor.

Photo credit: Poke of Gold by Mike Beauregard

Thursday, 21 July 2016

Academic clickbait, FCC doesn't use economics, and tobacco surcharges don't work.


1. Academics use crazy tricks for clickbait.
Turn to @TheWinnower for an insightful analysis of academic article titles, and how their authors sometimes mimic techniques used for clickbait. Positively framed titles (those stating a specific finding) fare better than vague ones: For example, 'smoking causes lung cancer' vs. 'the relationship between smoking and lung cancer'. Nice use of altmetrics to perform the analysis.

2. FCC doesn't use cost-benefit analysis.
Critics claim Federal Communications Commission policymaking has swerved away from econometric evidence and economic theory. Federal agencies including the EPA must submit cost-benefit analyses to support new regulations, but the FCC is exempt, "free to embrace populism as its guiding principle". @CALinnovates has published a new paper, The Curious Absence of Economic Analysis at the Federal Communications Commission: An Agency In Search of a Mission. Former FCC Chief Economist Gerald Faulhaber, PhD, and Hal Singer, PhD, review the agency’s "proud history at the cutting edge of industrial economics and its recent divergence from policymaking grounded in facts and analysis".

3. No bias in US police shootings?
There's plenty of evidence showing bias in US police use of force, but not in shootings, says one researcher. But Data Colada, among others, describes "an interesting empirical challenge for interpreting the shares of Whites vs Blacks shot by police while being arrested is that biased officers, those overestimating the threat posed by a Black civilian, will arrest less dangerous Blacks on average. They will arrest those posing a real threat, but also some not posing a real threat, resulting in lower average threat among those arrested by biased officers."

4. Tobacco surcharges don't work.
The Affordable Care Act allows insurers to impose tobacco surcharges on smokers' premiums. But findings suggest the surcharges have not led more people to stop smoking.

5. CEOs lose faith in forecasts.
Some CEOs say big-data predictions are failing. “The so-called experts and global economists are proven as often to be wrong as right these days,” claims a WSJ piece, In Uncertain Times, CEOs Lose Faith in Forecasts. One consultant advises people to "rely less on forecasts and instead road-test ideas with customers and make fast adjustments when needed." He urges them to supplement big-data predictions with close observation of their customers.

6. Is fMRI evidence flawed?
Motherboard's Why Two Decades of Brain Research Could Be Seriously Flawed recaps research by Anders Eklund. Cost is part of the problem, he argues: fMRI scans are notoriously expensive. "That makes it hard for researchers to perform large-scale studies with lots of patients". Eklund has written elsewhere about the statistical issues (Can parametric statistical methods be trusted for fMRI based group studies?), and the problem is being noticed by Neuroskeptic and Science-Based Medicine ("It’s tempting to think that the new idea or technology is going to revolutionize science or medicine, but history has taught us to be cautious. For instance, antioxidants, it turns out, are not going to cure a long list of diseases").

Evidence & Insights Calendar:

August 24-25; San Francisco. Sports Analytics Innovation Summit.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 19 July 2016

Are you causing a ripple? How to assess research impact.


People are recognizing the critical need for meta-research, or the 'science of science'. One focus area is understanding whether research produces desired outcomes, and identifying how to ensure that truly happens going forward. Research impact assessment (RIA) is particularly important when holding organizations accountable for their management of public and donor funding. An RIA community of practice is emerging.

Are you causing a ripple? For those wanting to lead an RIA effort, the International School on Research Impact Assessment was developed to "empower participants on how to assess, measure and optimise research impact with a focus on biomedical and health sciences." ISRIA is a partnership between Alberta Innovates Health Solutions, the Agency for Health Quality and Assessment of Catalonia, and RAND Europe. They're presenting their fourth annual program Sept 19-23 in Melbourne, Australia, hosted by the Commonwealth Scientific and Industrial Research Organisation, Australia’s national research agency.

ISRIA participants are typically in program management, evaluation, knowledge translation, or policy roles. They learn a range of frameworks, tools, and approaches for assessing research impact, and how to develop evidence about 'what works'.

Make an impact with your impact assessment. Management strategies are also part of the curriculum: Embedding RIA systemically into organizational practice, reaching agreement on effective methods and reporting, understanding the audience for RIAs, and knowing how to effectively communicate results to various stakeholders.

The 2016 program will cover both qualitative and quantitative analytical methods, along with mixed designs. It will include sessions on evaluating economic, environmental, and social impacts. The aim is to expose participants to as many options as possible, including new methods such as altmetrics. (Plus, there's a black tie event on the first evening.)

 

Photo credit: Raindrops in a Bucket by Smabs Sputzer.

 

Monday, 18 July 2016

Stand up for science, evidence for surgery, and labeling data for quality.


1. Know someone who effectively promotes evidence?
Nominations are open for the 2016 John Maddox Prize for Standing up for Science, recognizing an individual who promotes sound science and evidence on a matter of public interest, facing difficulty or hostility in doing so.

Researchers in any area of science or engineering, or those who work to address misleading information and bring evidence to the public, are eligible. Sense About Science (@senseaboutsci) explains that the winner will be someone who effectively promotes evidence despite challenge, difficulty, or adversity, and who takes responsibility for public discussion beyond what would be expected of someone in their position. Nominations are welcome until August 1.

2. Evidence to improve surgical outcomes.
Based in Oxford, UK, the IDEAL Collaboration is an initiative to improve the quality of research in surgery, radiotherapy, physiotherapy, and other complex interventions. The IDEAL model (@IDEALCollab) describes the stages of innovation in surgery: Idea, Development, Exploration, Assessment, and Long-Term Study. Besides holding an annual conference, the collaboration also proposes and advocates for assessment frameworks, such as the recent IDEAL-D framework for assessing medical device safety and efficacy.

3. Can data be labeled for quality?
Jim Harris (@ocdqblog) describes must-haves for data quality. His SAS blog post compares consuming data without knowing its quality to purchasing unlabeled food. Possible solution: A data-quality 'label' could be implemented as a series of yes/no or pass/fail flags appended to all data structures. These could indicate whether all critical fields were completed, and whether specific fields were populated with a valid format and value.
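To make the idea concrete, here is a minimal sketch in Python of what such a label might look like, assuming a hypothetical record with customer_id, email, and amount fields; the field names and validity rules are invented for illustration, not taken from Harris's post.

    from dataclasses import dataclass
    import re

    # Hypothetical critical fields and format rules, for illustration only.
    CRITICAL_FIELDS = ("customer_id", "email", "amount")
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    @dataclass
    class QualityLabel:
        critical_fields_complete: bool  # yes/no: every critical field is populated
        email_format_valid: bool        # pass/fail: email matches a basic pattern
        amount_value_valid: bool        # pass/fail: amount is a non-negative number

    def label_record(record: dict) -> QualityLabel:
        """Attach a pass/fail 'quality label' to a single data record."""
        complete = all(record.get(f) not in (None, "") for f in CRITICAL_FIELDS)
        email_ok = bool(EMAIL_RE.match(str(record.get("email", ""))))
        amount = record.get("amount")
        amount_ok = isinstance(amount, (int, float)) and amount >= 0
        return QualityLabel(complete, email_ok, amount_ok)

    # A clean record and a suspect one.
    print(label_record({"customer_id": "C-101", "email": "a@example.com", "amount": 250.0}))
    print(label_record({"customer_id": "", "email": "not-an-email", "amount": -5}))

In practice the flags would travel with the data (or sit in an adjacent metadata table), so downstream consumers can check the label before use - the data equivalent of reading the food label before buying.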

4. Could artificial intelligence replace executives?
In the MIT Sloan Management Review, Sam Ransbotham asks Can Artificial Intelligence Replace Executive Decision Making? ***insert joke here*** Most problems faced by executives are unique, poorly documented, and lack structured data, so they aren't available to train an artificial intelligence system. More useful would be analogies and examples of similar decisions - not a search for concrete patterns. AI needs repetition, and most executive decisions don't lend themselves to A/B testing or other research methods. However, some routine, smaller issues could eventually be handled by cognitive computing.

Evidence & Insights Calendar:

August 24-25; San Francisco. Sports Analytics Innovation Summit.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded by the Agency of Health Quality and Assessment, RAND Europe, and Alberta Innovates.

Tuesday, 05 July 2016

Brain training isn't smart, physician peer pressure, and #AskforEvidence.


1. Spending $ on brain training isn't so smart.
It seems impossible to listen to NPR without hearing from their sponsor, Lumosity, the brain-training company. The target demo is spot on: NPR will be the first to tell you its listeners are the "nation's best and brightest". And bright people don't want to slow down. Alas, spending hard-earned money on brain training isn't looking like a smart investment. New evidence seems to confirm suspicions that this $1 billion industry is built on hope, sampling bias, and the placebo effect. Ars Technica says researchers have concluded that earlier, mildly positive "findings suggest that recruitment methods used in past studies created self-selected groups of participants who believed the training would improve cognition and thus were susceptible to the placebo effect." The study, Placebo Effects in Cognitive Training, was published in the Proceedings of the National Academy of Sciences.

It's not a new theme: In 2014, 70 cognitive scientists signed a statement saying "The strong consensus of this group is that the scientific literature does not support claims that the use of software-based 'brain games' alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease."



2. Ioannidis speaks out on usefulness of research.
After famously claiming that most published research findings are false, John Ioannidis now tells us Why Most Clinical Research Is Not Useful (PLOS Medicine). So, what are the key features of 'useful' research? The problem needs to be important enough to fix. Prior evidence must be evaluated to place the problem into context. Plus, we should expect pragmatism, patient-centeredness, value for money, and transparency.



3. To nudge physicians, compare them to peers.
Doctors are overwhelmed with alerts and guidance. So how do you intervene when a physician prescribes antibiotics for a virus, despite boatloads of evidence showing they're ineffective? Comparing a doc's records to peers is one promising strategy. Laura Landro recaps research by Jeffrey Linder (Brigham and Women's, Harvard): "Peer comparison helped reduce prescriptions that weren’t warranted from 20% to 4% as doctors got monthly individual feedback about their own prescribing habits for 18 months.

"Doctors with the lower rates were told they were top performers, while the rest were pointedly told they weren’t, in an email that included the number and proportion of antibiotic prescriptions they wrote compared with the top performers." Linder says “You can imagine a bunch of doctors at Harvard being told ‘You aren’t a top performer.’ We expected and got a lot of pushback, but it was the most effective intervention.” Perhaps this same approach would work outside the medical field.

4. Sports analytics taxonomy.
INFORMS is a professional society focused on Operations Research and Management Science. The June issue of their ORMS Today magazine presents v1.0 of a sports analytics taxonomy (page 40). This work, by Gary Cokins et al., demonstrates how classification techniques can be applied to better understand sports analytics. Naturally this includes analytics for players and managers in the major leagues. But it also includes individual sports, amateur sports, franchise management, and venue management.

5. Who writes the Internet, anyway? #AskforEvidence
Ask for Evidence is a public campaign that helps people request for themselves the evidence behind news stories, marketing claims, and policies. Sponsored by @senseaboutsci, the campaign has new animations on YouTube, Twitter, and Facebook. Definitely worth a like or a retweet.

Calendar:
September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 28 June 2016

Open innovation, the value of pharmaceuticals, and liberal-vs-conservative stalemates.


1. Open Innovation can up your game.
Open Innovation → Better Evidence. Scientists with an agricultural company tell a fascinating story about open innovation success. Improving Analytics Capabilities Through Crowdsourcing (Sloan Review) describes a years-long effort to tap into expertise outside the organization. Over eight years, Syngenta used open-innovation platforms to develop a dozen data-analytics tools, which ultimately revolutionized the way it breeds soybean plants. "By replacing guesswork with science, we are able to grow more with less."

Many open innovation platforms run contests between individuals (think Kaggle), and some facilitate teams. One of these platforms, InnoCentive, hosts mathematicians, physicists, and computer scientists eager to put their problem-solving skills to the test. There was a learning curve, to be sure (example: divide big problems into smaller pieces). Articulating the research question was challenging to say the least.

Several of the associated tasks could be tackled by people without subject-matter expertise; others required knowledge of the biological science, complicating the search for team members. But eventually Syngenta "harnessed outside talent to come up with a tool that manages the genetic component of the breeding process — figuring out which soybean varieties to cross with one another and which breeding technique will most likely lead to success." The company reports substantial results from this collaboration: The average rate of improvement of its portfolio grew from 0.8 to 2.5 bushels per acre per year.

 


 

2. How do you tie drug prices to value?
Systematic Analysis → Better Value for Patients. It's the age-old question: How do you put a dollar value on intangibles - particularly human health and wellbeing? As sophisticated pharmaceuticals succeed in curing more diseases, their prices are climbing. Healthcare groups have developed 'value frameworks' to guide decision-making about these molecules. It's still a touchy subject to weigh the cost of a prescription against potential benefits to a human life.

These frameworks address classic problems, and are useful examples for anyone formalizing the steps of complex decision-making - inside or outside of healthcare. For example, one cancer treatment may be likely to extend a patient's life by 30 to 45 days compared to another, but at much higher cost, or with unacceptable side effects. Value frameworks help people consider these factors.
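To see the kind of arithmetic that sits underneath these frameworks, here is a minimal sketch of an incremental cost-effectiveness calculation in Python; the numbers are invented for illustration, and the real frameworks weigh many more factors, including side effects and quality of life.

    def cost_per_life_year_gained(extra_cost: float, extra_life_years: float) -> float:
        """Incremental cost-effectiveness ratio: added cost per added life-year."""
        return extra_cost / extra_life_years

    # Hypothetical: the newer treatment costs $90,000 more and extends life
    # by roughly 40 days (about 0.11 life-years) versus the comparator.
    ratio = cost_per_life_year_gained(90_000, 40 / 365)
    print(f"Incremental cost per life-year gained: ${ratio:,.0f}")

Some frameworks compare that ratio to a willingness-to-pay threshold; others fold it into broader scores alongside the harder-to-quantify factors mentioned above.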

@ContextMatters studies processes for drug evaluation and regulatory approval. In Creating a Global Context for Value, they compare the different methods of determining whether patients are getting high value. Their Value Framework Comparison Table highlights key evaluation elements from three value frameworks (ASCO, NCCN, ICER) and three health technology assessments (CADTH, G-BA, NICE).

 


3. Evidence overcomes the liberal-vs-conservative stalemate.
Evidence-based Programs → Lower Poverty. Veterans of the Bloomberg mayoral administration describe a data-driven strategy to reduce poverty in New York. Results for America Senior Fellows Robert Doar and Linda Gibbs share an insider's perspective in "New York City's Turnaround on Poverty: Why poverty in New York – unlike in other major cities – is dropping."

Experimentation was combined with careful attention to which programs succeeded (Paycheck Plus) and which didn't (Family Rewards). A key factor, common to any successful decision analysis effort: When a program didn't produce the intended results, advocates weren't cast aside as failures. Instead, that evidence was blended with the rest to continuously improve. The authors found that "Solid evidence can trump the liberal-versus-conservative stalemate when the welfare of the country’s most vulnerable people is at stake."

Tuesday, 21 June 2016

Free beer! and the "Science of X".


1. Free beer for a year for anyone who can work perfume, velvety voice, and 'Q1 revenue goals were met' into an appropriate C-Suite presentation.
Prezi is a very nice tool enabling you to structure a visual story, without forcing a linear, slide-by-slide presentation format. The best part is you can center an entire talk around one graphic or model, and then dive into details depending on audience response. (Learn more in our writeup on How to Present Like a Boss.)

Now there's a new marketing campaign, the Science of Presentations. Prezi made a darn nice web page. And the ebook offers several useful insights into how to craft and deliver a memorable presentation (e.g., enough with the bullet points already).

But in their pursuit of click-throughs, they've gone too far. It's tempting to claim you're following the "Science of X". To some extent, Prezi provides citations to support its recommendations: The ebook links to a few studies on audience response and so forth. But that's not a "science" - they don't always connect what they're citing to what they're suggesting business professionals do. Example: "Numerous studies have found that metaphors and descriptive words or phrases — things like 'perfume' and 'she had a velvety voice' - trigger the sensory cortex.... On the other hand, when presented with nondescriptive information — for example, 'The marketing team reached all of its revenue goals in Q1' — the only parts of our brain that are activated are the ones responsible for understanding language. Instead of experiencing the content with which we are being presented, we are simply processing it."

Perhaps in this case "simply processing" the good news is enough experience for a busy executive. But our free beer offer still stands.

2. How should medical guidelines be communicated to patients?

And now for the 'Science of Explaining Guidelines'. It's hard enough to get healthcare professionals to agree on a medical guideline - and then follow it. But it's also hard to decide whether/how those recommendations should be communicated to patients. Many of the specifics are intended for providers' consumption, to improve their practice of medicine. Although it's essential that patients understand relevant evidence, translating a set of recommendations into lay terms is quite problematic.

Groups publish medical guidelines to capture evidence-based recommendations for addressing a particular disease. Sometimes these are widely accepted - and other times not. The poster-child example of breast cancer screening illustrates why patients, and not just providers, must be able to understand guidelines. Implementation Science recently published the first systematic review of methods for disseminating guidelines to patients.

Not surprisingly, the study found weak evidence of methods that are consistently feasible. "Key factors of success were a dissemination plan, written at the start of the recommendation development process, involvement of patients in this development process, and the use of a combination of traditional and innovative dissemination tools." (Schipper et al.)

3. Telling a story with data.
In the Stanford Social Innovation Review (SSIR), @JakePorway explains three things great data storytellers do differently [possible paywall]. Jake is with @DataKind, "harnessing the power of data science in service of humanity".

 

Photo credit: Christian Hornick on Flickr.

Tuesday, 14 June 2016

Mistakes we make, Evidence Index, and Naturals vs Strivers.


1. Mistakes we make when sharing insights.
We've all done this: Hurried to share valuable new information and neglected to frame it meaningfully, thus slowing its impact and possibly alienating our audience. Michael Schrage describes a perfect example, taken from The Only Rule Is It Has to Work, a fantastic book about analytics innovation.

The cool thing about the book is that it's a Moneyball for the rest of us. Ben Lindbergh and Sam Miller had the rare opportunity to experiment and apply statistics to improve the performance of the Sonoma Stompers, a minor league baseball team in California wine country. But they had to do it with few resources, and learn leadership skills along the way.

The biggest lesson they learned was the importance of making their findings easy to understand. As Schrage points out in his excellent Harvard Business Review piece, the authors were frustrated at the lack of uptake: They didn't know how to make the information meaningful and accessible to managers and coaches. Some people were threatened, others merely annoyed: "Predictive analytics create organizational winners and losers, not just insights."

2. Naturals vs. Strivers: Why we lie about our efforts.
Since I live in Oakland, I'd be remiss without a Steph Curry story this week. But there's lots more to it: LeBron James is a natural basketball player, and Steph is a striver; they're both enormously popular, of course. But Ben Cohen explains that people tend to prefer naturals, whether we recognize it or not: We favor those who just show up and do things really well. So strivers lie about their efforts.

Overachievers launch into bad behavior, such as claiming to sleep only four hours a night. Competitive pianists practice in secret. Social psychology research has found that we like people described as naturals, even when we're being fooled.

3. How do government agencies apply evidence?
Results for America has evaluated how U.S. agencies apply evidence to decisions, and developed an index synthesizing their findings. It's not easily done. @Results4America studied factors such as "Did the agency use evidence of effectiveness when allocating funds from its five largest competitive grant programs in FY16?" The Departments of Housing and Labor scored fairly high. See the details behind the index [pdf here].

Photo credit: Putin classical pianist on Flickr.

 

Wednesday, 08 June 2016

Grit isn't the answer, plus Scrabble and golf analytics.


1. Poor kids already have grit: Educational Controversy, 2016 edition.
All too often, we run with a sophisticated, research-based idea, oversimplify it, and run it into the ground. 2016 seems to be the year for grit. Jean Rhodes, who heads up the Chronicle of Evidence-Based Mentoring (@UMBmentoring), explains that grit is not a panacea for the problems facing disadvantaged youth. "Grit: The Power of Passion and Perseverance, Professor Angela Duckworth’s new bestseller on the topic, has fueled both enthusiasm for such efforts as well as debate among those of us who worry that it locates the problem (a lack of grit) and solution (training) in the child. Further, by focusing on exemplars of tenacity and success, the book romanticizes difficult circumstances. The forces of inequality that, for the most part, undermine children’s success are cast as contexts for developing grit. Moreover, when applied to low-income students, such self-regulation may privilege conformity over creative expression and leadership. Thus, it was a pleasure to come across a piece by Stanford doctoral student Ethan Ris on the history and application of the concept." Ris first published his critique in the Journal of Educational Controversy and recently wrote a piece for the Washington Post, The problem with teaching ‘grit’ to poor kids? They already have it.

2. Does Scrabble have its own Billy Beane?
It had to happen: Analytics for Scrabble. But it might not be what you expected. WSJ explains why For World’s Newest Scrabble Stars, SHORT Tops SHORTER.

Wellington Jighere and other players from Nigeria are shaking up the game, using analytics to support a winning strategy favoring five-letter words. Most champions follow a “long word” strategy, making as many seven- and eight-letter plays as possible. But analytics have brought that "sacred Scrabble shibboleth into question, exposing the hidden risks of big words."

Jighere has been called the Rachmaninoff of rack management, often saving good letters for a future play rather than scoring an available bingo. (For a pre-Jighere take on the world of Scrabble, see Word Wars.)

3. Golf may have a Billy Beane, too.
This also had to happen. Mark Broadie (@MarkBroadie) is disrupting golf analytics with his 'strokes gained' system. In his 2014 book, Every Shot Counts, Broadie rips apart assumptions long regarded as sacrosanct - maxims like 'drive for show, putt for dough'. "The long game explains about two-thirds of scoring differences and the short game and putting about one-third. This is true for amateurs as well as pros." To capture and analyze data, Broadie developed a GolfMetrics program. He is the Carson Family Professor of Business at Columbia Business School, and has a PhD in operations research from Stanford. He has presented at the Sloan Sports Analytics Conference.
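For readers new to the metric, here is a minimal sketch of the strokes-gained idea: each shot is credited with how much it reduces the expected number of strokes to hole out, measured against a baseline. The baseline values below are invented for illustration and are not Broadie's benchmark data.

    # Expected strokes to hole out from a (lie, distance-in-yards) situation.
    # These baseline values are made up for illustration.
    BASELINE = {
        ("tee", 400): 4.0,
        ("fairway", 150): 2.9,
        ("green", 20): 1.9,
        ("green", 3): 1.0,
        ("hole", 0): 0.0,
    }

    def strokes_gained(before: tuple, after: tuple) -> float:
        """Strokes gained for one shot = E[strokes before] - E[strokes after] - 1."""
        return BASELINE[before] - BASELINE[after] - 1

    # A 400-yard hole played in four shots: drive, approach, lag putt, tap-in.
    shots = [("tee", 400), ("fairway", 150), ("green", 20), ("green", 3), ("hole", 0)]
    for before, after in zip(shots, shots[1:]):
        print(f"{before} -> {after}: {strokes_gained(before, after):+.2f}")

Summing per-shot values across many rounds is what makes it possible to separate the contribution of the long game from the short game and putting.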

Pros have begun benefiting from golf analytics, including Danny Willett, winner of this year's Masters. He has thanked @15thClub, a new analytics firm, for helping him prep better course strategy. 15th Club provided insight for Augusta’s par-5 holes. As WSJ explained, the numbers show that when players lay up, leaving their ball short of the green to avoid a water hazard, they fare better when doing so as close to the green as possible, rather than the more distant spots where players typically take their third shots.

4. Evidence-based government on the rise.
In the US, "The still small, but growing, field of pay for success made significant strides this week, with Congress readying pay for success legislation and the Obama administration announcing a second round of grants through the Social Innovation Fund (@SIFund)."

5. Man + Machine = Success.
Only Humans Need Apply is a new book by Tom Davenport (@tdav) and @JuliaKirby. Cognitive computing combined with human decision making is what will succeed in the future. @DeloitteBA led a recent Twitter chat: Man-machine: The dichotomy blurs, which included @RajeevRonanki, the lead for their cognitive consulting practice.

Monday, 06 June 2016

How women decide, Pay for Success, and Chief Cognitive Officers.

1. Do we judge women's decisions differently?

Cognitive psychologist Therese Huston's new book is How Women Decide: What's True, What's Not, and What Strategies Spark the Best Choices. It may sound unscientific to suggest there's a particular way that several billion people make decisions, but the author is careful about the specific conclusions she draws.

The book covers some of the usual decision analysis territory: The process of analyzing data to inform decisions. By far the most interesting material isn't about how choices are made, but how they are judged: The author makes a good argument that women's decisions are evaluated differently than men’s, by both males and females. Quick example: Marissa Mayer being hung out to dry for her ban on Yahoo! staff working from home, while Best Buy's CEO mostly avoided bad press after a similar move. Why are we often quick to question a woman’s decision, but inclined to accept a man’s?

Huston offers concrete strategies for defusing the stereotypes that can lead to this double standard. Again, it's dangerous to speak too generally. But the book presents evidence of gender bias in how people's choices are interpreted, and how that bias shapes perceptions of the people making them. Worthwhile reading. Sheelah Kolhatkar reviewed it for the New York Times.

2. Better government through Pay for Success.
In Five things to know about pay for success legislation, Urban Institute staff explain their support for the Social Impact Partnership to Pay for Results Act (SIPPRA), which is being considered in the US House. Authors are Justin Milner (@jhmilner), Ben Holston (@benholston), and Rebecca TeKolste.

Under SIPPRA, state and local governments could apply for funding through outcomes-driven “social impact partnerships” like Pay for Success (PFS). This funding would require strong evidence and rigorous evaluation, and would accommodate projects targeting a wide range of outcomes: unemployment, child welfare, homelessness, and high school graduation rates.

One of the key drivers behind SIPPRA is its proposed fix for the so-called wrong pockets problem, where one agency bears the cost of a program, while others benefit as free riders. "The bill would provide a backstop to PFS projects and compensate state and local governments for savings that accrue to federal coffers." Thanks to Meg Massey (@blondnerd).

3. The rise of the Chief Cognitive Officer.
On The Health Care Blog, Dan Housman describes The Rise of the Chief Cognitive Officer. "The upshot of the shift to cognitive clinical decision support is that we will likely increasingly see an evolving marriage and interdependency between the worlds of AI (artificial intelligence) thinking and human provider thinking within medicine." Housman, CMO for ConvergeHealth by Deloitte, proposes a new title of CCO (Chief Cognitive Officer) or CCMO (Chief Cognitive Medical Officer) to modernize the construct of CMIO (Chief Medical Information Officer), and maintain a balance between AI and humans. For example, "If left untrained for a year or two, should the AI lose credentials? How would training be combined between organizations who have different styles or systems of care?"