Evidence Soup
How to find, use, and explain evidence.

Friday, 05 December 2014

"Big E" vs. "little e" evidence, the hero's journey, and provisional truth.

1. It's tempting to think there's a hierarchy for data: That evidence from high-quality experiments is on top at Level 1, and other research findings follow thereafter. But even in healthcare - the gold standard for the "gold standard" - it's not that simple, says NICE in The NICE Way: Lessons for Social Policy and Practice from the National Institute for Health and Care Excellence. The report, produced with the Alliance for Useful Evidence (@A4UEvidence), cautions against "slavish adherence" to hierarchies of research. "The appropriateness of the evidence to the question is more important, not its place in any such hierarchy." NICE articulates how its decision model works, and argues why it's also relevant outside healthcare.

2. Don't let "little e" evidence kill off good Big Ideas. Take note, lean startups + anyone new to the ways of validating ideas with evidence. In their should-be-considered-a-classic, Is Decision-Based Evidence Making Necessarily Bad?, Tingling and Brydon examine the different uses of evidence in decision-making (MIT Sloan Management Review). As a predictive tool, evidence is sometimes flat wrong: Those iconic Aeron chairs in tech offices everywhere? Utterly rejected by Herman Miller's market-research focus groups. It's good to have a culture where "small" evidence isn't just an excuse to avoid risk-taking. But it's also good to look at "Big E" Evidence, assessing what research is predictive over time, and replace older methods (focus groups, perhaps).

3. 10+ years ago, Billy Beane famously discovered powerful analytic insights for managing the Oakland A's baseball* team, and as a reward was portrayed by Brad Pitt in Moneyball. Now a bipartisan group of U.S. federal leaders and advisors has published Moneyball for Government, intending to encourage use of data, evidence, and evaluation in policy and funding decisions. On Twitter at @Moneyball4gov. *The A's play not far from Ugly Research HQ, and much to our dismay, the Moneyball competitive advantage has long since played out. But Billy is still a great analyst; for the moment, we're just holding our breath.

4. We've barely scratched the surface on figuring out how to present data to decision makers. All Analytics did a web series this week, The Results Are In: Think Presentation From the Start (recordings and slide decks available). One of the highlights was a comparison to Joseph Campbell's hero's journey, by James Haight of Blue Hill Research (on Twitter @James_Haight).

5. We're wired to seek certainty. But Ted Cadsby argues the world is too complex for our simplified conclusions. He suggests probabilistic thinking to arrive at a "provisional truth" that we can test over time in Closing the Mind Gap: Making Smarter Decisions in a Hypercomplex World.

Five for Friday: 5-Dec-2014.

Monday, 17 November 2014

Cognitive analytics, satisficing, sabermetrics, and other cool ways of deciding.

My weekly five links on evidence & decision-making. 

1. I was delighted when a friend sent a link to 10 Things I Believe About Baseball Without Evidence. Ken Arneson (@kenarneson) looks at sabermetrics through the lens of ideas like the linguistic relativity principle and the science of memory-based prediction. Warning to Oakland A's fans: Includes yet more reminders of the epic 2014 collapse.

2. The concept of satisficing came up on one of my current projects, so I reread some overviews. Herbert Simon coined this term to explain decision making under circumstances where an optimal solution can't be determined. (Simon sure has staying power: During grad school I wanted a copy of Administrative Behavior (1957), and there it was in paperback on the shelf at the Tattered Cover in Denver.)

3. Eric Topol (Medscape) interviewed Margaret Hamburg, US FDA commissioner, on weighing risks/benefits in complex agency decision-making. She talked specifically about the newly restrictive policy on opiate painkiller prescriptions (see page 3). I'd like to see FDA's analysis of unintended side effects, such as the concurrent rise of U.S. heroin use/overdoses.

4. Organizations are challenged with finding the pony in all their data, and marketing-spin hijinks have ensued. Seth Grimes (@sethgrimes) has a great discussion of efforts to go beyond the fundamental volume, velocity, etc. in Avoid Wanna-V Confusion.

5. Steve Ardire (@SArdire) authored a Dataversity paper on cognitive computing, including survey results and some choice comments, both pro and con, about the prospects for this emerging area. For creating business value, 'business intelligence/cognitive analytics' looks promising. Cognitive Computing: An Emerging Hub in IT Ecosystems.

Curated by Tracy Altman of Ugly Research

 

Friday, 07 November 2014

Data & Decision-Making: Five for Friday 7-Nov-2014.

1. "A gut is a personal, nontransferable attribute, which increases the value of a good one." This classic from Harvard Business Review recaps how policy makers have historically made big decisions. It's never just about the data. A Brief History of Decision Making.

2. James Taylor, an expert on decision management whom I admire, is coauthor (with Tom Debevoise) of a new book bringing a decision management perspective to process discovery and design. I like to think everything's better with explicitly identified decision steps. The Decision Model and Notation (DMN) standard makes it much easier to model, execute, and interchange business rules. MicroGuide to Process and Decision Modeling in BPMN/DMN.

3. A reminder to look for the nonobvious. This analysis examines differences in parole hearing outcomes: The usual suspects, such as the crime committed, don't always explain why one prisoner is paroled and another is not. Turns out, it's best to have your hearing first thing in the morning. [NY Times] Do You Suffer From Decision Fatigue?

4. IBM sponsored a paper by Ventana Research connecting advanced analytics and business intelligence (finally!), making the excellent point that unless BI insights "are inserted into decision-making processes," they have minimal value. (And then sadness. Ventana weakens their case with "Robust new technology enables better decision-making." How do you measure ‘robust’?) Available at Advanced Analytics Enhances Business Intelligence.

5. Doctor 'decider' skills fade after lunchtime. Bring snacks. Researchers analyzed billing and electronic health records for 20,000+ patients. About 5% more got antibiotics prescriptions later in the day. Findings in JAMA Internal Medicine. Data Shows Docs Prescribe More Antibiotics Late in Day.

 

Are you interested in creating better ways to inform people's decision making? At Ugly Research, I'm always looking for people who want to shape this idea.

Wednesday, 15 October 2014

The lean way to present evidence to decision makers.

There's a lot of advice out there about how to design presentations. But most of that fails to prepare you for delivering complex evidence to senior-level decision-makers. Here's what I do. Hope this helps.

First, ask yourself this: How might your evidence help someone better understand the steps required to reach an important goal?

  1. To develop an answer, put together what I call lean evidence, embracing lean management concepts. As explained by the Lean Enterprise Institute, "The core idea is to maximize customer value while minimizing waste." Keep this in mind when planning a presentation, writing your report, or sending that email: Focus on what's valuable, and reduce waste by stripping out nonessentials. Make value flow past the details to what matters most to your audience.
  2. Skip the storytelling. Begin with "Boom! Here's my answer." You're not Steve Jobs, and this isn't a TED talk. You're delivering lean evidence to a busy executive, so think of all that buildup as waste. Stay true to lean, and get rid of it. Jeanne Tari, VP at Power Speaking, makes a similar point, saying the way to present to executives is to "bottom line it first, then have a dialogue".
  3. Go easy on the pretty pictures. Everybody loves eye candy. But the data visualization is not the point: It just helps you make your point.
  4. Connect dots that matter. Keep the focus on your insights, and how they can help the decision maker improve outcomes. (If you find that you're simply reporting results without connecting at least two important things together, then go back and re-evaluate.)
  5. Avoid the dreaded SME disease. Provide enough detail about your methods to establish credibility as a subject matter expert. Then stop. Pay yourself $5 for every word you delete. Andrew a/k/a @theFundingGuru gives this advice, and I agree.

Monday, 18 August 2014

How to tell if someone really is a unicorn.


This Sunday is Unicorn Backpack giveaway day at the Oakland A's game. Given the current mythology about good data scientists a/k/a unicorns, Billy Beane of baseball analytics fame (G.M. of the Athletics) comes to mind.

Unicorn verification process. I'm not minimizing the difficulty of policy research, analytics, data science, and other efforts to find meaningful patterns in data. But communication skills and business savvy dramatically influence people's ability to succeed. As part of an engagement or hiring process, I suggest asking a potential unicorn these questions:

1) What evidence have you worked with that can potentially improve outcomes? Where might it be applicable?
2) How do you translate a complex analysis into plain English for executive decision makers?
3) What visuals are most effective for connecting findings to important business objectives?

Can you talk the talk and walk the walk? While Mr. Beane brilliantly recognized the value of OBP and other underappreciated baseball stats, that's not what made him a unicorn. His ability to explain his findings and advocate for nonobvious, risky, high-stakes management decisions - and to later demonstrate a payoff from those decisions - is what made him a unicorn.

A colleague of mine worked at a successful, publicly traded telecom company. As a PhD economist, he managed a group of 25 economists. And he says the reason he led the team, and did most of the interacting with senior executives, was that he could explain their economic modeling in business terms appropriate for the audience. 

Connect to what matters. Accenture’s extensive research of analytics ROI has found that “most organizations measure too many things that don’t matter, and don’t put sufficient focus on those things that do, establishing a large set of metrics, but often lacking a causal mapping of the key drivers of their business.”

It's a common theme: Translate geek to English. SAP’s chief data scientist, David Ginsberg, says a key player on his big-data team is someone “who can translate PhD to English. Those are the hardest people to find”. Kerem Tomak, who manages 35 retail analysts, explained to Information Week that “A common weakness with data analytics candidates is they’re happy with just getting the answer, but don’t communicate it”. "The inability to communicate with business decision-makers is not just a negative, it's a roadblock," says Jeanne Harris, global managing director of IT research at Accenture and author of two books on analytics. 

Will Mr. Beane be wearing a unicorn backpack at the game on Sunday? I sure hope so. 

Wednesday, 13 August 2014

Interview Wednesday: James Taylor on decision management, analytics, and evidence.

For Interview Wednesday, today we hear from James Taylor, CEO of Decision Management Solutions in Palo Alto, California. Email him at james@decisionmanagementsolutions.com, or follow him on Twitter @jamet123. James' work epitomizes the mature use of evidence: Developing decision processes, figuring out ahead of time what evidence is required for a particular type of decision, then continually refining that to improve outcomes. I'm fond of saying "create a decision culture, not a data culture," and decision management is the fundamental step toward that. One of the interesting things he does is show people how to apply decision modeling. Of course we can't always do this, because our decisions aren't routine/repeatable enough, and we lack evidence - although I believe we could achieve something more meaningful in the middle ground, somewhere between establishing hard business rules and handling every strategic decision as a one-off process. But enough about me; let's hear from James.

#1. How did you get into the decision-making field, and what types of decisions do you help people make?
I started off in software as a product manager. Having worked on several products that needed to embed decision-making using business rules, I decided to join a company whose product was a business rules management system or BRMS. While there are many things you can do with business rules and with a BRMS, automating decision-making is where they really shine. That got me started but then we were acquired by HNC and then FICO – companies with a long history of using advanced predictive analytics as well as business rules to automate and manage credit risk decisions.

That brought me squarely into the analytics space and led me to the realization that Decision Management – the identification, automation, and management of high volume operational decisions – was a specific and very high-value way to apply analytics. That was about 11 or 12 years ago and I have been working in Decision Management and helping people build Decision Management Systems ever since. The specific kinds of decisions that are my primary focus are repeatable, operational decisions made at the front line of an organization in very high volume. These decisions, often about a single customer or a single transaction, are generally wholly or partly automated as you would expect. They range from decisions about credit or delivery risk to fraud detection, from approvals and eligibility decisions to next best action and pricing decisions. Often our focus is not so much on helping a person MAKE these decisions as helping them manage the SYSTEM that makes them.

#2. There's no shortage of barriers to better decision-making (problems with data, process, technology, critical thinking, feedback, and so on). Where does your work contribute most to breaking down these barriers?
I think there are really three areas – the focus on operational decisions, the use of business rules and predictive analytics as a pair, and the use of decision modeling. The first is simply identifying that these decisions matter and that analytics and other technology can be applied to automate, manage, and improve them. Many organizations think that only executives or managers make decisions and so neglect to improve the decision-making of their call center staff, their retail staff, their website, their mobile application etc.

The ROI on improving these decisions is high because although each decision is small, the cumulative effect is large because these decisions are made so often. The second is in recognizing business rules and predictive analytics as a pair of technologies. Business rules allow policies, regulations, and best practices to be applied rigorously while maintaining the agility to change them when necessary. They also act as a great platform for using predictive analytics, allowing you to define what to DO based on the prediction. Decision Management focuses on using them together to be analytically prescriptive. The third is in using decision modeling as a way to specify decision requirements. This helps identify the analytics, business rules and data required to make and improve the decision. It allows organizations to be explicit about the decision making they need and to create a framework for continuous improvement and innovation.
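To picture the pairing James describes - business rules that decide what to DO based on a prediction - here's a minimal sketch. The churn score, thresholds, and offers are invented for illustration; this isn't his implementation or any particular product's.

```python
# Minimal sketch: business rules acting on a predictive score.
# The score, thresholds, and offers below are invented for illustration.

def predicted_churn_risk(customer: dict) -> float:
    """Stand-in for a predictive model's output (0.0 = safe, 1.0 = certain to leave)."""
    return customer.get("churn_score", 0.0)

def retention_decision(customer: dict) -> str:
    """Business rules define what to DO based on the prediction."""
    risk = predicted_churn_risk(customer)
    if customer.get("months_remaining_on_contract", 0) > 6:
        return "no action"                   # policy: don't discount locked-in customers
    if risk > 0.8:
        return "offer 20% renewal discount"  # aggressive retention offer
    if risk > 0.5:
        return "offer loyalty upgrade"
    return "send standard renewal reminder"

print(retention_decision({"churn_score": 0.83, "months_remaining_on_contract": 2}))
# -> offer 20% renewal discount
```

The rules encode policy and can be changed quickly; the model supplies the prediction they act on.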

#3. One challenge with delivering findings is that people don't always see how or where they might be applied. In your experience, what are some effective ways to encourage the use of data in decision making?
Two things really seem to help. The first is mapping decisions to metrics, showing executives and managers that these low-level decisions contribute to hitting key metrics and performance indicators. If, for instance, I care about my level of customer retention, then all sorts of decisions made at the front line (what retention offer to make, what renewal price to offer, service configuration recommendations) make a difference. If I don't manage those decisions then I am not managing that metric. Once they make this connection they are very keen to improve the quality of these decisions and that leads naturally to using analytics to improve them.

Unlike executive or management decisions there is a sense that the "decision makers" for these decisions don't have much experience and it is easier therefore to present analytics as a critical tool for better decisions. The second is to model out the decision. Working with teams to define how the decision is made, and how they would like it to be made, often makes them realize how poorly defined it has been historically and how little control they have over it. Understanding the decision and then asking "if only" questions – "we could make a better decision if only we knew…" – make the role of analytics clear and the value of data flows naturally from this. Metrics are influenced by decisions, decisions are improved by analytics, those analytics require data. This creates a chain of business value.

#4. On a scale of 1 to 10, how would you rate the "state of the evidence" in your field? Where 1 = weak (data aren't available to show what works and what doesn't), and 10 = mature (people have comprehensive evidence to inform their decision-making, in the right place at the right time).
This is a tricky question to answer. In some areas, like credit risk, the approach is very mature. Few decisions about approving a loan or extending a credit card limit are made without analytics and evidence. Elsewhere it is pretty nascent. I would say 10 in credit risk and fraud, 5-6 in marketing and customer treatment, and mostly 1-2 elsewhere.

#5. What do you want your legacy to be?
From a work perspective I would like people to take their operational decisions as seriously as they take their strategic, tactical, executive and managerial decisions.

Thanks, James. Great explanation of creating value with evidence by stepping back and designing a mature decision process.

Tuesday, 05 August 2014

Tech needs to embrace diversity in more ways than one.

In the U.S. there’s a push for more opportunity and diversity in the tech industry (for good reason, judging from recent statistics). Diversity is an important social goal. Where I live in Oakland, California, good work is being done to foster inclusion in tech. But I see another, related problem: We need more diversity of thought to stop producing the same types of data for the same types of audiences. Here’s my evidence.

Sheep herd photo by Linda Tanner on Flickr

Data isn't enough. It seems that business intelligence technology, big data startups, and analytics are everywhere. Nothing wrong with people becoming more productive and making more evidence-based decisions. Of course, many technologies seek to replace people with algorithms: Nothing wrong with that either, in some cases (I explore this in my recent report, Data is Easy, Deciding is Hard.)

But while we're collecting data, why don't we do more for the human decision maker? Tech vendors are producing lots of impressive dashboard and visualization functionality, but not enough tools for synthesizing complex evidence, evaluating difficult situations, and overcoming our bad decision-making habits. Tech is producing too many nicely displayed facts without explanation: Lots of 'what' and not enough 'why'. 

Data viz isn't enough. With more diverse thinking, we could build practical tools that visualize decisions, show causal mappings, and capture a whole story from numerous sources of evidence. Consider the new 8.2 release from Tableau, a very successful maker of data visualization tools. The company says it's "obsessed with data. Connecting to data, analyzing data, and communicating with data." Their new Story Point feature is nice. But, as you can see in their Austin Teacher Turnover example, the 'story' is long on facts and short on real story: We don't see the specifics of the Reach program, we don't know which Austin groups supported it and which ones didn't, or why it failed. And we don't see what decisions were made by school officials implementing the program, or which actions are connected to which outcomes. Rather than yet another data viz, why don't the smart, capable people at Tableau think differently and produce something more comprehensive and innovative?

Austin Teacher Turnover visualization by Tableau Story Points

BI getting bigger, not better. I'm not alone in questioning the value in some of the new data tools. Business intelligence usage is flat. A popular 2014 survey by TDWI reported a 6% decline in those finding significant impact, down to only 28%. 

We're not connecting action to outcome. One of the best critiques / analyses I've seen is Accenture’s extensive study Analytics in Action: Breakthroughs and Barriers on the Journey to ROI. Their research shows that “most organizations measure too many things that don’t matter, and don’t put sufficient focus on those things that do, establishing a large set of metrics, but often lacking a causal mapping of the key drivers of their business.” Accenture underscores the “need to industrialize the insight-action-outcome sequence”. Highlighting the absence of tools designed for decision-making, they conclude that most companies “fail to embed analytical insights in key decision processes so that analytics capabilities are linked to business outcomes.”

Frank Bien of Looker tells the hard truth: “The common view of the past five years is that users are stupid and that data needs to be spoon-fed to them via pretty pictures…. It’s time to strike a new balance: to join ‘big data’ to business data in such a way that it serves the business - and doesn’t just grow a big data repository.” 

What can be done? Hiring people with diverse experience, and engaging a diverse set of customers, is a good first step toward finding better problems to solve. Diversity of investment - in the public, private, and third sectors - is another needed step, and that's being recognized. Christopher Mims wrote recently that "The entire Bay Area appears to have given up on solving anything but its own problems: those afflicting the same 20-somethings who are building these startups." Of course they don't do this all by themselves: Venture capitalists are being accused of "focusing exclusively on the first-world segment of twentysomething yuppies".

Yes, we need more hiring diversity, but please don't take away the 20-somethings. As a startup founder in the Bay Area, I benefit from several of their clever, disruptive, well-executed solutions, particularly Lyft, Munchery, Caviar, and Instacart.

Adorable sheep photo, Why I Was Late for Church Today, by Linda Tanner / CC BY.

#divtech #dataviz #diversity #siliconvalley #decisionmaking

 

Monday, 21 July 2014

The Data-Driven vs. Gut Feel hyperbole needs to stop.

Smart decision-making is more complicated than becoming ‘data-driven’, whatever that means exactly. We know people can make better decisions if they consider relevant evidence, and that process is getting easier with more data available. But too often I hear tech advocates suggest that people’s decisions are just based on gut feel, as if data will save us from ourselves.


We need to put an end to the false dichotomy of 'data-driven' vs. 'human intuition'. Consider the challenge of augmenting the performance of a highly skilled professional, such as a medical doctor. Investor Vinod Khosla claims technology will replace 80%+ of physicians’ role in the decision-making process. “Human judgment simply cannot compete against machine-learning systems that derive predictions from millions of data points”. Perhaps so, but it’s really tricky to blend evidence into patient care processes: Research in BMJ reveals mixed results from clinical decision support technology, particularly systems that deliver alerts to doctors who are writing prescriptions.

Data+People=Better. One tech enthusiast compares IBM’s Watson to a hospital CEO. Ron Shinkman asks if it could “be programmed to pore over business cases, news clippings, algorithms and spreadsheets to make the same recommendations?” Actually, that’s what Watson does. But Shinkman overlooks the real opportunity: To supplement, not replace, a CEO’s analytical skills. (Note: This is an excerpt from a research paper I recently wrote at Ugly Research.)

Why IT Fumbles Analytics. In an excellent Harvard Business Review analysis of how decision makers assimilate data, Donald Marchand and Joe Peppard explain that managers' information needs lack

“structure. Even when an organization tries to capture their information needs, it can take only a snapshot, which in no way reflects the messiness of their jobs. At one moment a manager will need data to support a specific, bounded decision; at another he'll be looking for patterns that suggest new business opportunities or reveal problems.”

Here's another example of a claim that new technology will replace human intuition with fact-driven decision-making.

IBM graphic: "fact-driven" vs. "instinct"

Source: Business analytics and optimization for the intelligent enterprise (IBM).

You’re not the boss of me. There’s a right time and a wrong time to look at data. As Peter Kafka explains, Netflix executives enthusiastically use data to market TV shows, but not to create them. Others agree data can interrupt the creative process. In The United States of Metrics, Bruce Feiler observes that data is often presented as if it contains all the answers. But “metrics rob individuals of the sense that they can choose their own path.”

However, people could do better. Of course decision makers frequently should ignore their instincts. Andrew McAfee gives examples of algorithms that outperform human experts, and explains why our intuition is uneven (we need cues and rapid feedback).

The Economist Intelligence Unit asked managers “When taking a decision, if the available data contradicted your gut feeling, what would you do?” Most preferred to crunch some more numbers. Only 10% said they would follow the action suggested by the data. The sponsors of Decisive action: How businesses make decisions and how they could do it better concluded that while “many business leaders know they need to make better use of data, it’s clear that they don’t always know how best to do so, or which data they should select from the enormous quantity available to them. They are constrained by their ability to analyse data, rather than their access to it.”

How do you challenge a decision maker? When data is available to improve a result, it must be communicated so it challenges people to apply it, not deny it. One way is to provide initial recommendations, and then require anyone who takes exception to enter notes explaining their rationale. Examples: Extending offers to customers on the telephone, or prescribing medical treatments.
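Here's one rough sketch of that challenge-don't-deny pattern: the system records its recommendation, and an override isn't accepted without a written rationale. The field names and cases are hypothetical, not from any particular system.

```python
# Sketch of a recommend-then-override workflow: the system proposes an action,
# and anyone who takes exception must record a rationale. Fields are hypothetical.
from dataclasses import dataclass, field
from typing import Optional
from datetime import datetime

@dataclass
class DecisionRecord:
    case_id: str
    recommended_action: str
    final_action: str
    override_rationale: Optional[str] = None
    recorded_at: datetime = field(default_factory=datetime.utcnow)

def record_decision(case_id: str, recommended: str, chosen: str,
                    rationale: Optional[str] = None) -> DecisionRecord:
    # Accepting the recommendation needs no notes; overriding it does.
    if chosen != recommended and not rationale:
        raise ValueError("Overriding the recommendation requires a written rationale.")
    return DecisionRecord(case_id, recommended, chosen, rationale)

record_decision("rx-1041", "prescribe generic", "prescribe generic")
record_decision("rx-1042", "prescribe generic", "prescribe brand",
                rationale="Patient had an adverse reaction to the generic formulation.")
```

The point is the friction: the data gets a hearing, and the human keeps the final call - on the record.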

Excerpted from: Data is Easy. Deciding is Hard.

Wednesday, 29 January 2014

Enough already with the Ooh, Shiny! data. Show me evidence to explain these outcomes.

I love data visualization as much as the next guy. I'm big on Big Data! And I quantify myself every chance I get.

But I've had my fill of shiny data that doesn't help answer important questions. Things like: What explains these outcomes? What do the experts say? How can we reduce crime?

Crime data viz. Source: Tableau.

Does new technology contribute nothing more than pretty pictures and mindless measurement? Of course not. We can discover meaningful patterns with analytics and business intelligence: Buying behavior, terrorist activity, health effects.

But not all aha! moments are created equal. Looky here! There's poverty in Africa! People smile more in St. Louis! Some of this stuff has marginal usefulness for decision makers. A recent New York Times piece underscores the apparent need for arty manipulations of relatively routine data. In A Makeover for Maps, we learn that:

  • “It doesn’t work if it’s not moving.” (Eric Rodenbeck of Stamen Design)
  • "No more than 18 colors at once. You can't consume more than 18." (Christian Chabot, CEO of wildly successful Tableau Software)

I dare say these aren't the aha! moments strategic decision makers are looking for. This seems like a good time to revisit The Onion's classic, Nation Shudders at Large Block of Uninterrupted Text.

Crime research forest plot

Shiny objects are great conversation starters. But many of us a) are busy trying to solve big problems, and b) don't need special effects to keep us interested in our professional lives. We need explanations of causes and effects, transparency into research findings, analysis of alternatives. Take the forest plot, for instance, described very effectively by Hilda Bastian. Here you don't just see crime stats: You discover that some tax-funded social programs might actually increase crime.
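If you haven't met one, a forest plot is simply each study's effect estimate plotted with its confidence interval against a "no effect" line, so you can see at a glance what helps and what hurts. A minimal sketch with made-up numbers might look like this:

```python
# Minimal forest-plot sketch. Programs, effect sizes, and intervals are made up.
import matplotlib.pyplot as plt

programs = ["Program A", "Program B", "Program C", "Program D"]
effects  = [-0.15, 0.05, 0.30, -0.02]   # estimated change in offending (illustrative)
ci_low   = [-0.30, -0.10, 0.10, -0.20]
ci_high  = [ 0.00,  0.20, 0.50,  0.16]

fig, ax = plt.subplots()
y = list(range(len(programs)))
ax.errorbar(effects, y,
            xerr=[[e - lo for e, lo in zip(effects, ci_low)],
                  [hi - e for e, hi in zip(effects, ci_high)]],
            fmt="o", capsize=4)
ax.axvline(0, linestyle="--")            # line of no effect
ax.set_yticks(y)
ax.set_yticklabels(programs)
ax.set_xlabel("Estimated effect on crime (negative = reduction)")
plt.tight_layout()
plt.show()
```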

Decision makers need presentations that are better suited to them. That's the real data story.

Other examples of gee-whiz visualizations that signal a worrisome trend: The Do You Realize? dashboard, winner of a QlikView BI competition, as reported by Software Advice. And Have you ever wondered how fast you are spinning around earth's rotational axis? Probably not, but now you can find out anyway!

On a brighter note, the very talented Douglas van der Molen is quoted in Makeover for Maps, saying he is “looking for ways to augment human perception to help in complex decision making.” Maybe today's sophisticated tools will lead to something game-changing for problem solvers. Or maybe we'll keep manufacturing faux aha! moments.

Wednesday, 30 October 2013

Don't show me the evidence. Show me how you weighed the evidence.

Sometimes we fool ourselves into thinking that if people just had access to all the relevant evidence, then the right decision - and better outcomes - would surely follow.

Of course we know that's not the case. A number of things block a clear path from evidence to decision to outcome. Evidence can't speak for itself (and even if it could, human beings aren't very good listeners).

It's complicated. Big decisions require synthesizing lots of evidence arriving in different (opaque) forms, from diverse sources, with varying agendas. Not only do decision makers need to resolve conflicting evidence, they must also balance competing values and priorities. (Which is why "evidence-based management" is a useful concept, but as a tangible process is simply wishful thinking.) Later in this post, I'll describe a recent pharma evidence project as an example.

If you're providing evidence to influence a decision, what can you do? Transparency can move the ball forward substantially. But ideally it's a two-way street: Transparency in the presentation of evidence, rewarded with transparency into the decision process. However, decision makers avoid exposing their rationale for difficult decisions. It's not always a good idea to publicly articulate preferences about values, risk assessments, and priorities when addressing a complex problem: You may get burned. And it's even less of a good idea to reveal proprietary methods for weighing evidence. Mission statements or checklists, yes, but not processes with strategic value.

The human touch. If decision-making were simply a matter of following the evidence, then we could automate it, right? In banking and insurance, they've created impressive technology to automate approvals for routine decisions. But doing so first requires a very explicit weighing of the evidence and design of business rules.
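What might that explicit weighing look like once it's written down? Here's a toy sketch - the factors, weights, and cutoffs are invented for illustration, not anyone's actual underwriting rules:

```python
# Toy sketch of explicit evidence weights behind an automated approval rule.
# Factors, weights, and cutoffs are invented for illustration only.

WEIGHTS = {"credit_score": 0.5, "income_to_debt": 0.3, "years_at_address": 0.2}

def approval_score(applicant: dict) -> float:
    # Normalize each factor to 0..1, then apply the agreed weights.
    normalized = {
        "credit_score": min(applicant["credit_score"] / 850, 1.0),
        "income_to_debt": min(applicant["income_to_debt"] / 5.0, 1.0),
        "years_at_address": min(applicant["years_at_address"] / 10, 1.0),
    }
    return sum(WEIGHTS[k] * normalized[k] for k in WEIGHTS)

def decide(applicant: dict) -> str:
    score = approval_score(applicant)
    if score >= 0.7:
        return "approve automatically"
    if score >= 0.5:
        return "refer to underwriter"   # borderline cases keep the human touch
    return "decline"

print(decide({"credit_score": 720, "income_to_debt": 3.2, "years_at_address": 4}))
# -> refer to underwriter
```

The arithmetic isn't the point; the point is that every weight and cutoff is visible and arguable, which is exactly what transparency requires.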

Where automation isn't an option, decision makers use a combination of informal methods and highly sophisticated models. Things like Delphi, efficient frontier, or multiple criteria decision analysis (MCDA); but let's face it, there are still a lot of high-stakes beauty contests going on out there.

What should transparency look like? Presenters can add transparency to their evidence in several ways. Here's my take:

Level 1: Make the evidence accessible. Examples: Publishing a study in conventional academic/science journal style. Providing access to a database.

Level 2: Show, don't tell: Supplement lengthy narrative with visual cues. Provide data visualization and synopsis. Demonstrate the dependencies and interactivity of the information. Example: Provide links to comprehensive analysis, but first show the highlights in easily digestible form - including details of the analytical methods being applied.

Level 3: Make it actionable: Apply the "So what?" test. Show why the evidence matters. Example: Show how variables connect to, or influence, important outcomes (supported by graph data and/or visualizations, rather than traditional tabular results).

On the flip side, decision makers can add transparency by explaining how they view the evidence: Which evidence carries the most weight? Which findings are expected to influence desired outcomes?

How are pharma coverage decisions made? Which brings me to transparency in health plan decision-making. Here you have complex evidence and important tradeoffs, compounded by numerous stakeholders (payers, providers, patients, pharma). When U.S. pharmaceutical manufacturers seek formulary approval, they present the evidence about their product; frequently they must follow a prescribed format such as an AMCP dossier (there are other ways, including value dossiers). Then the health plan's P&T (Pharmacy and Therapeutics) committee evaluates that evidence.

Recently an industry group conducted a study in an effort to gain deeper understanding of payer coverage decisions. Results appear in “Transparency in Evidence Evaluation and Formulary Decision-Making” (Pharmacy and Therapeutics, August 2013).

“Right now, there is a bit of a ‘black box’ around the formulary decision-making process,” said Robert Dubois, MD, PhD, NPC’s chief science officer and an author of the study. “As a result, decisions about treatment access are often unpredictable to patients, providers and biopharmaceutical manufacturers. We sought to identify ways to clarify the process.”

Whose business is it, anyway? Understandably, manufacturers want to clarify what factors influence the level of access their products receive. And patients want more visibility into formularies: What coverage and co-pays can they expect from their health plan? How is safety weighed against effectiveness? Now that U.S. healthcare is becoming more consumer-driven, I expect something to change.

Transparency in Evidence Evaluation and Formulary Decision-Making
The process. Put simply, the project sponsors were asking payers to explain how they balance the evidence about drug efficacy, safety, and cost. Capturing that information systematically is a big challenge. In scenarios like this, you'll often end up with a big checklist, which is sort of what happened (snippet shown above). An evidence assessment tool was developed by surveying medical and pharmacy directors, who identified key factors by rating the level of access they would provide for drugs in various hypothetical scenarios. 

And then sadness. The tool was validated, then pilot-tested in real-world environments where P&T committees used it to review new drugs. However, participants in the testing portion indicated that "the tool did not capture the dynamic and complex variables involved in the formulary decision-making process, and therefore would not be suitable for more sophisticated organizations." Once again, capturing a complex decision-making process seems out of reach.

Setting expectations. Traditional vendor/customer relationships don't lend themselves to openness. If pharma companies want more insight into payer expectations, they'll have to build strong partnerships with them. That's something they're now doing with risk-sharing and value-based reimbursement, but things won't change overnight. Developing the data infrastructure is one of the long-term challenges, but it seems to me - despite the unsuccessful result with the formulary tool - that more transparency could happen without substantial IT investment.