Evidence Soup
How to find, use, and explain evidence.

Monday, 18 August 2014

How to tell if someone really is a unicorn.


This Sunday is Unicorn Backpack giveaway day at the Oakland A's game. Given the current mythology about good data scientists a/k/a unicorns, Billy Beane of baseball analytics fame (G.M. of the Athletics) comes to mind.

Unicorn verification process. I'm not minimizing the difficulty of policy research, analytics, data science, and other efforts to find meaningful patterns in data. But communication skills and business savvy dramatically influence people's ability to succeed. As part of an engagement or hiring process, I suggest asking a potential unicorn these questions:

1) What evidence have you worked with that can potentially improve outcomes? Where might it be applicable?
2) How do you translate a complex analysis into plain English for executive decision makers?
3) What visuals are most effective for connecting findings to important business objectives?

Can you talk the talk and walk the walk? While Mr. Beane brilliantly recognized the value of OBP and other underappreciated baseball stats, that's not what made him a unicorn. His ability to explain his findings and advocate for nonobvious, risky, high-stakes management decisions - and to later demonstrate a payoff from those decisions - is what made him a unicorn.

A colleague of mine worked at a successful, publicly traded telecom company. As a PhD economist, he managed a group of 25 economists. And he says the reason he led the team, and did most of the interacting with senior executives, was that he could explain their economic modeling in business terms appropriate for the audience. 

Connect to what matters. Accenture’s extensive research of analytics ROI has found that “most organizations measure too many things that don’t matter, and don’t put sufficient focus on those things that do, establishing a large set of metrics, but often lacking a causal mapping of the key drivers of their business.”

It's a common theme: Translate geek to English. SAP’s chief data scientist, David Ginsberg, says a key player on his big-data team is someone “who can translate PhD to English. Those are the hardest people to find”. Kerem Tomak, who manages 35 retail analysts, explained to Information Week that “A common weakness with data analytics candidates is they’re happy with just getting the answer, but don’t communicate it”. "The inability to communicate with business decision-makers is not just a negative, it's a roadblock," says Jeanne Harris, global managing director of IT research at Accenture and author of two books on analytics. 

Will Mr. Beane be wearing a unicorn backpack at the game on Sunday? I sure hope so. 

Wednesday, 13 August 2014

Interview Wednesday: James Taylor on decision management, analytics, and evidence.

For Interview Wednesday, today we hear from James Taylor, CEO of Decision Management Solutions in Palo Alto, California. Email him at james@decisionmanagementsolutions.com, or follow him on Twitter @jamet123. James' work epitomizes the mature use of evidence: developing decision processes, figuring out ahead of time what evidence is required for a particular type of decision, then continually refining that to improve outcomes. I'm fond of saying "create a decision culture, not a data culture," and decision management is the fundamental step toward that. One of the interesting things he does is show people how to apply decision modeling. Of course we can't always do this, because our decisions aren't routine/repeatable enough, and we lack evidence - although I believe we could achieve something more meaningful in the middle ground, somewhere between establishing hard business rules and handling every strategic decision as a one-off process. But enough about me, let's hear from James.

#1. How did you get into the decision-making field, and what types of decisions do you help people make?
I started off in software as a product manager. Having worked on several products that needed to embed decision-making using business rules, I decided to join a company whose product was a business rules management system or BRMS. While there are many things you can do with business rules and with a BRMS, automating decision-making is where they really shine. That got me started but then we were acquired by HNC and then FICO – companies with a long history of using advanced predictive analytics as well as business rules to automate and manage credit risk decisions.

That brought me squarely into the analytics space and led me to the realization that Decision Management – the identification, automation, and management of high volume operational decisions – was a specific and very high-value way to apply analytics. That was about 11 or 12 years ago and I have been working in Decision Management and helping people build Decision Management Systems ever since. The specific kinds of decisions that are my primary focus are repeatable, operational decisions made at the front line of an organization in very high volume. These decisions, often about a single customer or a single transaction, are generally wholly or partly automated as you would expect. They range from decisions about credit or delivery risk to fraud detection, from approvals and eligibility decisions to next best action and pricing decisions. Often our focus is not so much on helping a person MAKE these decisions as helping them manage the SYSTEM that makes them.

#2. There's no shortage of barriers to better decision-making (problems with data, process, technology, critical thinking, feedback, and so on). Where does your work contribute most to breaking down these barriers?
I think there are really three areas – the focus on operational decisions, the use of business rules and predictive analytics as a pair, and the use of decision modeling. The first is simply identifying that these decisions matter and that analytics and other technology can be applied to automate, manage, and improve them. Many organizations think that only executives or managers make decisions and so neglect to improve the decision-making of their call center staff, their retail staff, their website, their mobile application etc.

The ROI on improving these decisions is high because although each decision is small, the cumulative effect is large because these decisions are made so often. The second is in recognizing business rules and predictive analytics as a pair of technologies. Business rules allow policies, regulations, and best practices to be applied rigorously while maintaining the agility to change them when necessary. They also act as a great platform for using predictive analytics, allowing you to define what to DO based on the prediction. Decision Management focuses on using them together to be analytically prescriptive. The third is in using decision modeling as a way to specify decision requirements. This helps identify the analytics, business rules and data required to make and improve the decision. It allows organizations to be explicit about the decision making they need and to create a framework for continuous improvement and innovation.
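To make that pairing concrete, here's a minimal sketch of the pattern (my own illustration, not James'; the decision, thresholds, and score are invented): a predictive model supplies a risk score, and business rules decide what to do with it.

```python
# Minimal sketch: a predictive score plus business rules that decide the action.
# The decision, thresholds, and score are illustrative, not from any real system.
def decide_credit_limit_increase(default_score, months_on_book, recent_delinquency):
    """default_score: predicted probability of default from some model (0 to 1)."""
    # Business rules encode policy and best practice around the prediction.
    if recent_delinquency:
        return "decline"                 # policy: no increase after a delinquency
    if months_on_book < 6:
        return "refer to underwriter"    # policy: too little history to automate
    if default_score < 0.02:
        return "approve full increase"
    if default_score < 0.05:
        return "approve smaller increase"
    return "decline"

print(decide_credit_limit_increase(default_score=0.03,
                                   months_on_book=14,
                                   recent_delinquency=False))
```

The point is the division of labor: the model provides the evidence, and the rules - which can change without retraining the model - decide the action.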

#3. One challenge with delivering findings is that people don't always see how or where they might be applied. In your experience, what are some effective ways to encourage the use of data in decision making?
Two things really seem to help. The first is mapping decisions to metrics, showing executives and managers that these low-level decisions contribute to hitting key metrics and performance indicators. If, for instance, I care about my level of customer retention, then all sorts of decisions made at the front line (what retention offer to make, what renewal price to offer, service configuration recommendations) make a difference. If I don't manage those decisions then I am not managing that metric. Once they make this connection they are very keen to improve the quality of these decisions and that leads naturally to using analytics to improve them.

Unlike executive or management decisions there is a sense that the "decision makers" for these decisions don't have much experience and it is easier therefore to present analytics as a critical tool for better decisions. The second is to model out the decision. Working with teams to define how the decision is made, and how they would like it to be made, often makes them realize how poorly defined it has been historically and how little control they have over it. Understanding the decision and then asking "if only" questions – "we could make a better decision if only we knew…" – make the role of analytics clear and the value of data flows naturally from this. Metrics are influenced by decisions, decisions are improved by analytics, those analytics require data. This creates a chain of business value.

#4. On a scale of 1 to 10, how would you rate the "state of the evidence" in your field? Where 1 = weak (data aren't available to show what works and what doesn't), and 10 = mature (people have comprehensive evidence to inform their decision-making, in the right place at the right time).
This is a tricky question to answer. In some areas, like credit risk, the approach is very mature. Few decisions about approving a loan or extending a credit card limit are made without analytics and evidence. Elsewhere it is pretty nascent. I would say 10 in credit risk and fraud, 5-6 in marketing and customer treatment, and mostly 1-2 elsewhere.

#5. What do you want your legacy to be?
From a work perspective I would like people to take their operational decisions as seriously as they take their strategic, tactical, executive and managerial decisions.

Thanks, James. Great explanation of creating value with evidence by stepping back and designing a mature decision process.

Wednesday, 06 August 2014

Interview Wednesday: Steve Miller reminds us it's "Analytics for show, data for dough."

Steve Miller

Today we interview Steve Miller, President of Open BI in Chicago, Illinois (on Twitter: @openbillc). I like his reminder that, while findings deliver the wow factor, perhaps the trickiest part of developing useful evidence is getting the right data to start with.


#1. How did you get into the business intelligence field, and what types of decisions do you help people make?

I got into data/analytics 35 years ago after leaving a PhD stats program with a Masters. So, I've basically been doing this work my entire career -- before DSS, before the DW and before BI. The obsession has always been with the data -- design, structure, quality, development, etc. We have a saying: Analytics for show; Data for dough.

For the last 25 years, I've been in consulting. Up until about 5 years ago, the focus of our BI/analytics work was generally performance management -- evaluating how companies are doing. We have a science of business methodology that helps connect strategy with desired outcomes through analytics. Now it's as much about data science, focusing on data products for companies that owe their existence to data.


#2. There's no shortage of barriers to better decision-making (problems with data, process, technology, critical thinking, feedback, and so on). Where does your work contribute most to breaking down these barriers?

One barrier we see often is between traditional BI and data science. It's as much a generational divide as anything. Many BI people could be my siblings; most DS folks would be my kids! The new generation often finds established BI too plodding and inflexible, while the "adults" portray much data science work as one-off, cowboy stuff, lacking governance and repeatable process methodologies. Both sides are right -- and wrong. If you could combine the discipline of BI with the enthusiastic get-it-done attitude of DS while avoiding the downsides, you'd really have something.


#3. One challenge with delivering findings is that people don't always see how or where they might be applied. In your experience, what are some effective ways to encourage the use of data in decision making?

We believe in agile delivery to get some quality data/analytics in the hands of users as quickly as possible so they can experience the "aha" moments when the intelligence starts to click. It is only when they say, "great, now I'd like to see such and such" -- which is not to be delivered in that stage -- that we know we're having success. An old bromide continues to ring true: "The analytics project doesn't start till the main user first experiences her data."


#4. On a scale of 1 to 10, how would you rate the "state of the evidence" in your field? Where 1 = weak (data aren't available to show what works and what doesn't), and 10 = mature (people have comprehensive evidence to inform their decision-making, in the right place at the right time).

Most of the companies that hire us are committed, at least publicly, to using data to support decision-making. I'd say for the customers that hire us for BI in support of performance management, the rating would be a 6. Some of our other customers weren't even in business 5 years ago. They are fundamentally data products companies who make money only through data -- mostly via analytics that provide their customers with "lift" over the absence of same. I'd rate these customers as an 8.


#5. What do you want your legacy to be?

On the career side, I'm pretty much doing the same thing now as I was 35 years ago -- using data/analytics to drive better decision-making. For me, that stability/consistency suggests sound choices -- I think.
On the life side, I'd like to think I understand and live by the difference between evidence-based policy-making and policy-based evidence-making. I strive, but probably don't quite succeed, at accommodating my biases and irrational thinking.


#6. What's the future for your company?

We're in the process of finalizing the re-branding we started several years ago. We were founded as a BI company focused on open source solutions. Though we still use open source extensively, it's no longer the driver of the business. Now, it's analytics, big data and data science. It's interesting how the term business intelligence is starting to become obsolete. Indeed, one of our big customers recently changed the name of its BI group to Data Science, convinced the new moniker will be more attractive to candidates.

Tuesday, 05 August 2014

Tech needs to embrace diversity in more ways than one.

In the U.S. there’s a push for more opportunity and diversity in the tech industry (for good reason, judging from recent statistics). Diversity is an important social goal. Where I live in Oakland, California, good work is being done to foster inclusion in tech. But I see another, related problem: We need more diversity of thought to stop producing the same types of data for the same types of audiences. Here’s my evidence.

Sheep herd photo by Linda Tanner on Flickr

Data isn't enough. It seems that business intelligence technology, big data startups, and analytics are everywhere. Nothing wrong with people becoming more productive and making more evidence-based decisions. Of course, many technologies seek to replace people with algorithms: Nothing wrong with that either, in some cases (I explore this in my recent report, Data is Easy, Deciding is Hard.)

But while we're collecting data, why don't we do more for the human decision maker? Tech vendors are producing lots of impressive dashboard and visualization functionality, but not enough tools for synthesizing complex evidence, evaluating difficult situations, and overcoming our bad decision-making habits. Tech is producing too many nicely displayed facts without explanation: Lots of 'what' and not enough 'why'. 

Data viz isn't enough. With more diverse thinking, we could build practical tools that visualize decisions, show causal mappings, and capture a whole story from numerous sources of evidence. Consider the new 8.2 release from Tableau, a very successful maker of data visualization tools. The company says it's "obsessed with data. Connecting to data, analyzing data, and communicating with data." Their new Story Point feature is nice. But, as you can see in their Austin Teacher Turnover example, the 'story' is long on facts and short on real story: We don't see the specifics of the Reach program, we don't know which Austin groups supported it and which ones didn't, or why it failed. And we don't see what decisions were made by school officials implementing the program, or which actions are connected to which outcomes. Rather than yet another data viz, why don't the smart, capable people at Tableau think differently and produce something more comprehensive and innovative?

Austin Teacher Turnover visualization by Tableau Story Points

BI getting bigger, not better. I'm not alone in questioning the value of some of the new data tools. Business intelligence usage is flat: A popular 2014 survey by TDWI reported a 6% decline in the share of respondents finding significant impact, down to only 28%.

We're not connecting action to outcome. One of the best critiques / analyses I've seen is Accenture’s extensive study Analytics in Action: Breakthroughs and Barriers on the Journey to ROI. Their research shows that “most organizations measure too many things that don’t matter, and don’t put sufficient focus on those things that do, establishing a large set of metrics, but often lacking a causal mapping of the key drivers of their business.” Accenture underscores the “need to industrialize the insight-action-outcome sequence”. Highlighting the absence of tools designed for decision-making, they conclude that most companies “fail to embed analytical insights in key decision processes so that analytics capabilities are linked to business outcomes.”

Frank Bien of Looker tells the hard truth: “The common view of the past five years is that users are stupid and that data needs to be spoon-fed to them via pretty pictures…. It’s time to strike a new balance: to join ‘big data’ to business data in such a way that it serves the business - and doesn’t just grow a big data repository.” 

What can be done? Hiring people with diverse experience, and engaging a diverse set of customers, is a good first step toward finding better problems to solve. Diversity of investment - in the public, private, and third sectors - is another needed step, and that's being recognized. Christopher Mims wrote recently that "The entire Bay Area appears to have given up on solving anything but its own problems: those afflicting the same 20-somethings who are building these startups." Of course they don't do this all by themselves: Venture capitalists are being accused of "focusing exclusively on the first-world segment of twentysomething yuppies".

Yes, we need more hiring diversity, but please don't take away the 20-somethings. As a startup founder in the Bay Area, I benefit from several of their clever, disruptive, well-executed solutions, particularly Lyft, Munchery, Caviar, and Instacart.

Adorable sheep photo, Why I Was Late for Church Today, by Linda Tanner / CC BY.

#divtech #dataviz #diversity #siliconvalley #decisionmaking

 

Monday, 21 July 2014

The Data-Driven vs. Gut Feel hyperbole needs to stop.

Smart decision-making is more complicated than becoming ‘data-driven’, whatever that means exactly. We know people can make better decisions if they consider relevant evidence, and that process is getting easier with more data available. But too often I hear tech advocates suggest that people’s decisions are just based on gut feel, as if data will save us from ourselves.

Data vs. human illustration

We need to put an end to the false dichotomy of 'data-driven' vs. 'human intuition'. Consider the challenge of augmenting the performance of a highly skilled professional, such as a medical doctor. Investor Vinod Khosla claims technology will replace 80%+ of physicians’ role in the decision-making process. “Human judgment simply cannot compete against machine-learning systems that derive predictions from millions of data points”. Perhaps so, but it’s really tricky to blend evidence into patient care processes: Research in BMJ reveals mixed results from clinical decision support technology, particularly systems that deliver alerts to doctors who are writing prescriptions.

Data+People=Better. One tech enthusiast compares IBM’s Watson to a hospital CEO. Ron Shinkman asks if it could “be programmed to pore over business cases, news clippings, algorithms and spreadsheets to make the same recommendations?” Actually, that’s what Watson does. But Shinkman overlooks the real opportunity: To supplement, not replace, a CEO’s analytical skills. (Note: This is an excerpt from a research paper I recently wrote at Ugly Research.)

Why IT Fumbles Analytics. In an excellent Harvard Business Review analysis of how decision makers assimilate data, Donald Marchand and Joe Peppard explain that

management lacks “structure. Even when an organization tries to capture their information needs, it can take only a snapshot, which in no way reflects the messiness of their jobs. At one moment a manager will need data to support a specific, bounded decision; at another he’ll be looking for patterns that suggest new business opportunities or reveal problems.”

Here's another example of a claim that new technology will replace human intuition with fact-driven decision-making.

Fact-driven vs. instinct illustration

Source: Business analytics and optimization for the intelligent enterprise (IBM).

You’re not the boss of me. There’s a right time and a wrong time to look at data. As Peter Kafka explains, Netflix executives enthusiastically use data to market TV shows, but not to create them. Others agree data can interrupt the creative process. In The United States of Metrics, Bruce Feiler observes that data is often presented as if it contains all the answers. But “metrics rob individuals of the sense that they can choose their own path.”

However, people could do better. Of course decision makers frequently should ignore their instincts. Andrew McAfee gives examples of algorithms that outperform human experts, and explains why our intuition is uneven (we need cues and rapid feedback).

The Economist Intelligence Unit asked managers “When taking a decision, if the available data contradicted your gut feeling, what would you do?” Most preferred to crunch some more numbers. Only 10% said they would follow the action suggested by the data. The sponsors of Decisive action: How businesses make decisions and how they could do it better concluded that while “many business leaders know they need to make better use of data, it’s clear that they don’t always know how best to do so, or which data they should select from the enormous quantity available to them. They are constrained by their ability to analyse data, rather than their access to it.”

How do you challenge a decision maker? When data is available to improve a result, it must be communicated so it challenges people to apply it, not deny it. One way is to provide initial recommendations, and then require anyone who takes exception to enter notes explaining their rationale. Examples: Extending offers to customers on the telephone, or prescribing medical treatments.
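Here's a rough sketch of that pattern (my addition, not part of the excerpt; the field names and recommendations are hypothetical): the system logs the recommendation next to the decision, and refuses an override that arrives without an explanation.

```python
# Rough sketch: an override of the recommended action requires a rationale note.
# Field names and recommendations are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionRecord:
    case_id: str
    recommended: str
    chosen: str
    rationale: str
    timestamp: datetime

def record_decision(case_id, recommended, chosen, rationale=""):
    """Log the decision; reject an override that has no explanation."""
    if chosen != recommended and not rationale.strip():
        raise ValueError("Overriding the recommendation requires an explanation.")
    return DecisionRecord(case_id, recommended, chosen, rationale, datetime.now())

# Accepting the recommendation needs no note; overriding it does.
record_decision("case-42", recommended="offer retention discount",
                chosen="offer retention discount")
record_decision("case-43", recommended="standard treatment",
                chosen="alternative treatment",
                rationale="Contraindication noted in the patient chart.")
```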

Excerpted from: Data is Easy. Deciding is Hard.

Wednesday, 29 January 2014

Enough already with the Ooh, Shiny! data. Show me evidence to explain these outcomes.

I love data visualization as much as the next guy. I'm big on Big Data! And I quantify myself every chance I get.

But I've had my fill of shiny data that doesn't help answer important questions. Things like: What explains these outcomes? What do the experts say? How can we reduce crime?

Crime data visualization. Source: Tableau.

Does new technology contribute nothing more than pretty pictures and mindless measurement? Of course not. We can discover meaningful patterns with analytics and business intelligence: Buying behavior, terrorist activity, health effects.

But not all aha! moments are created equal. Looky here! There's poverty in Africa! People smile more in St. Louis! Some of this stuff has marginal usefulness for decision makers. A recent New York Times piece underscores the apparent need for arty manipulations of relatively routine data. In A Makeover for Maps, we learn that:

  • “It doesn’t work if it’s not moving.” (Eric Rodenbeck of Stamen Design)
  • "No more than 18 colors at once. You can't consume more than 18." (Christian Chabot, CEO of wildly successful Tableau Software)

I dare say these aren't the aha! moments strategic decision makers are looking for. This seems like a good time to re-visit the Onion's classic, Nation Shudders at Large Block of Uninterrupted Text.

Crime research forest plot

Shiny objects are great conversation starters. But many of us a) are busy trying to solve big problems, and b) don't need special effects to keep us interested in our professional lives. We need explanations of causes and effects, transparency into research findings, analysis of alternatives. Take the forest plot, for instance, described very effectively by Hilda Bastian. Here you don't just see crime stats: You discover that some tax-funded social programs might actually increase crime.
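For readers who want to see the mechanics, here's a rough sketch of drawing a simple forest plot with matplotlib; the programs, effect sizes, and intervals below are made up for illustration, not real findings.

```python
import matplotlib.pyplot as plt

# Hypothetical odds ratios and 95% confidence intervals for three programs.
programs = ["Program A", "Program B", "Program C"]
odds_ratios = [0.80, 1.15, 0.95]
ci_low = [0.65, 1.02, 0.70]
ci_high = [0.98, 1.30, 1.28]

fig, ax = plt.subplots()
for i, (point, lo, hi) in enumerate(zip(odds_ratios, ci_low, ci_high)):
    ax.plot([lo, hi], [i, i], color="gray")   # confidence interval
    ax.plot(point, i, "s", color="black")     # point estimate
ax.axvline(1.0, linestyle="--", color="red")  # odds ratio of 1 = no effect
ax.set_yticks(range(len(programs)))
ax.set_yticklabels(programs)
ax.set_xlabel("Odds ratio (95% CI); values above 1 suggest more crime")
ax.set_title("Illustrative forest plot")
plt.tight_layout()
plt.show()
```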

Decision makers need presentations that are better suited to them. That's the real data story.

Other examples of gee-whiz visualizations that signal a worrisome trend: The Do You Realize? dashboard, winner of a QlikView BI competition, as reported by Software Advice. And Have you ever wondered how fast you are spinning around earth's rotational axis? Probably not, but now you can find out anyway!

On a brighter note, the very talented Douglas van der Molen is quoted in Makeover for Maps, saying he is “looking for ways to augment human perception to help in complex decision making.” Maybe today's sophisticated tools will lead to something game-changing for problem solvers. Or maybe we'll keep manufacturing faux aha! moments.

Wednesday, 30 October 2013

Don't show me the evidence. Show me how you weighed the evidence.

Sometimes we fool ourselves into thinking that if people just had access to all the relevant evidence, then the right decision - and better outcomes - would surely follow.

Calculator for decision making

Of course we know that's not the case. A number of things block a clear path from evidence to decision to outcome. Evidence can't speak for itself (and even if it could, human beings aren't very good listeners).

It's complicated. Big decisions require synthesizing lots of evidence arriving in different (opaque) forms, from diverse sources, with varying agendas. Not only do decision makers need to resolve conflicting evidence, they must also balance competing values and priorities. (Which is why "evidence-based management" is a useful concept, but as a tangible process is simply wishful thinking.) Later in this post, I'll describe a recent pharma evidence project as an example.

If you're providing evidence to influence a decision, what can you do? Transparency can move the ball forward substantially. But ideally it's a two-way street: Transparency in the presentation of evidence, rewarded with transparency into the decision process. However, decision makers avoid exposing their rationale for difficult decisions. It's not always a good idea to publicly articulate preferences about values, risk assessments, and priorities when addressing a complex problem: You may get burned. And it's even less of a good idea to reveal proprietary methods for weighing evidence. Mission statements or checklists, yes, but not processes with strategic value.

Boxplots (D3 library)

The human touch. If decision-making were simply a matter of following the evidence, then we could automate it, right? In banking and insurance, they've created impressive technology to automate approvals for routine decisions: But doing so first requires a very explicit weighing of the evidence and design of business rules.

Where automation isn't an option, decision makers use a combination of informal methods and highly sophisticated models. Things like Delphi, efficient frontier, or multiple criteria decision analysis (MCDA); but let's face it, there are still a lot of high-stakes beauty contests going on out there.
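To show how modest the machinery can be, here's a toy example of the weighted-sum flavor of MCDA. The options, criteria, weights, and scores are invented; in practice, most of the work goes into agreeing on those inputs.

```python
# Toy weighted-sum MCDA: rank options against weighted criteria.
# Criteria, weights, and scores are invented for illustration.
weights = {"efficacy": 0.5, "safety": 0.3, "cost": 0.2}

# Scores on a 0-10 scale, higher is better (cost already inverted).
options = {
    "Option A": {"efficacy": 8, "safety": 6, "cost": 4},
    "Option B": {"efficacy": 6, "safety": 9, "cost": 7},
    "Option C": {"efficacy": 7, "safety": 7, "cost": 6},
}

def weighted_score(scores, weights):
    """Sum of each criterion score times its weight."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

for name, scores in sorted(options.items(),
                           key=lambda item: weighted_score(item[1], weights),
                           reverse=True):
    print(f"{name}: {weighted_score(scores, weights):.2f}")
```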

What should transparency look like? Presenters can add transparency to their evidence in several ways. Here's my take:

Level 1: Make the evidence accessible. Examples: Publishing a study in conventional academic/science journal style. Providing access to a database.

Level 2: Show, don't tell: Supplement lengthy narrative with visual cues. Provide data visualization and synopsis. Demonstrate the dependencies and interactivity of the information. Example: Provide links to comprehensive analysis, but first show the highlights in easily digestible form - including details of the analytical methods being applied.

Level 3: Make it actionable: Apply the "So what?" test. Show why the evidence matters. Example: Show how variables connect to, or influence, important outcomes (supported by graph data and/or visualizations, rather than traditional tabular results).

On the flip side, decision makers can add transparency by explaining how they view the evidence: Which evidence carries the most weight? Which findings are expected to influence desired outcomes?

How are pharma coverage decisions made? Which brings me to transparency in health plan decision-making. Here you have complex evidence and important tradeoffs, compounded by numerous stakeholders (payers, providers, patients, pharma). When U.S. pharmaceutical manufacturers seek formulary approval, they present the evidence about their product; frequently they must follow a prescribed format such as an AMCP dossier (there are other ways, including value dossiers). Then the health plan's P&T (Pharmacy and Therapeutics) committee evaluates that evidence.

Recently an industry group conducted a study in an effort to gain deeper understanding of payer coverage decisions. Results appear in “Transparency in Evidence Evaluation and Formulary Decision-Making” (Pharmacy and Therapeutics, August 2013).

“Right now, there is a bit of a ‘black box’ around the formulary decision-making process,” said Robert Dubois, MD, PhD, NPC’s chief science officer and an author of the study. “As a result, decisions about treatment access are often unpredictable to patients, providers and biopharmaceutical manufacturers. We sought to identify ways to clarify the process.”

Whose business is it, anyway? Understandably, manufacturers want to clarify what factors influence the level of access their products receive. And patients want more visibility into formularies: What coverage and co-pays can they expect from their health plan? How is safety weighed against effectiveness? Now that U.S. healthcare is becoming more consumer-driven, I expect something to change.

Transparency in Evidence Evaluation and Formulary Decision-Making
The process. Put simply, the project sponsors were asking payers to explain how they balance the evidence about drug efficacy, safety, and cost. Capturing that information systematically is a big challenge. In scenarios like this, you'll often end up with a big checklist, which is sort of what happened (snippet shown above). An evidence assessment tool was developed by surveying medical and pharmacy directors, who identified key factors by rating the level of access they would provide for drugs in various hypothetical scenarios. 

And then sadness. The tool was validated, then pilot-tested in real-world environments where P&T committees used it to review new drugs. However, participants in the testing portion indicated that "the tool did not capture the dynamic and complex variables involved in the formulary decision-making process, and therefore would not be suitable for more sophisticated organizations." Once again, capturing a complex decision-making process seems out of reach.

Setting expectations. Traditional vendor/customer relationships don't lend themselves to openness. If pharma companies want more insight into payer expectations, they'll have to build strong partnerships with them. That's something they're now doing with risk-sharing and value-based reimbursement, but things won't change overnight. Developing the data infrastructure is one of the long-term challenges, but it seems to me - despite the unsuccessful result with the formulary tool - that more transparency could happen without substantial IT investments.

Friday, 18 October 2013

The Illustrated Book of Bad Arguments.

It's a glorious Fun-with-Evidence Friday. Because I've discovered The Illustrated Book of Bad Arguments. The author is Ali Almossawi (@alialmossawi), a metrics engineer in San Francisco. It's fantastic. Available online now, and soon in hardback.

Illustrated Book of Bad Arguments

Besides the fun illustrations, you'll find serious explanations of logical fallacies, plus definitions of key terms:

"Soundness: A deductive argument is sound if it is valid and its premisses are true. If either of those conditions does not hold, then the argument is unsound. Truth is determined by looking at whether the argument's premisses and conclusions are in accordance with facts in the real world." (BTW, I did not know premise is also spelled premiss.)

Almossawi says "I have selected a small set of common errors in reasoning and visualized them using memorable illustrations that are supplemented with lots of examples. The hope is that the reader will learn from these pages some of the most common pitfalls in arguments and be able to identify and avoid them in practice."

Happy weekend, everyone. 

 

Thursday, 17 October 2013

Got findings? Show us the value. And be specific about next steps, please.

Lately I've become annoyed with research, business reports, etc. that report findings without showing why they might matter, or what should be done next. Things like this: "The participants' biological fathers’ chest hair had no significant effect on their preference for men with chest hair." [From Archives of Sexual Behavior, via Annals of Improbable Research.]

Does it pass the "so what" test? Not many of us write about chest hair. But we all need to keep our eyes on the prize when drawing conclusions about evidence. It's refreshing to see specific actions, supported by rationale, being recommended alongside research findings. As Exhibit A, I offer the PLOS Medicine article Use of Expert Panels to Define the Reference Standard in Diagnostic Research: A Systematic Review of Published Methods and Reporting (Bertens et al). Besides explaining how panel diagnosis has (or hasn't) worked well in the past, the authors recommend specific steps to take - and provide a checklist and flowchart. I'm not suggesting everyone could or should produce a checklist, flowchart, or cost-benefit analysis in every report, but more concrete Next Steps would be powerful.

PLOS Medicine: Panel Diagnosis research by Bertens et al

So many associations, so little time. We're living in a world where people need to move quickly. We need to be specific when we identify our "areas for future research". What problem can this help solve? Where is the potential value that could be confirmed by additional investigation? And why should we believe that?

Otherwise it's like simply saying "fund us some more, and we'll tell you more". We need to know exactly what should be done next, and why. I know basic research isn't supposed to work that way, but since basic research seems to be on life support, something needs to change. It's great to circulate an insight for discovery by others. But without offering a suggestion of how it can make the world a better place, it's exhausting for the rest of us.

Wednesday, 09 October 2013

Could we (should we) use evidence to intervene with compulsive gamblers?

If you've stopped for gas in Winnemucca, you've likely seen a down-and-out traveler in quiet conversation with Max Bet, the one-armed bandit. There's no shortage of heartbreaking gambling stories.

Slot machine photo: iStock.

Now some researchers claim they can identify compulsive gamblers. Advocates say we should follow that evidence and intervene before those gamblers suffer devastating losses. For me, this raises a number of questions: It's always complicated when we try to save people from themselves.

The evidence. Casinos know a great deal about customer behavior. The Wall Street Journal cover story Researchers Bet Casino Data Can Identify Gambling Addicts [paywall] describes the work of Sarah Nelson, PhD, who has developed the Sports Bettor Algorithm 1.1. Crunching data from casino loyalty programs, the algorithm pinpoints "risky betting patterns such as intensive play over long periods of time, significant shifts in behavior, or chasing losses". Dr. Nelson cautions that the system is not yet very accurate, though it has established some correlation between certain behaviors and addiction.
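As a thought experiment - this is not Dr. Nelson's algorithm, and the fields and thresholds are invented - flags like the ones described could be computed from loyalty-program data along these lines:

```python
from statistics import mean

# Invented flags inspired by the patterns described above: intensive play,
# shifts in behavior, chasing losses. Not the actual Sports Bettor Algorithm.
def risk_flags(sessions):
    """sessions: chronological list of dicts with 'minutes', 'wagered', 'net'."""
    flags = []
    if mean(s["minutes"] for s in sessions) > 240:
        flags.append("intensive play: long average session length")
    recent, earlier = sessions[-5:], sessions[:-5] or sessions[-5:]
    if mean(s["wagered"] for s in recent) > 2 * mean(s["wagered"] for s in earlier):
        flags.append("behavior shift: recent wagers far above baseline")
    chases = sum(1 for prev, cur in zip(sessions, sessions[1:])
                 if prev["net"] < 0 and cur["wagered"] > prev["wagered"])
    if chases >= 3:
        flags.append("chasing losses: repeatedly raising wagers after losing sessions")
    return flags

example = [{"minutes": 300, "wagered": 150, "net": -80},
           {"minutes": 420, "wagered": 200, "net": -120},
           {"minutes": 380, "wagered": 260, "net": -90}]
print(risk_flags(example))   # ['intensive play: long average session length']
```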

Focal Research Consultants also works in this area. Tony Schellinck, PhD has spearheaded the creation of algorithms and assessment tools, including the FocaL Adult Gambling Screen (FLAGS), a multi-construct instrument specifically designed to measure risk due to gambling. He is quoted in an ABC story about gambling analytics.

"You've got to learn that a mark is going to give it away to somebody, kid. There's no way to stop a real mark. So when he's ready, you just try to be first in line." 
-John D. MacDonald, The Only Girl in the Game

The question. Two questions, actually. What's a 'compulsive gambler'? And what could (should) casinos do to help them? Understandably, the industry is concerned about exposure to liability (similar to bartenders who are expected to intervene with intoxicated patrons). Some casino executives object, saying this compares to asking a nonprofessional to "diagnose a mental health disorder" (not sure I buy that argument).

  • Like evidence-based medicine, intervention is easy to imagine, but not so easily achieved. If Caesar's (or another big casino) cut off an addicted customer, wouldn't they simply gamble elsewhere? Or find a different addiction?
  • Where does this end? With health screening at the entrance to every McDonald's? Holding Nordstrom accountable for binge shopping?
  • Speaking to ABC News, Schellinck said some casinos fear algorithms "will show that some of their best customers are addicts, and that the casino's bottom line will suffer if management intervenes with troubled high-rollers." But he has discovered that "The vast majority of problem gamblers are not big spenders. They're people who are spending $200 a month on their habit but can't afford to do it."

Critics want casinos to do more; Jen Miller in Philly Mag vehemently makes that argument. The BlackJack episode of This American Life tells the sad story of a woman who gambled away her inheritance - and then sued the casino, blaming them (she lost).

Clearly this isn't my area. At least theoretically, maybe there could be a pooled service, where alerts went to a central intervention group. But then we'd need a Unique Gambler Identifier to track people across multiple casinos -- sheesh, this is starting to sound like our U.S. healthcare quagmire.