Evidence Soup
How to find, use, and explain evidence.

9 posts categorized "books"

Tuesday, 14 June 2016

Mistakes we make, Evidence Index, and Naturals vs Strivers.


1. Mistakes we make when sharing insights.
We've all done this: Hurried to share valuable, new information and neglected to frame it meaningfully, thus slowing the impact and possibly alienating our audience. Michael Shrage describes a perfect example, taken from The Only Rule Is It Has to Work, a fantastic book about analytics innovation.

The cool thing about the book is that it's a Moneyball for the rest of us. Ben Lindbergh and Sam Miller had the rare opportunity to experiment and apply statistics to improve the performance of the Sonoma Stompers, a minor league baseball team in California wine country. But they had to do it with few resources, and learn leadership skills along the way.

The biggest lesson they learned was the importance of making their findings easy to understand. As Schrage points out in his excellent Harvard Business Review piece, the authors were frustrated at the lack of uptake: They didn't know how to make the information meaningful and accessible to managers and coaches. Some people were threatened, others merely annoyed: "Predictive analytics create organizational winners and losers, not just insights."

2. Naturals vs. Strivers: Why we lie about our efforts.
Since I live in Oakland, I'd be remiss not to include a Steph Curry story this week. But there's lots more to it: LeBron James is a natural basketball player, and Steph is a striver; they're both enormously popular, of course. But Ben Cohen explains that people tend to prefer naturals, whether we recognize it or not: We favor those who just show up and do things really well. So strivers lie about their efforts.

Overachievers resort to bad behavior, such as claiming to sleep only four hours a night. Competitive pianists practice in secret. Social psychology research has found that we like people described as naturals, even when we're being fooled.

3. How do government agencies apply evidence?
Results for America has evaluated how U.S. agencies apply evidence to decisions, and developed an index synthesizing its findings. It's not easily done. @Results4America studied factors such as "Did the agency use evidence of effectiveness when allocating funds from its five largest competitive grant programs in FY16?" The Departments of Housing and Urban Development and Labor scored fairly high. See the details behind the index [pdf here].

Photo credit: Putin classical pianist on Flickr.

 

Wednesday, 08 June 2016

Grit isn't the answer, plus Scrabble and golf analytics.


1. Poor kids already have grit: Educational Controversy, 2016 edition.
All too often, we run with a sophisticated, research-based idea, oversimplify it, and run it into the ground. 2016 seems to be the year for grit. Jean Rhodes, who heads up the Chronicle of Evidence-Based Mentoring (@UMBmentoring), explains that grit is not a panacea for the problems facing disadvantaged youth: "Grit: The Power of Passion and Perseverance, Professor Angela Duckworth’s new bestseller on the topic, has fueled both enthusiasm for such efforts and debate among those of us who worry that it locates the problem (a lack of grit) and the solution (training) in the child. Further, by focusing on exemplars of tenacity and success, the book romanticizes difficult circumstances. The forces of inequality that, for the most part, undermine children’s success are cast as contexts for developing grit. Moreover, when applied to low-income students, such self-regulation may privilege conformity over creative expression and leadership. Thus, it was a pleasure to come across a piece by Stanford doctoral student Ethan Ris on the history and application of the concept." Ris first published his critique in the Journal of Educational Controversy and recently wrote a piece for the Washington Post, The problem with teaching ‘grit’ to poor kids? They already have it.

2. Does Scrabble have its own Billy Beane?
It had to happen: Analytics for Scrabble. But it might not be what you expected. The WSJ explains why in For World’s Newest Scrabble Stars, SHORT Tops SHORTER.

Wellington Jighere and other players from Nigeria are shaking up the game, using analytics to support a winning strategy that favors five-letter words. Most champions follow a “long word” strategy, making as many seven- and eight-letter plays as possible. But analytics have brought that sacred Scrabble shibboleth into question, "exposing the hidden risks of big words."

Jighere has been called the Rachmaninoff of rack management, often saving good letters for a future play rather than scoring an available bingo (a play that uses all seven tiles). (For a pre-Jighere take on the world of competitive Scrabble, see the documentary Word Wars.)

3. Golf may have a Billy Beane, too.
This also had to happen. Mark Broadie (@MarkBroadie) is disrupting golf analytics with his 'strokes gained' system. In his 2014 book, Every Shot Counts, Broadie rips apart assumptions long regarded as sacrosanct - maxims like 'drive for show, putt for dough'. "The long game explains about two-thirds of scoring differences, and the short game and putting about one-third. This is true for amateurs as well as pros." To capture and analyze data, Broadie developed the GolfMetrics program. He is the Carson Family Professor of Business at Columbia Business School, and has a PhD in operations research from Stanford. He has presented at the Sloan Sports Analytics Conference.
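If you're curious how 'strokes gained' works mechanically, here's a minimal sketch in Python. The formula is the standard one - strokes gained on a shot = expected strokes from the starting position, minus expected strokes from the ending position, minus the one stroke just taken - but the baseline values below are made-up placeholders, not Broadie's actual numbers, which are estimated from large shot-level datasets (e.g., the PGA Tour's ShotLink).

```python
# A minimal sketch of the "strokes gained" calculation.
# Baseline values below are hypothetical placeholders for illustration only.

# Expected number of strokes to hole out from a given situation.
baseline = {
    ("tee", 450): 4.10,       # 450 yards from the hole, on the tee
    ("fairway", 160): 2.98,   # 160 yards out, in the fairway
    ("green", 30): 1.98,      # 30-foot putt
    ("green", 3): 1.04,       # 3-foot putt
    ("holed", 0): 0.0,
}

def strokes_gained(before, after):
    """Strokes gained by one shot = E[strokes before] - E[strokes after] - 1."""
    return baseline[before] - baseline[after] - 1

# One 450-yard hole, played in four shots:
shots = [("tee", 450), ("fairway", 160), ("green", 30), ("green", 3), ("holed", 0)]
for start, end in zip(shots, shots[1:]):
    print(f"{start} -> {end}: {strokes_gained(start, end):+.2f}")

# Per-shot values sum to (expected score - actual score) for the hole:
print(f"total: {baseline[shots[0]] - (len(shots) - 1):+.2f}")
```

Because each shot gets its own number, the totals can be decomposed into driving, approach play, short game, and putting - which is what lets Broadie compare the long game and the short game on the same scale.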

Pros have begun benefiting from golf analytics, including Danny Willett, winner of this year's Masters. He has thanked @15thClub, a new analytics firm, for helping him prepare a better course strategy. 15th Club provided insight for Augusta’s par-5 holes. As the WSJ explained, the numbers show that when players lay up - leaving the ball short of the green to avoid a water hazard - they fare better doing so as close to the green as possible, rather than at the more distant spots where players typically take their third shots.

4. Evidence-based government on the rise.
In the US, "The still small, but growing, field of pay for success made significant strides this week, with Congress readying pay for success legislation and the Obama administration announcing a second round of grants through the Social Innovation Fund (@SIFund)."

5. Man + Machine = Success.
Only Humans Need Apply is a new book by Tom Davenport (@tdav) and @JuliaKirby. Their thesis: Cognitive computing combined with human decision making is what will succeed in the future. @DeloitteBA led a recent Twitter chat, Man-machine: The dichotomy blurs, which included @RajeevRonanki, the lead for Deloitte's cognitive consulting practice.

Monday, 06 June 2016

How women decide, Pay for Success, and Chief Cognitive Officers.

1. Do we judge women's decisions differently?

Cognitive psychologist Therese Huston's new book is How Women Decide: What's True, What's Not, and What Strategies Spark the Best Choices. It may sound unscientific to suggest there's a particular way that several billion people make decisions, but the author is anything but cavalier about drawing conclusions that specific.

The book covers some of the usual decision analysis territory: The process of analyzing data to inform decisions. By far the most interesting material isn't about how choices are made, but how they are judged: The author makes a good argument that women's decisions are evaluated differently than men’s, by both males and females. Quick example: Marissa Mayer being hung out to dry for her ban on Yahoo! staff working from home, while Best Buy's CEO mostly avoided bad press after a similar move. Why are we often quick to question a woman’s decision, but inclined to accept a man’s?

Huston offers concrete strategies for defusing the stereotypes that can lead to this double standard. Again, it's dangerous to speak too generally. But the book presents evidence of gender bias in how people's choices are interpreted, and how that bias shapes perceptions of the people making them. Worthwhile reading. Sheelah Kolhatkar reviewed it for the New York Times.

2. Better government through Pay for Success.
In Five things to know about pay for success legislation, Urban Institute staff explain their support for the Social Impact Partnership to Pay for Results Act (SIPPRA), which is being considered in the US House. Authors are Justin Milner (@jhmilner), Ben Holston (@benholston), and Rebecca TeKolste.

Under SIPPRA, state and local governments could apply for funding through outcomes-driven “social impact partnerships” like Pay for Success (PFS). This funding would require strong evidence and rigorous evaluation, and would accommodate projects targeting a wide range of outcomes: unemployment, child welfare, homelessness, and high school graduation rates.

One of the key drivers behind SIPPRA is its proposed fix for the so-called wrong pockets problem, where one agency bears the cost of a program, while others benefit as free riders. "The bill would provide a backstop to PFS projects and compensate state and local governments for savings that accrue to federal coffers." Thanks to Meg Massey (@blondnerd).

3. The rise of the Chief Cognitive Officer.
On The Health Care Blog, Dan Housman describes The Rise of the Chief Cognitive Officer. "The upshot of the shift to cognitive clinical decision support is that we will likely increasingly see an evolving marriage and interdependency between the worlds of AI (artificial intelligence) thinking and human provider thinking within medicine." Housman, CMO for ConvergeHealth by Deloitte, proposes a new title of CCO (Chief Cognitive Officer) or CCMO (Chief Cognitive Medical Officer) to modernize the construct of CMIO (Chief Medical Information Officer), and maintain a balance between AI and humans. For example, "If left untrained for a year or two, should the AI lose credentials? How would training be combined between organizations who have different styles or systems of care?"

Wednesday, 20 April 2016

How to lead people through evidence-based decisions.


There's no shortage of books on strategy and decision-making - and many of them can seem out of touch. This one is worthwhile reading: Decision Quality: Value Creation from Better Business Decisions by Carl Spetzler, Hannah Winter, and Jennifer Meyer (Wiley 2016).

The authors are decision analysis experts with the well-known, Palo Alto-based Strategic Decisions Group. Instead of presenting schemes or templates for making decisions, they get to the heart of the matter: Decision quality, whether you're making big decisions or smaller choices. How will you decide? How will you teach your team to make high-quality decisions? And how will you define 'high quality'?

For example, for a healthcare formulary decision, outline in advance what findings will be considered. Cost-effectiveness modeling? Real-world evidence? How will evidence be weighted - possibly using multi-criteria decision analysis? How will uncertainty be factored in?
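To make the weighting question concrete, here is a minimal weighted-sum sketch - one of the simplest multi-criteria approaches. The criteria, weights, and 0-10 scores are invented for this illustration; a real formulary committee would set them deliberately, and fuller MCDA methods also model uncertainty rather than relying on single point scores.

```python
# Hypothetical weighted-sum MCDA sketch for comparing two formulary options.
# Criteria, weights, and 0-10 scores are invented for illustration only.

weights = {"clinical_effectiveness": 0.40,
           "cost_effectiveness":     0.30,
           "real_world_evidence":    0.20,
           "evidence_quality":       0.10}

scores = {
    "Drug A": {"clinical_effectiveness": 8, "cost_effectiveness": 5,
               "real_world_evidence": 7, "evidence_quality": 9},
    "Drug B": {"clinical_effectiveness": 7, "cost_effectiveness": 8,
               "real_world_evidence": 5, "evidence_quality": 6},
}

def weighted_score(option):
    """Sum of (criterion weight x criterion score) for one option."""
    return sum(weights[c] * scores[option][c] for c in weights)

for option in scores:
    print(option, round(weighted_score(option), 2))
# Drug A: 0.4*8 + 0.3*5 + 0.2*7 + 0.1*9 = 7.0
# Drug B: 0.4*7 + 0.3*8 + 0.2*5 + 0.1*6 = 6.8
```

The value of writing the weights down in advance is exactly the book's point: the committee commits to how evidence will be judged before it sees which option the judgment favors.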

 

“If you want to change the culture of an organization, change the way people make decisions.” -Vincent Barabba

Key takeaways from this book:

- You can lead a meaningful change by encouraging people to fully understand why it's the decision process, not the outcome, that is under their control.

- Teach your team to make high-quality decisions. Build organizational capability so people use similar language and methods to assess evidence and analyze decisions.

- Get more buy-in with a better process, from initial concept to execution. Judge the quality of a decision as you go along.

 

Tuesday, 09 February 2016

How Warby Parker created a data-driven culture.

 


 

What does it take to become a data-driven organization? "Far more than having big data or a crack team of unicorn data scientists, it requires establishing an effective, deeply ingrained data culture," says Carl Anderson, director of data science at the phenomenally successful Warby Parker. In his recent O'Reilly book Creating a Data-Driven Organization, he explains how to build the analytics value chain required for valuable, predictive business models: From data collection and analysis to insights and leadership that drive concrete actions. Follow Anderson @LeapingLlamas.

Practical advice, in a conversational style, is combined with references and examples from the management literature. The book is an excellent resource for real-world examples and highlights of current management research. The chapter on creating the right culture is a good reminder that leadership and transparency are must-haves.

[Diagram: Ugly Research 'action-outcome' model]

Although the scope is quite ambitious, Anderson offers thoughtful organization, hitting the highlights without an overwhelmingly lengthy literature survey. My company, Ugly Research, is delighted to be cited in the decision-making chapter (page 196 in the hard copy, page 212 in the pdf download). As shown in the diagram, with PepperSlice we provide a way to present evidence to decision makers in the context of a specific 'action-outcome' prediction or particular decision step.

Devil's advocate point of view. Becoming 'data-driven' is context sensitive, no doubt. The author is director of data science at Warby Parker, so unsurprisingly the emphasis is on technologies that enable data gathering for consumer marketing. The book does address several management and leadership issues, such as selling a data-driven idea internally, but it is written primarily from the perspective of someone no more than two or three degrees of separation from the data; a senior executive working with an old-style C-Suite would likely need to take additional steps to fill the gaps.

The book isn't so much about how to make decisions, as about how to create an environment where decision makers are open to new ideas, and to testing those ideas with data-driven insights. Because without ideas and evidence, what's the point of a good decision process?

 

Tuesday, 24 November 2015

Masters of self-deception, rapid systematic reviews, and Gauss v. Legendre.

This week's 5 links on evidence-based decision making.

1. Human fallibility → Debiasing techniques → Better science
Don't miss Regina Nuzzo's fantastic analysis in Nature: How scientists trick themselves, and how they can stop. @ReginaNuzzo explains why people are masters of self-deception, and how cognitive biases interfere with rigorous findings. Making things worse are a flawed science publishing process and "performance enhancing" statistical tools. Nuzzo describes promising ways to overcome these challenges, including blind data analysis.

2. Slow systematic reviews → New evidence methods → Controversy
Systematic reviews are important for evidence-based medicine, but some say they're unreliable and slow. Two groups attempting to improve this - not without controversy - are Trip (@TripDatabase) and Rapid Reviews.

3. Campus competitions → Real-world analytics → Attracting talent
Tech firms are finding ways to attract students to the analytics field. David Weldon writes in Information Management about the Adobe Analytics Challenge, where thousands of US university students compete using data from companies such as Condé Nast and Comcast to solve real-world business problems.

4. Discover regression → Solve important problem → Rock the world
Great read on how Gauss discovered statistical regression but, thinking his solution was trivial, didn't share it. Legendre published the method later, sparking one of the bigger disputes in the history of science. The Discovery of Statistical Regression - Gauss v. Legendre, on Priceonomics.
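The method at the heart of the dispute is ordinary least squares: choose the line that minimizes the sum of squared residuals. Here is a minimal sketch with made-up data points - Gauss and Legendre had to solve the normal equations by hand; today it's a few lines of code.

```python
# Ordinary least squares -- the method both Gauss and Legendre claimed.
# Data points are made up for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Build the design matrix [1, x] and solve for intercept and slope,
# i.e., minimize ||X b - y||^2 (the normal equations (X'X) b = X'y).
X = np.column_stack([np.ones_like(x), x])
intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"y = {intercept:.2f} + {slope:.2f} x")
residuals = y - (intercept + slope * x)
print("sum of squared residuals:", round(float(residuals @ residuals), 4))
```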

5. Technical insights → Presentation skill → Advance your ideas
Explaining insights to your audience is as crucial as getting the technical details right. Present! is a new book with speaking tips for technology types unfamiliar with the spotlight. By Poornima Vijayashanker (@poornima) and Karen Catlin.

Tuesday, 28 July 2015

10 Years After Ioannidis, speedy decision habits, and the peril of whether or not.

1. Much has happened in the 10 years since Why Most Published Research Findings Are False, the much-discussed PLOS essay by John P. A. Ioannidis offering evidence that "false findings may be the majority or even the vast majority of published research claims...." Why are so many findings never replicated? Ioannidis listed study power and bias, the number of studies, and the ratio of true to no relationships among those probed in that scientific field. Also, "the convenient, yet ill-founded strategy of claiming conclusive research findings solely on... formal statistical significance, typically for a p-value less than 0.05."
Now numerous initiatives address the false-findings problem with innovative publishing models, prohibition of p-values, or study design standards. Ioannidis followed up with 2014's How to Make More Published Research True, noting improvements in credibility and efficiency in specific fields via "large-scale collaborative research; replication culture; registration; sharing; reproducibility practices; better statistical methods;... reporting and dissemination of research, and training of the scientific workforce."
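The arithmetic behind the original essay is easy to reproduce. Ioannidis's positive predictive value formula, PPV = (1 − β)R / (R − βR + α), combines the pre-study odds R that a probed relationship is true, the power 1 − β, and the significance threshold α. The parameter values in this sketch are illustrative choices, not figures taken from the paper.

```python
# Positive predictive value of a "statistically significant" finding,
# per the formula in Ioannidis (2005): PPV = (1 - beta) * R / (R - beta*R + alpha).
# Parameter values below are illustrative, not taken from the paper.

def ppv(R, power, alpha):
    """R = pre-study odds a probed relationship is true; power = 1 - beta."""
    beta = 1 - power
    return power * R / (R - beta * R + alpha)

for R in (1.0, 0.1, 0.01):        # from well-motivated to highly exploratory fields
    for power in (0.8, 0.2):      # well-powered vs. underpowered studies
        print(f"R={R:<5} power={power}  PPV={ppv(R, power, alpha=0.05):.2f}")
```

With low pre-study odds and underpowered studies, the PPV drops well below one-half even before bias enters the picture - which is the essay's central claim in one loop.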

2. Speedy decision habits -> Fastest in market -> Winning. Dave Girouard, CEO of personal finance startup Upstart and former head of Google Apps, believes speedy decision-making is essential to competing: for product development, and for other organizational functions. He explains how people can develop speed as a healthy habit. Relatively little is "written about how to develop the institutional and employee muscle necessary to make speed a serious competitive advantage." Key tip: Deciding from the start *when* a decision will be made is a profound, powerful change that speeds everything up.

3. Busy, a new book by Tony Crabbe (@tonycrabbe), considers why people feel overwhelmed and dissatisfied - and suggests steps for improving their personal & work lives. Psychological and business research are translated into practical tools and skills. The book covers a range of perspectives; one worth noting is "The Perils of Whether or Not" (page 31): Crabbe cites classic decision research demonstrating the benefits of choosing from multiple options, vs. continuously (and busily) grinding through one alternative at a time. BUSY: How to Thrive in a World of Too Much, Grand Central Publishing, $28.

4. Better lucky than smart? Eric McNulty reminds us of a costly, and all-too-common, decision making flaw: Outcome bias, when we evaluate the quality of a decision based on its final result. His strategy+business article explains we should be objectively assessing whether an outcome was achieved by chance or through a sound process - but it's easy to fall into the trap of positively judging only those efforts with happy endings (@stratandbiz).

5. Fish vs. Frog: It's about values, not just data. Great reminder from Denis Cuff @DenisCuff of @insidebayarea that the data won't always tell you where to place value. One SF Bay Area environmental effort to save a fish might be endangering a frog species.

Tuesday, 14 July 2015

Data-driven organizations, machine learning for C-Suite, and healthcare success story.

1. Great stuff on data-driven decision making in a new O'Reilly book by Carl Anderson (@LeapingLlamas), Creating a Data-Driven Organization. Very impressive overview of the many things that need to happen, and best practices for making them happen. It runs the gamut from getting and analyzing the data, to creating the right culture, to the psychology of decision-making. Ugly Research is delighted to be referenced (pages 187-188 and Figure 9-7).

2. Healthcare success story. "Data-driven decision making has improved patient outcomes in Intermountain's cardiovascular medicine, endocrinology, surgery, obstetrics and care processes — while saving millions of dollars in procurement and in its supply chain."

3. 1) description, 2) prediction, 3) prescription. What the C-Suite needs to understand about applied machine learning. McKinsey's executive guide to machine learning 1.0, 2.0, and 3.0.

4. Place = Opportunity. Where kids grow up has a big impact on what they earn as adults; new evidence on patterns of upward mobility. Recap by @UrbanInstitute's Margery Austin Turner (@maturner).

5. Open innovation improves the odds of biotech product survival. Analysis by Deloitte's Ralph Marcello shows the value of working together, sharing R&D data.

Monday, 19 July 2010

Is "evidence-based" meaningless? Three books illustrate the dilemma of describing things as "evidence-based".

Today we're looking at three books on "evidence-based ____", each concerning management in the public or private sector. Together, they demonstrate the wide range of possibilities for evidence-based action. But they also illustrate why it's difficult for people to point to something and say "Now I see what this EB stuff is all about!"

How does this work, again? Some of the books and articles with "evidence-based" in their titles hardly seem to be about evidence at all. Maybe authors do this because it's trendy, and might make people look (which it does, in my case, since I track Google alerts for "evidence-based"). Call me cynical, but I suspect in many cases you could remove the references to "evidence-based" without changing the gist. (Rarely does anyone go to the trouble of defining "evidence". This makes me want to pound out The Evidence-Based Manifesto... and then my caffeine buzz wears off.)

Now, let's look at some books that do talk about evidence. First up is Becoming the Evidence-Based Manager: Making the Science of Management Work for You, by Gary P. Latham. He says the book was "written to underscore the scientific aspect of effective management - what is called evidence-based management - in an artful way." His intent is to "share management techniques that have been proven by valid and reliable research studies to work."

Latham's introduction says "The art of management can seldom be taught. The science of management can be taught." Good point. His focus is on human resources: Hiring top performers, motivating people, etc. He doesn't delve into evidence-based methods for business strategy, decision-making, etc. This is about organizational psychology. (Here's Latham on YouTube.)

A gentle introduction. One thing I like about this book is that it doesn't just preach to the converted. Latham addresses an audience that hasn't focused on "evidence-based" before, so it could be useful to a new manager, or to anyone who hasn't looked at management scientifically. The presentation isn't too academic or complicated.

Have some fish: It's good for you. Not everyone needs to be a philosopher, and not every book needs to be 'meta' in scope (this one isn't). Generally speaking, the book gives people fish (telling them about management techniques supported by evidence), rather than teaching them how to fish (i.e., how to gather up their own evidence).

For instance, Chapter 1 is about how to "Use the Right Tools to Hire High-Performing Employees." The book provides brief case studies: For example, a case where a manager replaced the traditional hiring process with situational interviews, "a method proven to be effective at selecting high performers" (page 143).

Can we look that up? Many references to 'the evidence' aren't supported by a footnote or link (such as statements like "The evidence also shows that ..." [page 96]). Latham seems to be addressing an audience that isn't inclined (or simply lacks the time) to scrutinize the evidence: People who want the evidence to be condensed and presented to them as management advice. Nothing wrong with that. (Though I still would like to see more specific references to the underlying research, and I believe the author could do that without weighing down his discussions.)

The takeaway. Lots of people could use an introduction like this. Hopefully it will give more managers the confidence to ask "Before we move forward, what does the evidence say?" (Eventually leading to more rigorous evaluation of the evidence.)

At the other end of the spectrum is Allen Rubin's Practitioner's Guide to Using Research for Evidence-Based Practice. The book "aims to provide human services practitioners what they need to know about various research designs and methods so that when engaging in the EBP process, they can determine which interventions, programs, policies, and assessment tools are supported by the best evidence."

It's a process. I especially like how Rubin describes evidence-based practice as a process, not a thing. That mind-set needs to prevail. The book emphasizes that:

  • EBP is a process that includes locating and appraising credible evidence as a part of practice decisions.
  • EBP is a way to designate certain interventions as empirically supported under certain conditions.

Other things I like: The book addresses both quantitative and qualitative studies. And Rubin reminds us that we are seeking answers to EBP questions, and that each one must suit our objective.

This is heavy-duty stuff, but the book is well-written, using language like "recall back in chapter 3 we saw how systematic reviews reside above experiments on the evidentiary hierarchy for answering questions about effectiveness." And Rubin talks about assessing a research study in relatively straightforward English, and provides practical advice about how to evaluate a study (pages 104-105).

Now we're getting meta: Evidence about the evidence. Rubin suggests a hierarchy ranking various types of evidence. Level 1 (the best evidence) is systematic review and meta-analysis (page 52), à la the Cochrane levels of medical evidence (I, II, ...).

Chapter 3 talks about "Research Hierarchies: More than one type of hierarchy for more than one type of EBP question." Chapter 4 is "Criteria for inferring Ineffectiveness." Chapter 8 covers "Critically appraising systematic reviews and meta-analyses." And Chapter 9 is "Critically appraising non-experimental quantitative studies." Good things to know, but not for the novice.

The takeaway. Good for people who think about research design and evidence the way a PhD would - with or without a doctorate. Clearly this kind of effort is appropriate for 'big' interventions, or where there's lots of repeatable stuff to analyze - but it's too much for making decisions on a smaller scale, or in situations where innovative things are being tried and the available evidence is not very applicable.

Is the goal to be performance-based *and* evidence-based? I've often thought about the similarities and differences between so-called 'performance management' and 'evidence-based management'. And I've wondered whether, if these approaches were merged, we'd really have something. (Allow me to digress before we move on to evidence-based book #3.) Narrowly defined, performance management "includes activities to ensure that goals are consistently being met in an effective and efficient manner." It's about using metrics, key performance indicators (KPIs), dashboards, etc. to monitor and improve performance. And of course evidence-based management is about making decisions based on the current best evidence about what works. So I think of performance data as one type of evidence that fits into a wider program to achieve evidence-based management.

I raised the subject of performance management because its concepts are prevalent in our next book, The Intelligent Company: Five Steps to Success with Evidence-Based Management, by Bernard Marr, a consultant on organizational performance. He's got a lot to say about using data to improve performance, and makes specific recommendations. It's a serious effort, and Marr cheerfully provides citations and references (in a section at the end of the book). [Note: On Amazon there's a downloadable mini-version priced at $9.95 US (40 pages); it's a 2009 article from Financial Management.]

Marr opens by making the connection between evidence-based medicine and business management, explaining the need to identify an important question, gather data, analyze, and so on. But a lot of the discussion focuses on "evidence" that can be displayed on management dashboards or presented in reports - high-volume data about specific operational parameters. That's indeed important, but it overlooks useful evidence about business strategy, product management, human resource practices, and the like.

The focus here is the data you have - or can go get yourself - and key performance indicators (KPIs) derived from that data: It's not as much about evidence in the external literature, or systematic reviews. But Marr does a thorough job within his focus area: There's lots of rigor in his approach, such as testing and proving ideas. And I'm glad that he discusses how best to present information and create reports. He references Stephen Few, a well-known dashboard designer in the performance management/business intelligence field.

Intelligent, you say? The book's blurb is too hype-y for my taste, breathlessly claiming that "Today's most successful companies are Intelligent Companies that use the best available data to inform their decision making. This is called Evidence-Based Management and is one of the fastest growing business trends of our times." (What's an "Intelligent Company"? And why is the phrase capitalized? This is just consultant-speak.)

The 'five steps to more intelligent decision making' are:

  1. "More intelligent strategies – by identifying strategic priorities and agreeing your real information needs.
  2. More intelligent data – by creating relevant and meaningful performance indicators and qualitative management information linked back to your strategic information needs.
  3. More intelligent insights – by using good evidence to test and prove ideas and by analysing the data to gain robust and reliable insights.
  4. More intelligent communication – by creating engaging management information packs and dashboards that provide the essential information, packaged in an easy-to-read way.
  5. More intelligent decision making – by fostering an evidence-based culture of turning information into actionable knowledge and real decisions."

Here is Marr's EbM model. Since he focuses on data, it's appropriate that he includes enabling technology in the process.

[Diagram: Bernard Marr's EbM model, from The Intelligent Company]

The takeaway. Solid advice. But lots has already been written about data-driven management, performance management, business intelligence, etc. I'm not sure there's much new here, though the book does a respectable job of presenting it.

Is "evidence-based" meaningless terminology? Would the point of Marr's book be substantially different if it instead described "data-driven decision-making"? Would Rubin's or Latham's books be the same if they talked about "science-based" instead? And would other works be unaffected if their references to EBM were replaced with something else? Here are my thoughts on "evidence-based ___":

  • We need to talk about scientific methods, not just data, and a good way to do that is to use terminology that addresses evidence and evidence-based actions specifically.
  • We need to say what we mean by "evidence" - and by "current best" evidence. And that should include evidence from sources both internal and external to an organization.
  • We need to discuss the role of technology in supporting "evidence-based ____". Whether we're developing, presenting, or distributing evidence, technology is a key variable in the equation: Evidence needs to be more transparent and available on web pages, in documents, and in mobile apps.
  • We need to show references when we cite evidence and research. There are ways to do this without making everything look and sound like a textbook.

Where are you on the evidence-based hierarchy? Are you a novice manager, new to the use of science in management? Then books such as Latham's Becoming the Evidence-Based Manager: Making the Science of Management Work for You might be a good choice. If you're more interested in quantitative decision-making based on performance data, try Marr's The Intelligent Company: Five Steps to Success with Evidence-Based Management. If you are up to the task of experimental research design and meta-analysis, then you will enjoy Rubin's Practitioner's Guide to Using Research for Evidence-Based Practice.