Evidence Soup
How to find, use, and explain evidence.


8 posts from March 2011

Wednesday, 30 March 2011

Interview Wednesday: Dr. Elizabeth Oyer, evaluator of evidence.

For today's Interview Wednesday, we talk with Elizabeth Oyer, PhD, Director of Research at Eval Solutions in Carmel, Indiana, USA. You can read Elizabeth's blog here. Her Twitter account is @elizabethoyer.

#1. What got you interested in evaluation and evidence?
I believe my work in evaluation has been my destiny for a while. I began my undergraduate studies in Psychology with a minor in French (because I loved studying French!). By my sophomore year, the practicality of a French minor set in and I decided to double major in Communications. To satisfy my passion for French, I began teaching in a weekend program for gifted students (3rd and 4th graders). In my senior year of teaching this class, I was paired with an Education major. We decided to divide up the teaching duties, alternating throughout each class. Well, instantly I could see how much better she was at teaching!

I was fascinated by the impact of our different abilities on the students. Staring down the barrel of graduate school, I was elated to discover the field of Educational Psychology and my path to evaluation began at Indiana University's program with an Inquiry emphasis.

What types of evidence do you work with most often (medical, business research, statistics, social science, etc.)?
For most of my projects, my work incorporates quantitative data (e.g., achievement data and survey data) as well as qualitative data (e.g., observations, interviews) and extant records (e.g., lesson plans) in education. The majority of my clients are in the field of education, although I also work with businesses (e.g., needs assessments, strategic planning) and non-profit organizations.

What is your involvement with evidence and evaluation: applying it, advocating its use, researching/developing it, synthesizing/explaining/translating it, communicating it?
I would say there is an element of all of these in my work! Certainly most of my projects incorporate reviewing literature and work in the field, modifying instruments to meet the clients' needs, reporting the results for accountability requirements but also translating trends in the data to feed back to answer local questions about progress, as well as disseminating the results (generally at conferences).

Where do you go looking for evidence, and what types of sources do you prefer? (formally published stuff such as journals, or something less formalized?)
When I am looking for evidence to help me process and contextualize results, I rely mostly on formal evidence (like journals). However, the informal conversations I have with my colleagues are priceless! When we are together we are always thinking, analyzing, evaluating our work to improve our impact on the field.

#2. On a scale of 1 to 10, where 10 = 'It's crystal clear' and 1 = 'We have no idea why things are happening', how would you describe the overall "state of the evidence" in your primary field?
I would say that with the advances in brain research and the contributions coming from research on the effectiveness of comprehensive reform, our understanding of what we're supposed to do is an '8' - but how to get systems to their optimal states of effectiveness consistently and across contexts is about a '4'.

Work on implementation fidelity and systems thinking is getting us there, but it certainly appears we have a long road ahead of us.

Which of these situations is most common in your field?
a) Much of the evidence we need doesn’t yet exist.
b) People don't know about the evidence that is available.
c) People don't understand the evidence well enough to apply it.
d) People don’t follow the evidence because it's not the expectation.

I would say a healthy level of 'c' and 'd' is common. As expectations and accountability standards at the federal level increase, the field responds. When policies are vague, there are too many opportunities to make excuses for why standards of evidence are too difficult to implement.

#3. Imagine a world where people can get the evidence they need, and exchange it easily and transparently. What barriers do you believe are preventing that world from becoming a reality?
I believe there are a couple of sizeable barriers to our knowledge in the field. The first relates to access to information about the work of our peers. For a large number of workers in the field (evaluators who are not in higher education), access to journals of published work is difficult. There are no models for opening access to this work the way there is easy access to other resources (like music and videos). Paying $20 to read what may be 1 out of 50 articles you need for a project is simply not feasible.

Secondly, I believe access to individual-level student data (in accordance with FERPA standards, of course) continues to be more difficult than necessary. In this digital age, access is more often a matter of the will of the community stakeholders (administrators, teachers, school boards, project leadership) than a practical issue. It is vital that our understanding of progress in education target observable impact on individual students; we must continue to implement evaluation frameworks that test our theory of change from system-level factors (e.g., school policies) to classroom-level factors (e.g., teachers and resources) on clearly articulated student outcomes.

Where do you see technology making things better?
Technology has the potential to improve access to work universally when we have a business model that allows for the work to be reasonably profitable. As we move toward instruction that is grounded in data, technology can provide a vehicle for information flow, especially with automated tools like web services and data clouds.

#4. How do you prefer to share evidence with people, and explain it to them? Do you have a systematic way of doing it, or is there a format that you follow?
Well, many times there are reporting requirements set by the program officers funding the project, so those are always followed. However, the standards for evaluation have guidelines for communicating results and I try to incorporate these into my reports. I also try to create additional formats (like press releases, Executive Summaries) to provide access to information. Finally, I meet with all of my clients to discuss what the evaluation is saying about the project's progress (or how the evaluation needs to be improved to better inform the progress!).

I also have begun sharing my work on my blog, and I try to present at conferences annually. Professionally, I would like to do more published work, so that will be my next target.

What mistakes do you see people making when they explain evidence?
I would say poor evaluation frameworks or poor implementation of sound frameworks is at the root of a report that doesn't give the guidance that is needed. Spending time clearly articulating the theory of change down to the observable, measurable implementation goals for the core program elements is more often the problem than the data itself. Often folks go way beyond the evidence at hand to make generalizations that don't hold water.

#5. What do you want your legacy to be?
I'd like to be remembered for pursuing excellence in the field with integrity and respect for stakeholders while delivering high quality, useful information that changes the way things work. A very lofty goal that keeps me hopping!

#6. What question do you wish we'd asked?

"What is your most important resource?" My colleagues are my most important resource. I couldn't do my work with the hard work of my subcontractors and project partners and clients. My colleagues provide formal support (through partnerships on grants) as well as serve as an informal resource (to bounce ideas, confirm interpretations of evidence in tight spots) and just generally feed my evolution as a professional. My clients keep me motivated, with their passion for their projects, that my purpose is to provide the best evaluation services and seek out the best resources to meet the needs of each project.

Thanks for sharing these insights, Elizabeth.


____________________________

Chime in. Would you like to be interviewed, or do you have someone to recommend? Drop me a note at tracy AT evidencesoup DOT com.

Tuesday, 29 March 2011

Bad evidence for a good cause.

Remember a while back when craigslist promised to cut back on its 'adult' classifieds, in response to claims that ads for child prostitution (due to sex trafficking) were rampant on the site? Hmmm. It seems the public outcry -- however well-intended -- may have been based on bad evidence.

Junk science? This is far removed from my area of expertise, but folks who have looked closely at the situation say the so-called studies claiming exponential growth of child prostitution are based on junk science. Steve Doig, the Knight Chair in Journalism at Arizona State University, says one of the influential studies is based on a logical fallacy (to put it mildly). Some of the accusations about the research methodologies are mind-boggling; I suppose people are quick to accept findings when the stakes are so high.

Links to critiques are provided below. But first (pardon my ignorance), I'm not even sure what's been removed from craigslist since this whole kerfuffle erupted. I looked at their personals section today (for Denver), and saw recent listings for women seeking men, etc.

Evidently, I am clueless. But I don't see how removing some ads from craigslist will help the situation. If law enforcement officials want to hunt down sex traffickers, aren't they better off having the classifieds posted publicly, rather than having this stuff happen over private networks/text messages or whatever? It seems to me that transparency works in favor of law and order.

Here's some further reading about the evidence (or lack thereof) on sex trafficking in the U.S. and elsewhere:

Washington Post: Human Trafficking Evokes Outrage, Little Evidence: U.S. Estimates Thousands of Victims, But Efforts to Find Them Fall Short. 

The most recent cover story in Westword (the Denver version of Village Voice): Women's Funding Network sex-trafficking study is junk science.

Village Voice has put together a list of references, including the London Guardian article Inquiry fails to find single trafficker who forced anybody into prostitution.


Friday, 25 March 2011

Evidence-based basketball.

Happy Fun-with-Evidence Friday. I'm a bit of a baseball statistics geek (I worked for a while at STATS). I don't know as much about basketball, though I do enjoy March Madness.

I ran across a great Wall Street Journal article that presents some very interesting evidence about Bracketology. They set up 'blindfolded' descriptions of this year's men's NCAA teams without revealing their names. In Filling Out a Bias-Free Bracket [paywall], the WSJ explains that many of us can't objectively rank the teams because "We bring to the exercise not just simple and precise analytical thinking, but childhood prejudices, irrational fears, junk math, loony logic and enough emotional scars to cast a dozen horror movies."

Evidently, blindfolds are statistically significant. The WSJ asked several bball experts to set up brackets both with and without blindfolds. The findings? Only one person wound up with the same bracket in each case. You can try it yourself here: Wall Street Journal Blindfold Brackets are "a bias-free way of picking your NCAA bracket: choosing the teams without knowing their identities."

Also related to evidence-based basketball, here's a link to a video from the 2011 MIT Sloan Sports Analytics conference: Gut vs. Data: How do coaches make decisions? (Thanks to @ariegoldshlager for the link.)


Saturday, 19 March 2011

Evidence-based school design.

This is almost enough to make me want to be a kid again: Evidence-Based Design of Elementary and Secondary Schools by Peter Lippmann (here's the amazon page for this book).

Not your father's junior high. The school design being featured is gorgeous. "Evidence-Based Design of Elementary and Secondary Schools analyzes the current shift toward a modern architectural paradigm that balances physical beauty, and social awareness, and building technologies with functionality to create buildings that optimize the educational experience for all learners."


Lippmann says "The contemporary school must be a vibrant, living extension of its community. With assistance from research-intensive principles grounded in theories, concepts, and research methodologies - and with roots in the behavioral sciences - this book examines and provides strategies for pooling streams of information to establish a holistic design approach that is responsive to the changing needs of educators and their students."

Friday, 18 March 2011

Evidence-based smackdown! Denning says Sutton's "good boss" characterization doesn't follow the evidence.

What makes a "good boss"?* On the Harvard Business Review site, Bob Sutton suggests a list of 12 things that good bosses believe: Things like productive/intelligent handling of mistakes. Okay.

But over at Forbes.com, Steve Denning finds a crucial flaw in Sutton's characterization: He asks why this definition of "good boss" doesn't include a focus on the customer. Denning says that:

The contrast to a dictatorial command-style manager no doubt helped make the [Sutton] post very popular. From another perspective, however, there is one shocking, even mind-boggling, omission from the list: There is no mention of the client or the customer (or the stakeholder).... That’s because Sutton’s “good boss” is the natural inhabitant of hierarchical bureaucracy. This is a world that operates from an inside-out perspective, pushing products at customers, parsing and manufacturing demand, tweaking the value chain to achieve ever greater efficiencies, and concerned about outputs more than outcomes.... It leads to a preoccupation with style over substance.

As the author of Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management, Sutton presents his picture of a "good boss” as “evidence-based” and “rooted in real proof of efficacy.”

Sutton’s picture of the “good boss”, however, overlooks one huge mass of evidence: the comprehensive study of the productivity of 20,000 US companies from 1965 onward compiled by Deloitte’s Center for the Edge under the title of the Shift Index. Its conclusions? Running organizations as hierarchical bureaucracies has proved to be an utter disaster.

Denning seems to be asking "What does the evidence say about organizational structure, incentives, and long-term performance?" A question they'll be answering long after all of us are gone. Read the rest of Denning's post here.

*For me, any list of good boss characteristics has to start with "never uses the word 'boss'".

Wednesday, 16 March 2011

Learning to hug it out: The competing logics of the physician-patient relationship and business-like, evidence-based health care.

Here's an interesting take on the struggle to shift toward evidence-based medicine: Managing the Rivalry of Competing Institutional Logics, by Trish Reay and C. R. (Bob) Hinings. This Organization Studies article is free from Sage Online through April.

Sorry, I don't follow your logic. The authors look at institutional logics, which provide our organizing principles: they're "the basis of taken-for-granted rules guiding behaviour of field-level actors, and they ‘refer to the belief systems and related practices that predominate’.... Logics are an important theoretical construct because they help to explain connections that create a sense of common purpose and unity". But there's often rivalry between an incumbent logic and a New World Order.

Oh, Canada! When the Alberta (Canada) provincial health care system introduced "new business-like health care", it presented "a direct challenge to the previously dominant logic of medical professionalism. [Originally] the physician – patient relationship guided all service provision. Physicians used their professional knowledge to determine appropriate care for their patients, and other professionals followed physician requests or orders. Patients were expected to follow medical direction. And it was the role of government to provide sufficient funds to meet the need determined by physicians. Thus the logics of medical professionalism and business-like health care were competing. Each was associated with different organizing principles, and each required a different set of behaviours from actors within the field."

Alberta adopted new governance structures to increase efficiency and do more with less. The principles "associated with a business-like logic were cost-effective treatment, lowest-cost provider and customer satisfaction. In particular, physicians were singled out as significant cost drivers and identified as a major source of current problems. Consequently, physicians were legislatively excluded from holding board positions in the newly created Regional Health Authorities (RHAs).... Physicians did not agree with the principles of business-like health care as set out by government. They did not believe that patient care should be provided based on government determinations of cost-effectiveness and patient satisfaction. They particularly did not believe that their behaviour should be determined by those principles."

Let's hug it out. The health system and the physicians found some ways to work together. My favorite? "Working together against the government" (p. 641), which provided common ground for collaborating. They also separated medical decisions from other RHA decisions, and collaborated on experimental sites.

The data. The authors used archival documents, interview data, and their own observations over time as they investigated organizational change. A sample finding: "Physicians were angered by government actions excluding them from positions on RHA boards and proposals for substituting other (less expensive) professionals for physicians when possible. They believed that their opinions and expertise were no longer considered valuable and they spoke out in letters to the editor and other public outlets."

The findings: Managed rivalry. From the research abstract: This study "identified four mechanisms for managing the rivalry of competing logics that facilitated and strengthened the separate identities of key actors, thus providing a way for competing logics to co-exist and separately guide the behaviour of different actors."

Friday, 11 March 2011

A must-read: Yudkowsky's gentle, brilliant explanation of Bayesian reasoning and evidence. And why we overestimate breast cancer risk 85% of the time.

Happy Fun-with-Evidence Friday. No videos today, just an important lesson told in an entertaining way. Eliezer Yudkowsky is a research fellow at the Singularity Institute for Artificial Intelligence. Boy howdy, can he explain stuff. He's written some great explanations of Bayes' theorem for non-practitioners. [Thanks to @SciData (Mike Will) for linking to this.]

How Bayesian reasoning relies on evidence. Bayes depends on something called 'priors': These are prior probabilities (think of them as 'original' probabilities before additional information becomes available). We use evidence to establish the value of these priors (e.g., the proportion of people with a particular disease or condition). Bayes' theorem is then used to determine revised, or posterior, probabilities given some additional information, such as a test result. (Where do people get priors? "There's a small, cluttered antique shop in a back alley of San Francisco's Chinatown. Don't ask about the bronze rat.")

Yudkowsky opens by saying "Your friends and colleagues are talking about something called 'Bayes' Theorem' or 'Bayes' Rule', or something called Bayesian reasoning. They sound really enthusiastic about it, too, so you google and find a webpage about Bayes' Theorem and... It's this equation. That's all. Just one equation. The page you found gives a definition of it, but it doesn't say what it is, or why it's useful, or why your friends would be interested in it. It looks like this random statistics thing." Then he walks through a simple example about calculating breast cancer risk for a woman with a positive mammography result. Very hands-on, including little calculators like this:

[Screenshot: one of the interactive calculators from Yudkowsky's essay]

Risk, misunderstood 85% of the time? The scary part is this: "Next, suppose I told you that most doctors get the same wrong answer on this problem - usually, only around 15% of doctors get it right. ("Really?  15%?  Is that a real number, or an urban legend based on an Internet poll?" It's a real number. See Casscells, Schoenberger, and Grayboys 1978; Eddy 1982; Gigerenzer and Hoffrage 1995; and many other studies. It's a surprising result which is easy to replicate, so it's been extensively replicated.)"

Evidence slides probability up or down. I especially like Yudkowsky's description of how evidence 'slides' probability in one direction or another. For instance, in the breast cancer example, if a woman receives a positive mammography result, the revised probability of cancer slides from 1% to 7.8%, while a negative result slides the revised probability from 1% to 0.22%.
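To make that 'sliding' concrete, here's a minimal Python sketch of the Bayes update behind those numbers. The 1% prior comes straight from the example; the 80% sensitivity and 9.6% false-positive rate are assumed test characteristics (the figures Yudkowsky's worked example uses), chosen so the posteriors reproduce the 7.8% and 0.22% quoted above.

```python
# Minimal sketch of the Bayes update from the breast cancer example.
# The 1% prior is from the post; the 80% sensitivity and 9.6%
# false-positive rate are assumed test characteristics, chosen to
# reproduce the 7.8% / 0.22% posteriors mentioned above.

def posterior(prior, p_result_given_cancer, p_result_given_healthy):
    """P(cancer | test result) via Bayes' theorem."""
    numerator = prior * p_result_given_cancer
    denominator = numerator + (1 - prior) * p_result_given_healthy
    return numerator / denominator

prior = 0.01            # 1% of women in this group have breast cancer
sensitivity = 0.80      # P(positive mammogram | cancer) -- assumed
false_positive = 0.096  # P(positive mammogram | no cancer) -- assumed

pos = posterior(prior, sensitivity, false_positive)          # ~0.078
neg = posterior(prior, 1 - sensitivity, 1 - false_positive)  # ~0.0022

print(f"P(cancer | positive test) = {pos:.1%}")   # 7.8%  -- slides up
print(f"P(cancer | negative test) = {neg:.2%}")   # 0.22% -- slides down
```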

About priors. Yudkowsky reminds us that "priors are true or false just like the final answer - they reflect reality and can be judged by comparing them against reality. For example, if you think that 920 out of 10,000 women in a sample have breast cancer, and the actual number is 100 out of 10,000, then your priors are wrong. For our particular problem, the priors might have been established by three studies - a study on the case histories of women with breast cancer to see how many of them tested positive on a mammography, a study on women without breast cancer to see how many of them test positive on a mammography, and an epidemiological study on the prevalence of breast cancer in some specific demographic."

The Bayesian discussion references the classic Judgment under uncertainty: Heuristics and biases, edited by D. Kahneman, P. Slovic and A. Tversky. "If it seems to you like human thinking often isn't Bayesian... you're not wrong. This terrifying volume catalogues some of the blatant searing hideous gaping errors that pop up in human cognition."

You must read this. Yudkowsky's Bayesian discussion continues with a more in-depth example, eventually leading to a technical explanation of technical explanation. I recommend that one, too.

Monday, 07 March 2011

Looking for evidence showing you can get rich *and* be nice to people? Exhibit #1 is Steve Ells.

I've been meaning to write about Steve Ells and Chipotle for some time. Last night's premiere of America's Next Great Restaurant finally prompted me to do it (more about that in a minute). I've been eating at Chipotle since the first location opened (in Denver, where I happened to live nearby*). The food is better than ever; I highly recommend the posole.

The evidence. Good people policies, forward-thinking environmental programs, and good business results can co-exist. (Yes, there are others, such as Zappos.)

Good business. As the company took off, McDonald's invested substantially, and in 2006 spun off Chipotle as a successful IPO. Here's a writeup about their solid performance, and a look at EVA (economic value added). (And McDonald's - no stranger to hard evidence - presented evidence-based management methods to the younger chain, such as video-recording lines outside during busy lunch hours, so they could calculate how many customers left due to excessive wait times.... which don't exist, because even a line of 15 people moves through rather quickly.)

Do one thing really well. Clearly, Chipotle follows Steve Jobs' advice to "say no to 1,000 things." I've always wished they would add Asian options to the menu (hey, they've already got the rice -- and yippee! they recently added brown rice, which evidence shows has a lower glycemic index than white, and is associated with lower incidence of diabetes). But of course they're too smart and strategic for that: Now they're developing a separate chain of Asian-inspired restaurants (more news here). The word on the street says it will be called ShopHouse Southeast Asian Kitchen. I dearly hope they offer tofu.

Good environmental programs.  Chipotle emphasizes Food with Integrity: The company's supply chain demonstrates respect for the animals, the environment, and the farmers (ahem, I thought people who raised animals were called ranchers, but I'm from Oklahoma... what do I know?). All this comes at a price: It's not the cheapest food in town. But people will gladly pay for differentiation.

Good people policies. At Chipotle, a general manager can earn in the six figures (yes, over $100,000/yr). They hire a diverse group of people (and not just to satisfy some bogus quota). Take a look at this picture on their careers page. Over the years, I've eaten many times at more than a dozen of Chipotle's stores (in several U.S. states), and I can assure you that The Man isn't running these places.

Chipotle will help any employee become fluent in English. A restaurant reviewer whom I very much admire, Laura Shunk, waxes poetic about her experience working there -- and she's not the only foodie/culinary type who agrees the company has good people policies *and* good food.

Reality TV!! Ells is an investor/judge on the new NBC show America's Next Great Restaurant. (Bingo! His participation is great marketing for Chipotle.) What an incredible opportunity for the entrepreneur who wins: Ells will be a mentor as they open three locations in LA, Minneapolis, and New York. I can't think of a better way for a budding restaurateur to gather great evidence.

*The first Chipotle opened near the University of Denver (it's still there, at 1644 East Evans Avenue).