Evidence Soup
How to find, use, and explain evidence.


9 posts from April 2010

Friday, 30 April 2010

How do those Power Balance bracelets work? I think it's because of the 20-Hz difference between a genius and an ascending colon.

[Update: I wrote about hologram bracelets again on 5-Nov-2010.]

Happy Fun-with-Evidence Friday. Let's take a look at the Power Balance bracelet, which some people claim can improve athletic performance. PB is described as "performance technology that uses holograms embedded with frequencies that react positively with your body’s natural energy field".

PB uses a 'test' to convince people that the technology will work (more about that below). PowerBalance.com says a test video is coming soon. But they already have a heck of a testimonial: Shaquille O'Neal says “I came across Power Balance when someone did the test on me. That night, while playing for the Phoenix Suns, there were about three of my teammates with the product on and we won that game by 57 points! I kept feeling something when I wore the bracelet, so I kept wearing it. When I took it off I went back to normal. I’ve been wearing the bracelet ever since.”

The lucky charm effect? It could be that these work - at least temporarily - because the wearer believes they work. Thursday's Wall Street Journal had an excellent story on The Power of Lucky Charms. There is peer-reviewed evidence that "a belief in good luck can affect performance." An article in the upcoming June issue of Psychological Science reports a study in which "participants on a putting green who were told they were playing with a 'lucky ball' sank 6.4 putts out of 10, nearly two more putts, on average, than those who weren't told the ball was lucky. That is a 35% improvement." Positive thinking can make a difference in cases where someone has the ability to influence an outcome.

As shown here, Power Balance pendants are also available, made of "sterling silver from Bali.... equipped with two PB holograms embedded on the back under a clear epoxy resin window for easy viewing." SLAM, a basketball web site, recently did a Q&A with Josh Rodarmel, one of the Power Balance founders, who explained that his father "was into this technology that was similar to Power Balance, but it was like $500-600." So Rodarmel's brother, Troy, "started researching what Mylar bags were made of and figured out that it was basically a hologram. That’s how we became able to mass produce it at an affordable rate."

Just a stage trick? The bracelet is available for $49 US from Amazon, with mostly negative reviews (Amazon Marketplace merchants offer it for under $30). My favorite product review says "'Applied Kinesiology', which is the demonstration used to sell these bracelets, is an old stage magician's trick. If you fell for it, you've been had. This product is a con. There is no plausible hypothesis behind it, and well established stage trickery explains why the demonstration is always successful." (More about applied kinesiology demonstrations here.)

The site Rat Bags granted PB a 2009 Millenium Award, saying the product is effective only in that "The judges tried some of the wristbands on and they unanimously agreed that the bracelets certainly increased the strength of their laughter." Rat Bags recaps a television story featuring a "special bracelet that a skeptic could not explain", and quotes the company's Australian distributor as saying that the Mylar is supposed to restore the body's frequency to somewhere near the 7.83 hertz required.

Not so fast. But there's positive feedback also. On Examiner.com there's a review saying "the Power Balance bracelet - reviewed by triathletes and others - you have to try it". The story says "Research has shown that the cells of the body are capable of finding their own frequency. Other objects can also match a frequency." Quoting VibeLife.net: "For example, if you tap a 440 Hz tuning fork and move another 440 Hz tuning fork near it, the second tuning fork will duplicate the same tone (or oscillate) with the same tone as the one that was tapped." So the PB supposedly allows the body's cells to operate at their optimal level. Does this mean Shaq will score more 3-pointers in his next NBA game?

What's your frequency? This frequency stuff is huge. The researchers behind an addiction therapy site, NewWayClinic.com, explain that "The human body has a normal frequency range of 62 Hz to 68 Hz, as the frequency range lowers due to the constant use of alcohol or drugs, the worse the health condition becomes." Their reported findings include: normal brain frequency (head) 72-78 Hz, brain frequency at 80-82 Hz indicates a genius, healthy body (neck down) 62-68 Hz, disease begins & colds invade at 59-60 Hz, ascending colon 58-60 Hz, cancer can set in at 42 Hz, and death begins at 20 Hz. Meanwhile, fresh produce is supposed to have up to a 15-Hz frequency, while canned foods have zero. ("Invading pathogenic frequencies (toxins & viruses) are low. Positive beneficial bacterial frequencies are higher.")

The clinic claims that "By using the principles above it is possible to the remove the frequency patterns of nicotine, alcohol or drugs from the body and so help to the stop the physical dependency for that substance(s)." They apply a phase-cancellation technique to treat addiction ("By correctly analysing a substance and applying Formula 23®, we can create a wave, which is 180° out of phase.... [As] each wave cancels the other out, as a result the body requires less substance(s) each day to feel 'normal' until clean.")
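The only genuine physics being borrowed here is ordinary destructive interference: two identical waves shifted 180° apart really do sum to zero. Here's a minimal numeric sketch of that idea, and nothing more - the 440 Hz figure is just the tuning-fork example quoted above, and none of this says anything about 'removing frequency patterns' of substances from the body.

```python
import math

# Two equal-amplitude sine waves, the second shifted by 180 degrees (pi radians).
# Sampled across one cycle, their sum is essentially zero (up to floating-point
# noise) -- plain destructive interference, which is all that "180 degrees out
# of phase" actually means.
frequency_hz = 440.0   # the tuning-fork frequency mentioned above; any value works
samples = 16

for i in range(samples):
    t = i / (samples * frequency_hz)                        # time points spanning one cycle
    wave = math.sin(2 * math.pi * frequency_hz * t)
    anti_wave = math.sin(2 * math.pi * frequency_hz * t + math.pi)   # 180 degrees out of phase
    print(f"t={t:.6f}s  wave={wave:+.3f}  anti={anti_wave:+.3f}  sum={wave + anti_wave:+.6f}")
```

That the arithmetic checks out for sine waves is, of course, not evidence that it does anything for nicotine cravings.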

Show us the evidence. According to New Way, "there have been clinical studies conducted on the effectiveness of using the above techniques for asthma, allergies & skin disease." They list "success rates" in the range of 78%-93%, but the links they provided were broken, and all I got was 404 errors.

Thursday, 29 April 2010

What does the evidence say about sports kinesiology tape? The Wall Street Journal steps up its game.

Tuesday's Wall Street Journal featured a story about kinesiology tape. You know, the brightly colored tape that athletes wear in funny configurations when playing volleyball and other sports.

In Putting on the Stripes to Ease Pain, Laura Johannes recaps this popular method of treating injuries, along with the peer-reviewed research on it. Kudos to Johannes for focusing on the evidence. She writes at length about results from clinical trials - and provides more specifics about the published articles than many science journalists do.

"Two recent studies on Kinesio Tex showed some short-term effect. A study of 42 patients with shoulder pain, published in 2008 in the Journal of Orthopaedic & Sports Physical Therapy, found that range of motion improved immediately after application of kinesiology tape, compared with a sham taping using no tension. But the study found no significant difference in pain or overall disability scores. Last year, a study on 41 patients with whiplash after car accidents found statistically significant pain relief and improvements in range of motion with kinesiology taping compared with a sham tape.... [But] the changes were so small they 'may not be clinically meaningful.'"

Raising the bar. It's great to see the Wall Street Journal providing more details about the hard evidence in its health coverage. Let's hope other mainstream publications will do the same.

Wednesday, 28 April 2010

Rational Wiki tries using evidence to refute pseudoscience.

Let's take a look at Rational Wiki, a site dedicated to "analyzing and refuting pseudoscience, the anti-science movement, and crank ideas." (Thanks to @republicofmath for the tip.)

I like the concept, though much of the stuff on the site strikes me as overtly anti-religion and anti-conservative. I'm more interested in debunking in general -- liberals and atheists aren't immune to faulty reasoning, bad logic, and misplaced evidence.

Rational Wiki does provide some guidance, however, such as presenting Carl Sagan's Fine Art of Baloney Detection, where he describes common fallacies (arguing from authority, begging the question, etc.). And some of their pages do provide evidence and references to support what is being said: The herbal supplements page, for instance, lists published research such as Quality of reporting of randomized controlled trials of herbal medicine interventions. However, the presentation of the evidence on this site isn't anything particularly outstanding or new, and doesn't improve on information available elsewhere.


Tuesday, 20 April 2010

Download this guide for using evidence to assess accountability. (Good for newbies and for graybeards.)

The Nuffield Foundation has published a new guide to using evidence. This is a solid, practical resource for practitioners in government agencies, think tanks, or the private sector - whether or not you're in the UK, and whether or not you're auditing a government agency. You can download Evidence for accountability: Using evidence in the audit, inspection and scrutiny of UK government at no charge.

This would be an excellent introductory resource - and experienced people should use it as a refresher. [Thanks to Peter West at Continuous Innovation for the heads up (@WestPeter).] I like that it's organized around "eight principles for the effective use of evidence in these organisations."

  1. Be clear about what is expected of each audit, inspection and scrutiny project.
  2. Consider the appropriateness and feasibility of different methods and ensure that you have the necessary skills.
  3. Seek out a range of different kinds of evidence.
  4. Test the quality of evidence.
  5. Consider alternative interpretations of the evidence.
  6. Tailor reporting to the needs of different audiences.
  7. Check the implementation of findings and recommendations.
  8. Reflect on the lessons for future projects.

This isn't just another resource saying "pay more attention to research findings" - there's practical advice for folks who must navigate in a politically charged atmosphere and rely on conflicting evidence. For instance: Table 4, Quality Criteria for Evidence reviews four important criteria: relevance, robustness, sufficiency, and legitimacy. The checklist reminds us to look for:

  • Relevance: Salient to aspects of the topic (know what, know why, know how, and know who). Is it up to date? Is it specific or local?
  • Robustness: Factually accurate (error-free). Is it consistent and reliable? Is it representative (controlled for bias)? Traceable (replicable)?
  • Sufficiency: What counts as enough evidence is often a trade-off between the strength (and defensibility) of the findings, conclusions, and judgements resting on the evidence, and the constraints of time, cost, and burden that apply to the project.
  • Legitimacy: Including stakeholders in evidence-gathering, analysis, and discussion may shape how they view the quality of the evidence.

That last point is worth repeating: You can influence the perceived legitimacy of the evidence by including stakeholders in evidence-gathering, analysis, and discussion.

Principle 3 (Seek out a range of different kinds of evidence) is also a good reminder. The guide provides useful recaps, such as a table listing the most common sources of evidence. [Though I wonder how often anyone, except PhD students, has the time and resources to investigate each of these with real rigor.]

The guide even touches on the traditional policy/planning/evaluation cycle by talking about implementation, and how to increase the likelihood that recommendations will actually be followed. I suggest downloading this free pdf now, and sharing it with colleagues; it's a worthwhile refresher. Evidence for accountability: Using evidence in the audit, inspection and scrutiny of UK government.

Friday, 16 April 2010

WHAW? It's Homeopathy Awareness Week!

Update: Today's Dilbert strip (at Dilbert.com) weighs in on homeopathy.

Thanks to Alex Knapp for explaining on Outside the Beltway that homeopathy is "the belief that incredibly dilute solutions of non-pharmacologically active herbs can somehow cure disease... magic potions, minus the magic part. Also, minus the curing disease part."

Knapp learned from David Gorski on the always-good Science-Based Medicine site that it's "World Homeopathy Awareness Week (WHAW), or, as I like to call it, World Sympathetic Magic Awareness Week".

One of the claims made by homeopathy practitioners is that water has a "memory". Hell's Newsstand generously provided a concise analysis of that concept. [Shouldn't Hell's Newsstand ("Because Crazy Never Sleeps") be required reading?]

Of course, there is an argument, and a little evidence, supporting homeopathy - see, for instance, The Case FOR Homeopathic Medicine: The Historical and Scientific Evidence, which appeared on Huffington Post.

Both Knapp and Gorski link to a fun episode of the BBC show That Mitchell and Webb Look (the end is my favorite part).

In a comment on Gorski's post, Draal provides research findings about silicates in homeopathic preparations, which indicate that "The positive results from homeopathic in vitro studies could just be due to contaminants from the glassware." Oops!

Happy Fun-with-Evidence Friday, everybody. (And happy WHAW, too.) For an interesting (and serious) take on alternative medicine, and how it's fundamentally different from evidence-based Western medicine, see "Why Alternative Medicine Cannot Be Evidence-based" [pdf].

Thursday, 15 April 2010

I/O at Work summarizes the Industrial/Organizational evidence. Good idea, though it falls into the same trap as others do.

A new resource, I/O at Work, was launched recently by Alison Mallard, Ph.D. Here's what she says about the site: "Many consider Industrial/Organizational psychology as the science behind Human Resources, Organizational Development, Organizational Effectiveness, and Organizational Behavior. I/O AT WORK helps to bridge the gap between I/O research and its application in the HR world (and beyond) by making it easier for practitioners to access and stay on top of recent published research."

Sounds promising. I applaud the efforts of I/O at Work to present brief recaps of relevant evidence. I reviewed a summary of a peer-reviewed journal article, listed under decision-making. Here's an excerpt:

"The study examined cultural variables including power distance (the extent to which less powerful members of an organization not only expect but accept that power is distributed unequally) and collectivism (where the importance of the group supersedes the importance of the individual) in relation to importance ratings of job aspects (decision-making and interpersonal skills). They did find that for some jobs, employees in countries high in power distance and collectivism tended (remember, p > .05) to rate decision-making activities as lower in importance than employees in countries that are low in collectivism and power-distance. The take-home message from this article is that the job descriptions the authors used, which came from the O*NET, may not generalize to other countries. Read that last line again, because it's important. Practitioners need to be wary of using measures and job descriptions developed in this country in other countries, even for the same jobs."

The summary provides a link to the original source, an article titled The Transportability of Job Information Across Countries. Let's take a look at the authors' abstract:

"Three Occupational Information Network (O*NET) instruments (Generalized Work Activities, Basic and Cross-Functional Skills, Work Styles) were administered to 1,007 job incumbents, from 369 organizations, performing 1 of 3 jobs (first-line supervisor, office clerk, computer programmer) in New Zealand, China, and Hong Kong. Data from these countries were compared with archival data collected from 370 incumbents holding similar jobs in the United States. Hypothesized country differences, derived from cross-cultural theory, received limited support. The magnitude of differences in mean item ratings between incumbents from the United States and the other 3 countries were generally small to moderate in size, and rank-orderings of the importance and level of work activities and job requirements were quite similar, suggesting that, for most applications, job information is likely to transport quite well across countries."

It appears that plenty of effort went into both the original research and the IO recap. (Maybe it's me, but these seem like they're about somewhat different things.) By definition, summaries and abstracts will over-simplify.

So what's missing? Here's my take: What I see happening here isn't unique to this particular site - this is typical for scholarly and trade publications. I believe some significant flaws need to be addressed, so the format of evidence being presented will be more in sync with the web, more reader-friendly, and more searchable.

  • Explicit variables. Most research recaps provide substantial technical information. Yet they often fail to concisely explain what I call the 'A-and-B' of their findings: How does 'A' relate to 'B'? Of the variables that were considered, which ones are key, and how do they impact one another? In this instance, job descriptions, something called 'job information', and country of employment appear to be important variables: But I'm not sure I know exactly how they relate, based upon a reading of the abstract and the IOatWork writeup.
  • Structure. The content in the IO recap is no more structured than the content in the original material: Adding some structure, to make these things more repeatable and searchable, would be useful. (For example, they could explicitly identify the variables involved, and tag them with metadata. Or even simpler, they could specify a structure that provides for a title, a 'bottom line' statement, and some keywords. A minimal sketch of what that might look like appears after this list.)
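To make that concrete, here's a hypothetical sketch of the kind of structured recap the second bullet suggests, using the O*NET transportability study as the example. None of these field names come from I/O at Work - they're illustrative assumptions - but even this much structure would let readers search recaps by variable or keyword.

```python
# A hypothetical structured recap: title, bottom line, explicit variables, keywords.
# The field names and helper below are illustrative only, not anything I/O at Work uses.
research_recap = {
    "title": "The Transportability of Job Information Across Countries",
    "bottom_line": ("O*NET-based job information transported reasonably well across "
                    "the countries studied; differences were generally small to moderate."),
    "variables": {
        "independent": ["country of employment", "power distance", "collectivism"],
        "dependent": ["importance ratings of job aspects (e.g., decision-making)"],
    },
    "keywords": ["O*NET", "job analysis", "cross-cultural", "transportability"],
}

def find_by_keyword(recaps, keyword):
    """Return every recap tagged with the given keyword (case-insensitive)."""
    return [r for r in recaps if keyword.lower() in (k.lower() for k in r["keywords"])]

# With even this minimal structure, a reader can pull the bottom line directly:
print(find_by_keyword([research_recap], "o*net")[0]["bottom_line"])
```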

Looking for sponsors. My understanding is that this is a bootstrapped site that was recently launched, and that they're looking for contributors and sponsors. It seems to be set up on Typepad (at least the home page is). Overall, the functionality was fine. The articles I reviewed included links to share via Twitter, etc., so it meets the overall expectations for such a site.


Tuesday, 13 April 2010

Patient preference should be weighed as evidence.

A new article asks: 'New' Evidence for Clinical Practice Guidelines: Should We Search for 'Preference Evidence'? This is an important question, and I believe the answer is 'yes'. For healthcare providers, what evidence could be more valuable than evidence about patient preferences - particularly after patients have been presented with the medical evidence applicable to their situation?

Murray Krahn's article appears in the June 2010 issue of the journal The Patient: Patient-Centered Outcomes Research (pp. 71-77, $62.95). From the abstract:

"Clinical practice guidelines (CPGs) are systematically developed statements to assist both patient and practitioner decisions. They link the practice of medicine more closely to the body of underlying evidence, shift the burden of evidence review from the individual practitioner to experts, and aim to improve the quality of care. [But] CPGs do not routinely search for or include evidence related to patients' values and preferences. We argue that they should. We think that such evidence can tell us whether a decision is preference sensitive; how patients feel about important health outcomes, treatment goals, and decisions; and whether preferences vary in different types of patients. The likely effects... are a general sensitization to the importance of preferences in decision making, the recognition that some decisions are simply all about preferences, a more considered approach to forming preferences among patients and other stakeholders, and more effective integration of preferences into decisions."

This reminds me of a post I wrote a while back about a concept called value-based medicine: Oh, dear. Are we going to need another word for evidence? The idea, presented in the book Evidence-Based To Value-Based Medicine, is that practice should evolve from evidence-based medicine to an even higher quality of patient care, described as value-based medicine. This approach measures patient-perceived value and integrates the relevant costs of health care interventions, allowing a more accurate measure of the overall worth to stakeholders. The authors evaluate medical treatments in the context of the patient's quality of life: A quality-adjusted life-year (QALY) is used to measure the total value conferred, taking into account improvements in both the length and the quality of life.
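The QALY arithmetic itself is simple, and a small sketch makes the concept concrete. This is just the standard years-times-utility calculation, not anything specific to the book, and the durations and utility weights below are made-up numbers for illustration.

```python
# Standard QALY arithmetic: each period of life contributes
# (duration in years) x (quality-of-life utility), where 1.0 = perfect health
# and 0.0 = death. All numbers below are hypothetical, for illustration only.
def qalys(periods):
    """periods: list of (years, utility) tuples."""
    return sum(years * utility for years, utility in periods)

# Hypothetical comparison of two courses of treatment for the same patient:
with_treatment = [(2.0, 0.9), (8.0, 0.7)]   # 2 good years, then 8 years at reduced quality
without_treatment = [(6.0, 0.5)]            # 6 years at lower quality of life

print(f"QALYs with treatment:    {qalys(with_treatment):.1f}")     # 7.4
print(f"QALYs without treatment: {qalys(without_treatment):.1f}")  # 3.0
print(f"QALY gain:               {qalys(with_treatment) - qalys(without_treatment):.1f}")  # 4.4
```

Value-based medicine, as the book describes it, then weighs that kind of patient-perceived gain against the cost of the intervention.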

Surely 'patient-perceived value' and 'preference' are overlapping. Regardless of the terminology, I believe we need fresh attitudes about what counts as evidence, and how it's integrated into the practice of medicine one patient at a time.

McKinsey has evidence showing equity analyst forecasts are still pretty bogus.

The McKinsey Quarterly has a new analysis of something we've seen before: The discrepancy between analyst forecasts and actual market performance. In fact, we've seen evidence of this for many years. In Equity analysts: Still too bullish, McKinsey points out that "After almost a decade of stricter regulation, analysts’ earnings forecasts continue to be excessively optimistic." [I'm not sure, but you may need to be logged in (with a free account) to view this article.]

The evidence usually doesn't speak for itself, but in this case, a chart tells the story quite well. The lines represent forecasts, and the dots show where the market actually closed.

[Chart: McKinsey study of equity analyst forecasts]

The McKinsey authors close with some good advice: "Executives, as the evidence indicates, ought to base their strategic decisions on what they see happening in their industries rather than respond to the pressures of forecasts."

Thursday, 08 April 2010

Heavy hitters get sloppy with the evidence about changing consumer energy usage with real-time displays.

It's a tempting idea: Changing consumer behavior by providing immediate feedback on energy usage. (Think real-time display inside the house to keep people informed about their electricity consumption and pricing.) This week, some heavy hitters promoted this idea in a letter to Barack Obama. But instead of backing up their argument with evidence, they simply referred to "studies" and "experience".

Not good enough. Is this kind of statement acceptable? It's a letter, after all, not a research report, so how much hard evidence is appropriate? I think they should include something - footnotes, at least. By vaguely referring to "studies," they open the door to an examination of the evidence. So where is it?

Google, a sponsor of this letter, is famously data-driven. Would Google executives accept such an unsupported statement if someone were pitching an investment to them? (For folks who want more information, the letter refers us to a spokesman at Google. I will follow up.)

Wolf in sheep's clothing. Too often, statements that "studies show" are accepted without challenge. These claims are poorly disguised as evidence-based management, when in fact they are its enemy.

The letter (signed by AT&T, Best Buy, Dow, Environmental Defense, GE, Google, Intel, HP, Nokia, and others) says in part: "Dear Mr. President: We are writing to ask that your Administration adopt the goal of giving every household and business access to timely, useful and actionable information on their energy use. By giving people the ability to monitor and manage their energy consumption, for instance, via their computers, phones or other devices, we can unleash the forces of innovation in homes and businesses.... Studies and experience show that when people have access to direct feedback on their electricity use, they can achieve significant savings through simple behavioral changes." (Information Week coverage here.)

Phil Carson of Intelligent Utility did a great job of explaining that although this idea may work, more evidence is needed. It just so happens that this week researchers from The Brattle Group presented a webinar on "Effects of In-Home Displays on Energy Consumption: A Summary of Pilot Results" [pdf of presentation here]. Carson asks "Does direct feedback on energy use change consumer behavior?" and finds that "a dozen pilot programs indicated the answer is 'yes,' and the average savings was 7 percent. But... Sanem Sergici pointed out major caveats to the results of each pilot, such that many if not most did not rise to the scientific standard of reproducible results."

Gold star for Mr. Carson. The Intelligent Utility piece ended by saying: "Faruqui's conclusion: Utilities may 'borrow' disparate industry data to make a point, but if they are mulling expensive programs and major capital decisions, they'd better spend the time and money in their own service areas to research the rationale for those decisions. Research is expensive, Faruqui added, but that cost is a tiny fraction of the cost of an ill-informed, major decision. The phrase 'studies show' helps make a convincing letter to the president. But it behooves us all to ensure that the data really is there to support the statement."

Investment without evidence? Here's the original of the energy letter [pdf]. The actions requested include: "Encourage the purchase and installation of technologies, devices and methods of delivery that will help ensure timely, secure, and clear information on energy consumption is available to consumers. To that end, we request that you consider access to this information as part of any program aimed at improving home and building energy performance."
