Evidence Soup
How to find, use, and explain evidence.

Tuesday, 21 June 2016

Free beer! and the "Science of X".


1. Free beer for a year for anyone who can work perfume, velvety voice, and 'Q1 revenue goals were met' into an appropriate C-Suite presentation.
Prezi is a very nice tool enabling you to structure a visual story, without forcing a linear, slide-by-slide presentation format. The best part is you can center an entire talk around one graphic or model, and then dive into details depending on audience response. (Learn more in our writeup on How to Present Like a Boss.)

Now there's a new marketing campaign, the Science of Presentations. Prezi made a darn nice web page. And the ebook offers several useful insights into how to craft and deliver a memorable presentation (e.g., enough with the bullet points already).

But in their pursuit of click-throughs, they've gone too far. It's tempting to claim you're following the "Science of X". To some extent, Prezi provides citations to support its recommendations: The ebook links to a few studies on audience response and so forth. But that's not a "science" - the citations don't always connect to what's actually being suggested to business professionals. Example: "Numerous studies have found that metaphors and descriptive words or phrases — things like 'perfume' and 'she had a velvety voice' - trigger the sensory cortex.... On the other hand, when presented with nondescriptive information — for example, 'The marketing team reached all of its revenue goals in Q1' — the only parts of our brain that are activated are the ones responsible for understanding language. Instead of experiencing the content with which we are being presented, we are simply processing it."

Perhaps in this case "simply processing" the good news is enough experience for a busy executive. But our free beer offer still stands.

2. How should medical guidelines be communicated to patients?

And now for the 'Science of Explaining Guidelines'. It's hard enough to get healthcare professionals to agree on a medical guideline - and then follow it. But it's also hard to decide whether/how those recommendations should be communicated to patients. Many of the specifics are intended for providers' consumption, to improve their practice of medicine. Although it's essential that patients understand relevant evidence, translating a set of recommendations into lay terms is quite problematic.

Groups publish medical guidelines to capture evidence-based recommendations for addressing a particular disease. Sometimes these are widely accepted - and other times not. The poster-child example of breast cancer screening illustrates why patients, and not just providers, must be able to understand guidelines. Implementation Science recently published the first systematic review of methods for disseminating guidelines to patients.

Not surprisingly, the study found weak evidence of methods that are consistently feasible. "Key factors of success were a dissemination plan, written at the start of the recommendation development process, involvement of patients in this development process, and the use of a combination of traditional and innovative dissemination tools." (Schipper et al.)

3. Telling a story with data.
In the Stanford Social Innovation Review (SSIR), @JakePorway explains three things great data storytellers do differently [possible paywall]. Jake is with @DataKind, "harnessing the power of data science in service of humanity".

 

Photo credit: Christian Hornick on Flickr.

Tuesday, 14 June 2016

Mistakes we make, Evidence Index, and Naturals vs Strivers.


1. Mistakes we make when sharing insights.
We've all done this: Hurried to share valuable, new information and neglected to frame it meaningfully, thus slowing the impact and possibly alienating our audience. Michael Schrage describes a perfect example, taken from The Only Rule Is It Has to Work, a fantastic book about analytics innovation.

The cool thing about the book is that it's a Moneyball for the rest of us. Ben Lindbergh and Sam Miller had the rare opportunity to experiment and apply statistics to improve the performance of the Sonoma Stompers, a minor league baseball team in California wine country. But they had to do it with few resources, and learn leadership skills along the way.

The biggest lesson they learned was the importance of making their findings easy to understand. As Schrage points out in his excellent Harvard Business Review piece, the authors were frustrated at the lack of uptake: They didn't know how to make the information meaningful and accessible to managers and coaches. Some people were threatened, others merely annoyed: "Predictive analytics create organizational winners and losers, not just insights."

2. Naturals vs. Strivers: Why we lie about our efforts.
Since I live in Oakland, I'd be remiss if I didn't include a Steph Curry story this week. But there's lots more to it: LeBron James is a natural basketball player, and Steph is a striver; they're both enormously popular, of course. But Ben Cohen explains that people tend to prefer naturals, whether we recognize it or not: We favor those who just show up and do things really well. So strivers lie about their efforts.

Overachievers launch into bad behavior, such as claiming to sleep only four hours a night. Competitive pianists practice in secret. Social psychology research has found that we like people described as naturals, even when we're being fooled.

3. How do government agencies apply evidence?
Results for America has evaluated how U.S. agencies apply evidence to decisions, and developed an index synthesizing their findings. It's not easily done. @Results4America studied factors such as "Did the agency use evidence of effectiveness when allocating funds from its five largest competitive grant programs in FY16?" The Departments of Housing and Labor scored fairly high. See the details behind the index [pdf here].

Photo credit: Putin classical pianist on Flickr.

 

Wednesday, 08 June 2016

Grit isn't the answer, plus Scrabble and golf analytics.


1. Poor kids already have grit: Educational Controversy, 2016 edition.
All too often, we run with a sophisticated, research-based idea, oversimplify it, and run it into the ground. 2016 seems to be the year for grit. Jean Rhodes, who heads up the Chronicle of Evidence-Based Mentoring (@UMBmentoring), explains that grit is not a panacea for the problems facing disadvantaged youth. "Grit: The Power of Passion and Perseverance, Professor Angela Duckworth's new bestseller on the topic, has fueled both enthusiasm for such efforts as well as debate among those of us who worry that it locates the problem (a lack of grit) and solution (training) in the child. Further, by focusing on exemplars of tenacity and success, the book romanticizes difficult circumstances. The forces of inequality that, for the most part, undermine children's success are cast as contexts for developing grit. Moreover, when applied to low-income students, such self-regulation may privilege conformity over creative expression and leadership. Thus, it was a pleasure to come across a piece by Stanford doctoral student Ethan Ris on the history and application of the concept." Ris first published his critique in the Journal of Educational Controversy and recently wrote a piece for the Washington Post, The problem with teaching 'grit' to poor kids? They already have it.

2. Does Scrabble have its own Billy Beane?
It had to happen: Analytics for Scrabble. But it might not be what you expected. WSJ explains why For World’s Newest Scrabble Stars, SHORT Tops SHORTER.

Wellington Jighere and other players from Nigeria are shaking up the game, using analytics to support a winning strategy favoring five-letter words. Most champions follow a “long word” strategy, making as many seven- and eight-letter plays as possible. But analytics have brought that "sacred Scrabble shibboleth into question, exposing the hidden risks of big words."

Jighere has been called the Rachmaninoff of rack management, often saving good letters for a future play rather than scoring an available bingo. (For a pre-Jighere take on the world of Scrabble, see Word Wars.)

3. Golf may have a Billy Beane, too.
This also had to happen. Mark Broadie (@MarkBroadie) is disrupting golf analytics with his 'strokes gained' system. In his 2014 book, Every Shot Counts, Broadie rips apart assumptions long regarded as sacrosanct - maxims like 'drive for show, putt for dough'. "The long game explains about two-thirds of scoring differences and the short game and putting about one-third. This is true for amateurs as well as pros." To capture and analyze data, Broadie developed a GolfMetrics program. He is the Carson Family Professor of Business at Columbia Business School, and has a PhD in operations research from Stanford. He has presented at the Sloan Sports Analytics Conference.

Pros have begun benefiting from golf analytics, including Danny Willett, winner of this year's Masters. He has thanked @15thClub, a new analytics firm, for helping him prep better course strategy. 15th Club provided insight for Augusta’s par-5 holes. As WSJ explained, the numbers show that when players lay up, leaving their ball short of the green to avoid a water hazard, they fare better when doing so as close to the green as possible, rather than the more distant spots where players typically take their third shots.

4. Evidence-based government on the rise.
In the US, "The still small, but growing, field of pay for success made significant strides this week, with Congress readying pay for success legislation and the Obama administration announcing a second round of grants through the Social Innovation Fund (@SIFund)."

5. Man + Machine = Success.
Only Humans Need Apply is a new book by Tom Davenport (@tdav) and @JuliaKirby. Cognitive computing combined with human decision making is what will succeed in the future. @DeloitteBA led a recent Twitter chat: Man-machine: The dichotomy blurs, which included @RajeevRonanki, the lead for their cognitive consulting practice.

Monday, 06 June 2016

How women decide, Pay for Success, and Chief Cognitive Officers.

1. Do we judge women's decisions differently?

Cognitive psychologist Therese Huston's new book is How Women Decide: What's True, What's Not, and What Strategies Spark the Best Choices. It may sound unscientific to suggest there's a particular way that several billion people make decisions, and to her credit the author is careful about drawing overly specific conclusions.

The book covers some of the usual decision analysis territory: The process of analyzing data to inform decisions. By far the most interesting material isn't about how choices are made, but how they are judged: The author makes a good argument that women's decisions are evaluated differently than men's, by both men and women. Quick example: Marissa Mayer being hung out to dry for her ban on Yahoo! staff working from home, while Best Buy's CEO mostly avoided bad press after a similar move. Why are we often quick to question a woman's decision, but inclined to accept a man's?

Huston offers concrete strategies for defusing the stereotypes that can lead to this double standard. Again, it's dangerous to speak too generally. But the book presents evidence of gender bias in how people's choices are interpreted and perceived. Worthwhile reading. Sheelah Kolhatkar reviewed it for The New York Times.

2. Better government through Pay for Success.
In Five things to know about pay for success legislation, Urban Institute staff explain their support for the Social Impact Partnership to Pay for Results Act (SIPPRA), which is being considered in the US House. Authors are Justin Milner (@jhmilner), Ben Holston (@benholston), and Rebecca TeKolste.

Under SIPPRA, state and local governments could apply for funding through outcomes-driven “social impact partnerships” like Pay for Success (PFS). This funding would require strong evidence and rigorous evaluation, and would accommodate projects targeting a wide range of outcomes: unemployment, child welfare, homelessness, and high school graduation rates.

One of the key drivers behind SIPPRA is its proposed fix for the so-called wrong pockets problem, where one agency bears the cost of a program, while others benefit as free riders. "The bill would provide a backstop to PFS projects and compensate state and local governments for savings that accrue to federal coffers." Thanks to Meg Massey (@blondnerd).

3. The rise of the Chief Cognitive Officer.
On The Health Care Blog, Dan Housman describes The Rise of the Chief Cognitive Officer. "The upshot of the shift to cognitive clinical decision support is that we will likely increasingly see an evolving marriage and interdependency between the worlds of AI (artificial intelligence) thinking and human provider thinking within medicine." Housman, CMO for ConvergeHealth by Deloitte, proposes a new title of CCO (Chief Cognitive Officer) or CCMO (Chief Cognitive Medical Officer) to modernize the construct of CMIO (Chief Medical Information Officer), and maintain a balance between AI and humans. For example, "If left untrained for a year or two, should the AI lose credentials? How would training be combined between organizations who have different styles or systems of care?"

Tuesday, 17 May 2016

Magical thinking about ev-gen, your TA is a bot, and Foursquare predicts stuff really well.


1. Magical thinking about ev-gen.
Rachel E. Sherman, M.D., M.P.H., and Robert M. Califf, M.D. of the US FDA have described what is needed to develop an evidence generation system - and must be playing a really long game. "The result? Researchers will be able to distill the data into actionable evidence that can ultimately guide clinical, regulatory, and personal decision-making about health and health care." Recent posts are Part I: Laying the Foundation and Part II: Building Out a National System. Sherman and Califf say "There must be a common approach to how data is presented, reported and analyzed and strict methods for ensuring patient privacy and data security. Rules of engagement must be transparent and developed through a process that builds consensus across the relevant ecosystem and its stakeholders." Examples of projects reflecting these concepts include: Sentinel Initiative (querying claims data to identify safety issues), PCORNet (leveraging EHR data in support of pragmatic clinical research), and NDES (the National Device Evaluation System).

2. It pays to play the long game with data.
Michael Carney shares great examples in So you want to build a data business? Play the long game. These include "Foursquare demonstrating, once again, that it’s capable of predicting public company earnings with an incredible degree of accuracy based on real world foot traffic data.... On April 12, two weeks in advance of the beleaguered restaurant chain’s quarterly earnings report, Foursquare CEO Jeff Glueck published a detailed blog post outlining a decline in foot traffic to Chipotle’s stores and predicting Q1 sales would be 'Down Nearly 30%.' Yesterday, the burrito brand reported a 29.7% decline in quarter over quarter earnings.... Kudos to the company for persisting in the face of public scrutiny and realizing the true potential of its location-based behavioral graph."

3. Meet Jill Watson, AI TA.
Turns out, college students often submit 10,000 questions to their teaching assistants. Per class, per semester. So a Georgia Tech prof experimented with using IBM's Watson Analytics AI engine to pretend to be a live TA - and pulled it off. Cool stories from The Verge and Wall Street Journal.

4. Burst of unsettling healthcare news.
- So now that we know more about the cost of our healthcare, evidence suggests price transparency doesn't seem to cut our outpatient spending. Healthcare reform is hard.

- Recent findings indicate patient-centered medical homes aren't cutting Medicare costs. Buzzkill via THCB.

- Ever been told to have surgery where they do the most procedures? Some data show high-volume surgeries aren't so closely linked to better patient outcomes. Ouch.

Wednesday, 11 May 2016

How Integrative Propositional Analysis shapes evidence into a graph.


I admire any effort to create a simple presentation of complex evidence. Having developed some models of my own, I know I’m on the right track when someone’s initial response is “That’s too simplistic; it’s much more complicated.” I believe you’ll really struggle if you don’t begin with a top-down perspective.

Now we can choose from useful frameworks for synthesizing and rating the quality and relevance of evidence: GRADE for medical evidence, and the U.S. Dept. of Education's evidence guidelines are just two examples. Integrative Propositional Analysis (IPA) is a method of integrating and analyzing the propositions (theories) stated in a study, strategic plan, or other document.

Bernadette Wright and Steven Wallis write in Sage Open that IPA structures relationships and quantitatively measures the inter-connectedness among concepts found within theories. I think this is a promising idea. IPA is briefly introduced in Three Ways of Getting to Policy-Based Evidence: Why researchers and practitioners are shifting away from expensive new studies toward the effective synthesis of existing research (Stanford Social Innovation Review). The ‘three ways’ are Randomista (requiring randomized experiments to generate evidence), Explainista (requiring strong data with synthesized explanation), and Mapista (preferring a holistic knowledge map of a policy, program, or issue).

Graphista? IPA falls under Mapista, but we might instead say Graphista, since it constructs a graph and applies straightforward analytics. These six steps are involved:

  1. Find the logical statements/propositions in a theory (found in a publication).
  2. Diagram the propositions (a box for each concept/term, an arrow for each causal link).
  3. Combine those smaller diagrams where they overlap to create a larger diagram.
  4. Count the number of concepts with two or more causes (“concatenated” concepts).
  5. Count the total number of concepts in the theory (“Complexity”).
  6. Divide concatenated concepts by total concepts to assess “Systemicity.”
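Steps 4 through 6 amount to simple counting on a causal graph. Here's a minimal sketch in Python; the function name and the example propositions are invented for illustration:

```python
# Sketch of IPA's complexity/systemicity counts (steps 4-6), assuming a
# theory has already been diagrammed as (cause, effect) links.
def ipa_scores(links):
    """links: list of (cause, effect) pairs from the combined diagram."""
    concepts = {c for pair in links for c in pair}
    # Count how many causes feed each effect
    cause_counts = {}
    for cause, effect in links:
        cause_counts[effect] = cause_counts.get(effect, 0) + 1
    concatenated = sum(1 for n in cause_counts.values() if n >= 2)  # step 4
    complexity = len(concepts)                                      # step 5
    systemicity = concatenated / complexity                         # step 6
    return complexity, systemicity

# Toy theory: training -> skill, practice -> skill, skill -> performance.
# Four concepts total; "skill" is the only concatenated concept.
complexity, systemicity = ipa_scores(
    [("training", "skill"), ("practice", "skill"), ("skill", "performance")])
```

Here complexity is 4 and systemicity is 0.25, since one of the four concepts has two or more causes.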

Quantifying complexity and systemicity. Wright and Wallis explain that “The systemicity score computed in the final step is a key measure of causal inter-relatedness in IPA. The greater the proportion of concepts in a theory that are concatenated, the more the theory’s concepts are causally interrelated (Wallis, 2013). In previous studies across diverse fields in the physical and social sciences, paradigm-changing scientific theories have shown greater systemicity (inter-connectedness among concepts) than earlier, less successful scientific theories (Wallis 2010a).” Returning to the graph comparison, this brings to mind graph connectivity.

Integrative Propositional Analysis map

Talking this week with Bernadette Wright, she added that “Policy research is an applied science. It has the same problem of all applied sciences. It’s not enough to make new discoveries. We also need to apply existing knowledge to real-world problems. Integrative Propositional Analysis and related mapping techniques provide a rigorous way to connect existing studies into a larger pattern. This lets managers quickly see what’s known on a topic. So they can use that information to make a bigger difference for the people they serve.”

I have some questions, such as: Do different people construct the same IPA maps for the same theories? The authors also raise the question of inter-rater reliability in their discussion (page 7).

Cool idea. Wright and Wallis have developed a gamified version of IPA, where people co-create knowledge maps for experiential learning. For more on that and some other insights, go to AEA365 - A Tip-a-Day for Evaluators.

Tuesday, 03 May 2016

Bitcoin for learning, helping youth with evidence, and everyday health evidence.


1. Bitcoin tech records people's learning.
Ten years from now, what if you could evaluate a job candidate by reviewing their learning ledger, a blockchain-administered record of their learning transactions - from courses they took to books they read to work projects they completed? And what if you could see their work product (papers etc.) rather than just their transcript and grades? Would that be more relevant and useful than knowing what college degree they had?

This is the idea behind Learning is Earning 2026, a future system that would reward any kind of learning. The EduBlocks Ledger would use the same blockchain technology that runs Bitcoin. Anyone could award these blocks to anyone else. As explained by Marketplace Morning Report, the Institute for the Future is developing the EduBlocks concept.

 


2. Is market share a valuable metric?
Market share is an important metric for improving profitability only in certain cases. Neil T. Bendle and Charan K. Bagga explain in the MIT Sloan Management Review that popular marketing metrics, including market share, are regularly misunderstood and misused.

Well-known research in the 1970s suggested a link between market share and ROI. But now most evidence shows it's a correlational relationship, not causal.

 


3. Evidence-based ways to close gaps in crime, poverty, education.
The Laura and John Arnold Foundation launched a $15 million Moving the Needle Competition, which will fund state and local governments and nonprofits implementing highly effective ways to address poverty, education, and crime. The competition is recognized as a key evidence-based initiative in White House communications about My Brother’s Keeper, a federal effort to address persistent opportunity gaps.

Around 250 communities have responded to the My Brother’s Keeper Community Challenge with $600+ million in private sector and philanthropic grants, plus $1 billion in low-interest financing. Efforts include registering 90% of Detroit's 4-year-olds in preschool, private-sector “MBK STEM + Entrepreneurship” commitments, and a Summit on Preventing Youth Violence.

Here's hoping these initiatives are evaluated rigorously, and the ones demonstrating evidence of good or promising outcomes are continued.

 


4. Everyday health evidence.
Evidence for Everyday Health Choices is a new series by @UKCochraneCentr, offering quick rundowns of the systematic reviews on a pertinent topic. @SarahChapman30 leads the effort. Nice recent example inspired by Eddie Izzard: Evidence on stretching and other techniques to improve marathon performance and recovery: Running marathons Izzard enough: what can help? [Photo credit: Evidence for Everyday Health Choices.]

5. Short Science = Understandable Science.
Short Science allows people to publish summaries of research papers; they're voted on and ranked until the best/most accessible summary has been identified. The goal is to make seminal ideas in science accessible to the people who want to understand them. Anyone can write a summary of any paper in the Short Science database. Thanks to Carl Anderson (@LeapingLlamas).

Tuesday, 26 April 2016

Baseball decisions, actuaries, and streaming analytics.


1. SPOTLIGHT: How are innovations in baseball analytics like data science?
Last week, I spoke at Nerd Nite SF about recent developments in baseball analytics. Highlights from my talk:

- Data science and baseball analytics are following similar trajectories. There's more and more data, but people struggle to find predictive value. Oftentimes, executives are less familiar with technical details, so analysts must communicate findings and recommendations in ways that are palatable to decision makers. The role of analysts, and the challenges they face, are described beautifully by Adam Guttridge and David Ogren of NEIFI.

- 'Inside baseball' is full of outsiders with fresh ideas. Bill James is the obvious/glorious example - and Billy Beane (Moneyball) applied great outsider thinking. Analytics experts joining front offices today are also outsiders, but valued because they understand prediction; the same goes for anyone seeking to transform a corporate culture to evidence-based decision making.

Tracy Altman @ Nerd Nite SF
- Defensive shifts may number 30,000 this season, up from 2,300 five years ago (John Dewan prediction). On-the-spot decisions are powered by popup iPad spray charts with shift recommendations for each opposing batter. And defensive stats are finally becoming a reality.

- Statcast creates fantastic descriptive stats for TV viewers; potential value for team management is TBD. Fielder fly-ball stats are new to baseball and sort of irresistible, especially the 'route efficiency' calculation.

- Graph databases, relatively new to the field, lend themselves well to analyzing relationships - and supplement what's available from a conventional row/column database. Learn more at FanGraphs.com. And topological maps (Ayasdi and Baseball Prospectus) are a powerful way to understand player similarity. Highly dimensional data are grouped into nodes, which are connected when they share a common data point - this produces a topo map grouping players with high similarity.
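The topological-map idea can be illustrated with a toy one-dimensional "mapper": cover a stat's range with overlapping bins, group players sharing a bin into a node, and connect nodes that share a player. This is only a sketch of the concept, not Ayasdi's or Baseball Prospectus's actual method, and the player data are made up:

```python
# Toy 1-D 'mapper': overlapping intervals over a single stat (0-100 scale).
# Nodes hold similar players; edges link nodes that share a player.
def toy_mapper(players, width=10, step=5):
    """players: dict of name -> stat value. Returns (nodes, edges)."""
    nodes = {}
    for name, value in players.items():
        for start in range(0, 101, step):
            if start <= value < start + width:  # intervals overlap by width-step
                nodes.setdefault(start, set()).add(name)
    edges = {(a, b) for a in nodes for b in nodes
             if a < b and nodes[a] & nodes[b]}
    return nodes, edges

players = {"Alpha": 12, "Bravo": 14, "Charlie": 40}
nodes, edges = toy_mapper(players)
# Alpha and Bravo land in the same bins and form one connected cluster;
# Charlie sits in a separate component.
```

Real implementations do this over many dimensions with a clustering step inside each bin, but the grouping-then-connecting idea is the same.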

2. Will AI replace insurance actuaries?
10+ years ago, a friend of Ugly Research joined a startup offering technology to assist actuaries making insurance policy decisions. It didn't go all that well - those were early days, and it was difficult for people to trust an 'assistant' who was essentially a black box model. Skip ahead to today, when #fintech competes in a world ready to accept AI solutions, whether they augment or replace highly paid human beings. In Could #InsurTech AI machines replace Insurance Actuaries?, the excellent @DailyFintech blog handicaps several tech startups leading this effort, including Atidot, Quantemplate, Analyze Re, FitSense, and Wunelli.

3. The blind leading the blind in risk communication.
On the BMJ blog, Glyn Elwyn contemplates the difficulty of shared health decision-making, given people's inadequacy at understanding and communicating risk. Thanks to BMJ_ClinicalEvidence (@BMJ_CE).

4. You may know more than you think.
Maybe it's okay to hear voices. Evidence suggests the crowd in your head can improve your decisions. Thanks to Andrew Munro (@AndrewPMunro).

5. 'True' streaming analytics apps.
Mike Gualtieri of Forrester (@mgualtieri) put together a nice list of apps that stream real-time analytics. Thanks to Mark van Rijmenam (@VanRijmenam).

Wednesday, 20 April 2016

How to lead people through evidence-based decisions.


There's no shortage of books on strategy and decision-making - and many of them can seem out of touch. This one is worthwhile reading: Decision Quality: Value Creation from Better Business Decisions by Carl Spetzler, Hannah Winter, and Jennifer Meyer (Wiley 2016).

The authors are decision analysis experts with the well-known, Palo Alto-based Strategic Decisions Group. Instead of presenting schemes or templates for making decisions, they get to the heart of the matter: Decision quality, when making big decisions or smaller choices. How will you decide? How will you teach your team to make high-quality decisions? And how will you define 'high quality'?

For example, for a healthcare formulary decision, outline in advance what findings will be considered. Cost-effectiveness modeling? Real-world evidence? How will evidence be weighted - possibly using multi-criteria decision analysis? How will uncertainty be factored in?
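A simple multi-criteria score is one way to make that weighting explicit up front. The criteria, weights, and candidate scores below are purely hypothetical:

```python
# Hypothetical MCDA-style weighted scoring for a formulary decision.
# All criteria, weights, and ratings are invented for illustration.
def weighted_score(scores, weights):
    """scores: criterion -> rating (0-10); weights should sum to 1."""
    return sum(scores[c] * weights[c] for c in weights)

weights = {"cost_effectiveness": 0.5, "real_world_evidence": 0.3, "safety": 0.2}
drug_a = {"cost_effectiveness": 7, "real_world_evidence": 6, "safety": 9}
drug_b = {"cost_effectiveness": 8, "real_world_evidence": 5, "safety": 7}
# Compare candidates under the agreed weights; vary the weights to see
# how sensitive the ranking is to the decision makers' priorities.
```

Agreeing on the weights before seeing the candidates is what keeps the process honest.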

 

“If you want to change the culture of an organization, change the way people make decisions.” -Vincent Barabba

Key takeaways from this book:

- You can lead a meaningful change by encouraging people to fully understand why it's the decision process, not the outcome, that is under their control.

- Teach your team to make high-quality decisions. Build organizational capability so people use similar language and methods to assess evidence and analyze decisions.

- Get more buy-in with a better process, from initial concept to execution. Judge the quality of a decision as you go along.

 

Tuesday, 12 April 2016

Better evidence for patients, and geeking out on baseball.


1. SPOTLIGHT: Redefining how patients get health evidence.

How can people truly understand evidence and the tradeoffs associated with health treatments? How can the medical community lead them through decision-making that's shared - but also evidence-based?

Hoping for cures, patients and their families anxiously Google medical research. Meanwhile, the quantified selves are gathering data at breakneck speed. These won't solve the problem. However, this month's entire Health Affairs issue (April 2016) focuses on consumer uses of evidence and highlights promising ideas.

  • Translating medical evidence. Lots of synthesis and many guidelines are targeted at healthcare professionals, not civilians. Knowledge translation has become an essential piece, although it doesn't always involve patients at early stages. The Boot Camp Translation process is changing that. The method enables leaders to engage patients and develop healthcare language that is accessible and understandable. Topics include colon cancer, asthma, and blood pressure management.
  • Truly patient-centered medicine. Patient engagement is a buzzword, but capturing patient-reported outcomes in the clinical environment is a real thing that might make a big difference. Danielle Lavallee led an investigation into how patients and providers can find more common ground for communicating.
  • Meaningful insight from wearables. These are early days, so it's probably not fair to take shots at the gizmos out there. It will be a beautiful thing when sensors and other devices can deliver more than alerts and reports - and make valuable recommendations in a consumable way. And of course these wearables can play a role in routine collection of patient-reported outcomes.



2. Roll your own analytics for fantasy baseball.
For some of us, it's that special time of year when we come to the realization that our favorite baseball team is likely going home early again this season. There's always fantasy baseball, and it's getting easier to geek out with analytics to improve your results.

3. AI engine emerges after 30 years.
No one ever said machine learning was easy. Cyc is an AI engine that reflects 30 years of building a knowledge base. Now its creator, Doug Lenat, says it's ready for prime time. Lucid is commercializing the technology. Personal assistants and healthcare applications are in the works.

Photo credit: fitbit one by Tatsuo Yamashita on Flickr.