Evidence Soup
How to find, use, and explain evidence.

64 posts categorized "learning & education"

Tuesday, 09 August 2016

Health innovation, foster teens, NBA, Gwyneth Paltrow.


1. Behavioral economics → Healthcare innovation.
Jaan Sidorov (@DisMgtCareBlog) writes on the @Health_Affairs blog about roadblocks to healthcare innovation. Behavioral economics can help us truly understand resistance to change, including unconscious bias, so valuable improvements will gain more traction. Sidorov offers concise explanations of hyperbolic discounting, experience weighting, social utility, predictive value, and other relevant economic concepts. He also recommends specific tactics when presenting a technology-based innovation to the C-Suite.
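If hyperbolic discounting is new to you, the arithmetic is simple. Here's a minimal sketch (my own illustrative numbers, not Sidorov's) of why a small adoption cost felt today can outweigh a much larger benefit that arrives years from now:

```python
def hyperbolic_value(amount, delay_years, k=1.0):
    """Perceived present value under hyperbolic discounting: V = A / (1 + k * D).

    k is a subjective impatience parameter; all values here are illustrative only.
    """
    return amount / (1 + k * delay_years)

# An innovation with a modest cost now and a big payoff in two years
cost_today = hyperbolic_value(-10, delay_years=0)    # felt in full: -10.0
benefit_later = hyperbolic_value(40, delay_years=2)  # discounted to about 13.3
print(cost_today + benefit_later)  # still positive, but far less compelling than 40 - 10
```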

2. Laptops → Foster teen success.
Nobody should have to type their high school essays on their phone. A coalition including Silicon Valley leaders and public sector agencies will ensure all California foster teens can own a laptop computer. Foster Care Counts reports evidence that "providing laptop computers to transition age youth shows measurable improvement in self-esteem and academic performance". KQED's California Report ran a fine story.

For a year, researchers at USC's School of Social Work surveyed 730 foster youth who received laptops, finding that "not only do grades and class attendance improve, but self-esteem and life satisfaction increase, while depression drops precipitously."

3. Analytical meritocracy → Better NBA outcomes.
The Innovation Enterprise Sports Channel explains how the NBA draft is becoming an analytical meritocracy. Predictive models help teams evaluate potential picks, including some they might have overlooked. Example: Andre Roberson, who played very little college ball, was drafted successfully by Oklahoma City based on analytics. It's tricky combining projections for active NBA teams with prospects who may never take the court. One decision aid is ESPN’s Draft Projection model, using Statistical Plus/Minus to predict how someone would perform through season five of a hypothetical NBA career. ESPN designates each player as a Superstar, Starter, Role Player, or Bust, to facilitate risk-reward assessments.
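ESPN's actual model is proprietary, but the general mechanics are easy to picture. Here's a toy sketch of mapping a projected Statistical Plus/Minus to those risk-reward buckets; the Prospect class, label function, and thresholds are all invented for illustration, not ESPN's:

```python
from dataclasses import dataclass

@dataclass
class Prospect:
    name: str
    projected_spm_year5: float  # projected Statistical Plus/Minus in season five

def label(prospect: Prospect) -> str:
    """Map a projected plus/minus to a coarse risk-reward bucket (illustrative cutoffs)."""
    spm = prospect.projected_spm_year5
    if spm >= 3.0:
        return "Superstar"
    if spm >= 1.0:
        return "Starter"
    if spm >= -1.0:
        return "Role Player"
    return "Bust"

print(label(Prospect("Example Guard", 1.4)))  # Starter
```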

4. Celebrity culture → Clash with scientific evidence.
Health law and policy professor Timothy Caulfield (@CaulfieldTim) examines the impact of celebrity culture on people's choices of diet and healthcare. His new book asks Is Gwyneth Paltrow Wrong About Everything?: How the Famous Sell Us Elixirs of Health, Beauty & Happiness. Caulfield cites many, many peer-reviewed sources of evidence.

Evidence & Insights Calendar:

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

February 22-23; London UK. Evidence Europe 2017. How pharma, payers, and patients use real-world evidence to understand and demonstrate drug value and improve care.

Photo credit: Foster Care Counts.

Tuesday, 02 August 2016

Business coaching, manipulating memory for market research, and female VCs.


1. Systematic review: Does business coaching make a difference?
In PLOS ONE, Grover and Furnham present findings of their systematic review of coaching impacts within organizations. They found glimmers of hope for positive results from coaching, but also spotted numerous holes in research designs and data quality.

Over the years, outcome measures have included job satisfaction, performance, self-awareness, anxiety, resilience, hope, autonomy, and goal attainment. Some have measured ROI, although that metric seems particularly subjective. In terms of organizational impacts, researchers have measured transformational leadership and performance as rated by others. This systematic review included only professional coaches, whether internal or external to the organization. Thanks @Rob_Briner and @IOPractitioners.

2. Memory bias pollutes market research.
David Paull of Dialsmith hosted a series about how flawed recall and memory bias affect market research. (Thanks to @kristinluck.)

All data is not necessarily good data. “We were consistently seeing a 13–20% misattribution rate on surveys due in large part to recall problems. Resultantly, you get this chaos in your data and have to wonder what you can trust.... Rather than just trying to mitigate memory bias, can we actually use it to our advantage to offset issues with our brands?”

The ethics of manipulating memory. “We can actually affect people’s nutrition and the types of foods they prefer eating.... But should we deliberately plant memories in the minds of people so they can live healthier or happier lives, or should we be banning the use of these techniques?”

Mitigating researchers' memory bias. “We’ve been talking about memory biases for respondents, but we, as researchers, are also very prone to memory biases.... There’s a huge opportunity in qual research to apply an impartial technique that can mitigate (researcher) biases too....[I]n the next few years, it’s going to be absolutely required that anytime you do something that is qualitative in nature that the analysis is not totally reliant on humans.”

3. Female VC → No gender gap for startup funding.
New evidence suggests female entrepreneurs should choose venture capital firms with female partners (SF Business Times). Michigan's Sahil Raina analyzed data to compare the gender gap in successful exits from VC financing between two sets of startups: those initially financed by VCs with only male general partners (GPs), and those initially financed by VCs that include female GPs. “I find a large performance gender gap among startups financed by VCs with only male GPs, but no such gap among startups financed by VCs that include female GPs.”
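The comparison itself is straightforward once startups are grouped by the make-up of their first VC. A rough sketch with made-up data (Raina's dataset and methods are more involved, of course):

```python
import pandas as pd

# Hypothetical data: one row per startup (made-up values, for illustration only)
startups = pd.DataFrame({
    "founder_female":   [True, True, False, False, True, False, True, False],
    "vc_has_female_gp": [True, False, True, False, False, True, False, True],
    "successful_exit":  [True, False, True, True, False, True, False, True],
})

# Exit rate by founder gender, within each type of VC
rates = (startups
         .groupby(["vc_has_female_gp", "founder_female"])["successful_exit"]
         .mean())
print(rates)  # compare the female/male gap under male-only GPs vs. VCs with female GPs
```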

4. Sharing evidence about student outcomes.
Results for America is launching an Evidence in Education Lab to help states, school districts, and individual schools build and use evidence of 'what works' to improve student outcomes. A handful of states and districts will work closely with RFA to tackle specific data challenges.

Background: The bipartisan Every Student Succeeds Act (ESSA) became law in December 2015. ESSA requires, allows, and encourages the use of evidence-based approaches that can help improve student outcomes. Results for America estimates that ESSA's evidence provisions could help shift more than $2B US of federal education funds in each of the next four years toward evidence-based, results-driven solutions.

Evidence & Insights Calendar:

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 26 July 2016

Evidence relativism, innovation as a process, and decision analysis pioneer.


1. Panning for gold in the evidence stream.
Patrick Lester introduces his new SSIR article by saying "With evidence-based policy, we need to acknowledge that some evidence is more valid than others. Pretending all evidence is equal will only preserve the status quo." In Defining Evidence Down, the director of the Social Innovation Research Center responds to analysts skeptical of evidence hierarchies developed to steer funding toward programs that fit the "what works" concept.

Are levels valid? Hierarchies recognize different levels of evidence according to their rigor and certainty. These rankings are well-established in healthcare, and are becoming the standard for evidence evaluation within the Dept of Education and other US government agencies. Critics of this prevailing thinking (Gopal & Schorr, Friends of Evidence) want to ensure decision-makers embrace an inclusive definition of evidence that values qualitative research, case studies, insights from experience, and professional judgment. Lester contends that "Unfortunately, to reject evidence hierarchies is to promote a form of evidence relativism, where everyone is entitled to his or her own views about what constitutes good evidence in his or her own local or individualized context."

Ideology vs. evidence. "By resisting the notion that some evidence is more valid than others, they are defining evidence down. Such relativism would risk a return to the past, where change has too often been driven by fads, ideology, and politics, and where entrenched interests have often preserved the status quo." Other highlights: "...supporting a broad definition of evidence is not the same thing as saying that all evidence is equally valid." And "...randomized evaluations are not the only rigorous way to examine systems-level change. Researchers can often use quasi-experimental evaluations to examine policy changes...."

2. Can innovation be systematic?
Everyone wants innovation nowadays, but how do you make it happen? @HighVizAbility reviews a method called Systematic Inventive Thinking, an approach to creativity, innovation, and problem solving. The idea is to run innovation as a repeatable process, rather than relying on random ideas. Advocates say SIT doesn't replace unbridled creativity, but instead complements it.

3. Remembering decision analysis pioneer Howard Raiffa.
Howard Raiffa, co-founder of the Harvard Kennedy School of Government and decision analysis pioneer, passed away recently. He was also a Bayesian decision theorist and well-known author on negotiation strategies. Raiffa considered negotiation analysis an opportunity for both sides to get value, describing it as The Science and Art of Collaborative Decision Making.

4. Journal impact factor redux?
In the wake of news that Thomson Reuters sold its formula, Stat says changes may finally be coming to the "hated" journal impact factor. Ivan Oransky (@ivanoransky) and Adam Marcus (@armarcus) explain that some evidence suggests science articles don't receive the high number of citations supposedly predicted by the IF. The American Society of Microbiologists has announced that it will abandon the metric completely. Meanwhile, top editors from Nature — which in the past has taken pride in its IF — have coauthored a paper widely seen as critical of the factor.
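For anyone wondering what all the fuss is about, the two-year impact factor is simple arithmetic; the numbers below are illustrative only:

```python
def impact_factor(citations_this_year_to_prev_two, items_published_prev_two):
    """Two-year journal impact factor.

    IF for year Y = citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2.
    """
    return citations_this_year_to_prev_two / items_published_prev_two

# Illustrative numbers only
print(impact_factor(citations_this_year_to_prev_two=2450, items_published_prev_two=480))  # ~5.1
```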

Photo credit: Poke of Gold by Mike Beauregard

Tuesday, 05 July 2016

Brain training isn't smart, physician peer pressure, and #AskforEvidence.


1. Spending $ on brain training isn't so smart.
It seems impossible to listen to NPR without hearing from their sponsor, Lumosity, the brain-training company. The target demo is spot on: NPR will be the first to tell you its listeners are the "nation's best and brightest". And bright people don't want to slow down. Alas, spending hard-earned money on brain training isn't looking like a smart investment. New evidence seems to confirm suspicions that this $1 billion industry is built on hope, sampling bias, and placebo effect. Ars Technica says researchers have concluded that earlier, mildly positive "findings suggest that recruitment methods used in past studies created self-selected groups of participants who believed the training would improve cognition and thus were susceptible to the placebo effect." The study, Placebo Effects in Cognitive Training, was published in the Proceedings of the National Academy of Sciences.

It's not a new theme: In 2014, 70 cognitive scientists signed a statement saying "The strong consensus of this group is that the scientific literature does not support claims that the use of software-based 'brain games' alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease."



2. Ioannidis speaks out on usefulness of research.
After famously claiming that most published research findings are false, John Ioannidis now tells us Why Most Clinical Research Is Not Useful (PLOS Medicine). So, what are the key features of 'useful' research? The problem needs to be important enough to fix. Prior evidence must be evaluated to place the problem into context. Plus, we should expect pragmatism, patient-centeredness, monetary value, and transparency.



3. To nudge physicians, compare them to peers.
Doctors are overwhelmed with alerts and guidance. So how do you intervene when a physician prescribes antibiotics for a virus, despite boatloads of evidence showing they're ineffective? Comparing a doc's records to peers is one promising strategy. Laura Landro recaps research by Jeffrey Linder (Brigham and Women's, Harvard): "Peer comparison helped reduce prescriptions that weren’t warranted from 20% to 4% as doctors got monthly individual feedback about their own prescribing habits for 18 months.

"Doctors with the lower rates were told they were top performers, while the rest were pointedly told they weren’t, in an email that included the number and proportion of antibiotic prescriptions they wrote compared with the top performers." Linder says “You can imagine a bunch of doctors at Harvard being told ‘You aren’t a top performer.’ We expected and got a lot of pushback, but it was the most effective intervention.” Perhaps this same approach would work outside the medical field.

4. Sports analytics taxonomy.
INFORMS is a professional society focused on Operations Research and Management Science. The June issue of their ORMS Today magazine presents v1.0 of a sports analytics taxonomy (page 40). This work, by Gary Cokins et al., demonstrates how classification techniques can be applied to better understand sports analytics. Naturally this includes analytics for players and managers in the major leagues. But it also includes individual sports, amateur sports, franchise management, and venue management.

5. Who writes the Internet, anyway? #AskforEvidence
Ask for Evidence is a public campaign that helps people request for themselves the evidence behind news stories, marketing claims, and policies. Sponsored by @senseaboutsci, the campaign has new animations on YouTube, Twitter, and Facebook. Definitely worth a like or a retweet.

Calendar:
September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

Tuesday, 21 June 2016

Free beer! and the "Science of X".


1. Free beer for a year for anyone who can work perfume, velvety voice, and 'Q1 revenue goals were met' into an appropriate C-Suite presentation.
Prezi is a very nice tool enabling you to structure a visual story, without forcing a linear, slide-by-slide presentation format. The best part is you can center an entire talk around one graphic or model, and then dive into details depending on audience response. (Learn more in our writeup on How to Present Like a Boss.)

Now there's a new marketing campaign, the Science of Presentations. Prezi made a darn nice web page. And the ebook offers several useful insights into how to craft and deliver a memorable presentation (e.g., enough with the bullet points already).

But in their pursuit of click-throughs, they've gone too far. It's tempting to claim you're following the "Science of X". To some extent, Prezi provides citations to support its recommendations: The ebook links to a few studies on audience response and so forth. But that's not a "science" - the citations don't always connect to what they're suggesting to business professionals. Example: "Numerous studies have found that metaphors and descriptive words or phrases — things like 'perfume' and 'she had a velvety voice' - trigger the sensory cortex.... On the other hand, when presented with nondescriptive information — for example, 'The marketing team reached all of its revenue goals in Q1' — the only parts of our brain that are activated are the ones responsible for understanding language. Instead of experiencing the content with which we are being presented, we are simply processing it."

Perhaps in this case "simply processing" the good news is enough experience for a busy executive. But our free beer offer still stands.

2. How should medical guidelines be communicated to patients?

And now for the 'Science of Explaining Guidelines'. It's hard enough to get healthcare professionals to agree on a medical guideline - and then follow it. But it's also hard to decide whether/how those recommendations should be communicated to patients. Many of the specifics are intended for providers' consumption, to improve their practice of medicine. Although it's essential that patients understand relevant evidence, translating a set of recommendations into lay terms is quite problematic.

Groups publish medical guidelines to capture evidence-based recommendations for addressing a particular disease. Sometimes these are widely accepted - and other times not. The poster-child example of breast cancer screening illustrates why patients, and not just providers, must be able to understand guidelines. Implementation Science recently published the first systematic review of methods for disseminating guidelines to patients.

Not surprisingly, the study found weak evidence of methods that are consistently feasible. "Key factors of success were a dissemination plan, written at the start of the recommendation development process, involvement of patients in this development process, and the use of a combination of traditional and innovative dissemination tools." (Schipper et al.)

3. Telling a story with data.
In the Stanford Social Innovation Review (SSIR), @JakePorway explains three things great data storytellers do differently [possible paywall]. Jake is with @DataKind, "harnessing the power of data science in service of humanity".

 

Photo credit: Christian Hornick on Flickr.

Tuesday, 14 June 2016

Mistakes we make, Evidence Index, and Naturals vs Strivers.


1. Mistakes we make when sharing insights.
We've all done this: Hurried to share valuable, new information and neglected to frame it meaningfully, thus slowing the impact and possibly alienating our audience. Michael Schrage describes a perfect example, taken from The Only Rule Is It Has to Work, a fantastic book about analytics innovation.

The cool thing about the book is that it's a Moneyball for the rest of us. Ben Lindbergh and Sam Miller had the rare opportunity to experiment and apply statistics to improve the performance of the Sonoma Stompers, a minor league baseball team in California wine country. But they had to do it with few resources, and learn leadership skills along the way.

The biggest lesson they learned was the importance of making their findings easy to understand. As Schrage points out in his excellent Harvard Business Review piece, the authors were frustrated at the lack of uptake: They didn't know how to make the information meaningful and accessible to managers and coaches. Some people were threatened, others merely annoyed: "Predictive analytics create organizational winners and losers, not just insights."

2. Naturals vs. Strivers: Why we lie about our efforts.
Since I live in Oakland, I'd be remiss without a Steph Curry story this week. But there's lots more to it: LeBron James is a natural basketball player, and Steph is a striver; they're both enormously popular, of course. But Ben Cohen explains that people tend to prefer naturals, whether we recognize it or not: We favor those who just show up and do things really well. So strivers lie about their efforts.

Overachievers launch into bad behavior, such as claiming to sleep only four hours a night. Competitive pianists practice in secret. Social psychology research has found that we like people described as naturals, even when we're being fooled.

3. How do government agencies apply evidence?
Results for America has evaluated how U.S. agencies apply evidence to decisions, and developed an index synthesizing their findings. It's not easily done. @Results4America studied factors such as "Did the agency use evidence of effectiveness when allocating funds from its five largest competitive grant programs in FY16?" The Departments of Housing and Labor scored fairly high. See the details behind the index [pdf here].

Photo credit: Putin classical pianist on Flickr.

 

Wednesday, 08 June 2016

Grit isn't the answer, plus Scrabble and golf analytics.


1. Poor kids already have grit: Educational Controversy, 2016 edition.
All too often, we run with a sophisticated, research-based idea, oversimplify it, and run it into the ground. 2016 seems to be the year for grit. Jean Rhodes, who heads up the Chronicle of Evidence-Based Mentoring (@UMBmentoring), explains that grit is not a panacea for the problems facing disadvantaged youth. "Grit: The Power of Passion and Perseverance, Professor Angela Duckworth’s new bestseller on the topic, has fueled both enthusiasm for such efforts as well as debate among those of us who worry that it locates the problem (a lack of grit) and solution (training) in the child. Further, by focusing on exemplars of tenacity and success, the book romanticizes difficult circumstances. The forces of inequality that, for the most part, undermine children’s success are cast as contexts for developing grit. Moreover, when applied to low-income students, such self-regulation may privilege conformity over creative expression and leadership. Thus, it was a pleasure to come across a piece by Stanford doctoral student, Ethan Ris, on the history and application of the concept." Ris first published his critique in the Journal of Educational Controversy and recently wrote a piece for the Washington Post, The problem with teaching ‘grit’ to poor kids? They already have it.

2. Does Scrabble have its own Billy Beane?
It had to happen: Analytics for Scrabble. But it might not be what you expected. WSJ explains why For World’s Newest Scrabble Stars, SHORT Tops SHORTER.

Wellington Jighere and other players from Nigeria are shaking up the game, using analytics to support a winning strategy favoring five-letter words. Most champions follow a “long word” strategy, making as many seven- and eight-letter plays as possible. But analytics have brought that "sacred Scrabble shibboleth into question, exposing the hidden risks of big words."

Jighere has been called the Rachmaninoff of rack management, often saving good letters for a future play rather than scoring an available bingo. (For a pre-Jighere take on the world of Scrabble, see Word Wars.)

3. Golf may have a Billy Beane, too.
This also had to happen. Mark Broadie (@MarkBroadie) is disrupting golf analytics with his 'strokes gained' system. In his 2014 book, Every Shot Counts, Broadie rips apart assumptions long regarded as sacrosanct - maxims like 'drive for show, putt for dough'. "The long game explains about two-thirds of scoring differences and the short game and putting about one-third. This is true for amateurs as well as pros." To capture and analyze data, Broadie developed a GolfMetrics program. He is the Carson Family Professor of Business at Columbia Business School, and has a PhD in operations research from Stanford. He has presented at the Sloan Sports Analytics Conference.
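The strokes-gained idea itself is simple: compare the expected strokes-to-hole before and after each shot, minus the one stroke you just spent. The baseline values below are rough illustrations, not Broadie's published tables:

```python
# Strokes gained for a single shot:
#   (expected strokes from starting spot) - (expected strokes from ending spot) - 1
# Baseline expectations are illustrative placeholders, not Broadie's actual data.

baseline = {("tee", 446): 4.10, ("fairway", 116): 2.85, ("green", 8): 1.50, ("holed", 0): 0.0}

def strokes_gained(start, end):
    return baseline[start] - baseline[end] - 1

drive = strokes_gained(("tee", 446), ("fairway", 116))     # +0.25: a better-than-average drive
approach = strokes_gained(("fairway", 116), ("green", 8))  # +0.35: a strong approach
putt = strokes_gained(("green", 8), ("holed", 0))          # +0.50: holing an 8-footer
print(drive, approach, putt)
```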

Pros have begun benefiting from golf analytics, including Danny Willett, winner of this year's Masters. He has thanked @15thClub, a new analytics firm, for helping him prep better course strategy. 15th Club provided insight for Augusta’s par-5 holes. As WSJ explained, the numbers show that when players lay up, leaving their ball short of the green to avoid a water hazard, they fare better when doing so as close to the green as possible, rather than the more distant spots where players typically take their third shots.

4. Evidence-based government on the rise.
In the US, "The still small, but growing, field of pay for success made significant strides this week, with Congress readying pay for success legislation and the Obama administration announcing a second round of grants through the Social Innovation Fund (@SIFund)."

5. Man + Machine = Success.
Only Humans Need Apply is a new book by Tom Davenport (@tdav) and @JuliaKirby. Cognitive computing combined with human decision making is what will succeed in the future. @DeloitteBA led a recent Twitter chat: Man-machine: The dichotomy blurs, which included @RajeevRonanki, the lead for their cognitive consulting practice.

Tuesday, 03 May 2016

Bitcoin for learning, helping youth with evidence, and everyday health evidence.


1. Bitcoin tech records people's learning.
Ten years from now, what if you could evaluate a job candidate by reviewing their learning ledger, a blockchain-administered record of their learning transactions - from courses they took, books they read, or work projects they completed? And what if you could see their work product (papers etc.) rather than just their transcript and grades? Would that be more relevant and useful than knowing what college degree they had?

This is the idea behind Learning is Earning 2026, a future system that would reward any kind of learning. The EduBlocks Ledger would use the same blockchain technology that runs Bitcoin. Anyone could award these blocks to anyone else. As explained by Marketplace Morning Report, the Institute for the Future is developing the EduBlocks concept.
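The core idea of a blockchain-administered ledger is a chain of records, each stamped with the hash of the one before it. Here's a toy sketch of that mechanism (not EduBlocks' actual design, which isn't spelled out in the report):

```python
import hashlib, json, time

def add_block(chain, record):
    """Append a learning record to a hash-chained ledger (a toy sketch only)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "timestamp": time.time(), "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

ledger = []
add_block(ledger, {"learner": "alice", "awarded_by": "mentor_bob", "activity": "Completed statistics course"})
add_block(ledger, {"learner": "alice", "awarded_by": "employer", "activity": "Shipped analytics project"})

# Tampering with an earlier record breaks the chain of hashes
print(all(ledger[i]["prev_hash"] == ledger[i - 1]["hash"] for i in range(1, len(ledger))))  # True
```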

 


2. Is market share a valuable metric?
Only in certain cases is market share an important metric for figuring out how to make more profits. Neil T. Bendle and Charan K. Bagga explain in the MIT Sloan Management Review that popular marketing metrics, including market share, are regularly misunderstood and misused.

Well-known research in the 1970s suggested a link between market share and ROI. But now most evidence shows it's a correlational relationship, not causal.

 


3. Evidence-based ways to close gaps in crime, poverty, education.
The Laura and John Arnold Foundation launched a $15 million Moving the Needle Competition, which will fund state and local governments and nonprofits implementing highly effective ways to address poverty, education, and crime. The competition is recognized as a key evidence-based initiative in White House communications about My Brother’s Keeper, a federal effort to address persistent opportunity gaps.

Around 250 communities have responded to the My Brother’s Keeper Community Challenge with $600+ million in private sector and philanthropic grants, plus $1 billion in low-interest financing. Efforts include registering 90% of Detroit's 4-year-olds in preschool, private-sector “MBK STEM + Entrepreneurship” commitments, and a Summit on Preventing Youth Violence.

Here's hoping these initiatives are evaluated rigorously, and the ones demonstrating evidence of good or promising outcomes are continued.

 


4. Everyday health evidence.
Evidence for Everyday Health Choices is a new series by @UKCochraneCentr, offering quick rundowns of the systematic reviews on a pertinent topic. @SarahChapman30 leads the effort. A nice recent example, inspired by Eddie Izzard, covers evidence on stretching and other techniques to improve marathon performance and recovery: Running marathons Izzard enough: what can help? [Photo credit: Evidence for Everyday Health Choices.]

5. Short Science = Understandable Science.
Short Science allows people to publish summaries of research papers; they're voted on and ranked until the best/most accessible summary has been identified. The goal is to make seminal ideas in science accessible to the people who want to understand them. Anyone can write a summary of any paper in the Short Science database. Thanks to Carl Anderson (@LeapingLlamas).

Tuesday, 16 February 2016

How to present evidence with a single slide.

Two ways to present evidence effectively.

1. Throw out your slide deck and try the Extreme Presentation method, developed by Andrew Abela and Paul Radich during years of presentations at Procter & Gamble, McKinsey, and other leading companies. The technique involves first showing the audience the big-picture concept so they'll immediately have a sense of the problem, and where you’re going. Then zero in on the various issues - no need to plod along through slide after slide.

Ballroom or conference room? Who’s your audience? Let that determine the tone and format of your talk. Here, we’re focused on presenting to executive decision makers, communicating complex information such as market research findings or solutions-oriented sales proposals.



Looking for new ways to communicate health economics research and other medical evidence? Join me and other speakers at the 2nd annual HEOR Writing workshop in March.                                      


Avoid SME’s disease. As subject matter experts, it’s easy to fall into the trap of going into far more detail than the audience can absorb - and running out of time before reaching the important conclusion. Extreme Presentation helps people avoid that trap by focusing on a single, clear problem or idea. The accompanying handbook, Encyclopedia of Slide Layouts, offers numerous example diagrams for telling a visual story - with names like minefield, process improvement, and patient path. And the website has a 10-step design tool for specifying objectives, sequencing evidence and anecdotes, and measuring success.

Radich recently explained the concept in an excellent webcast, Unleash the Power of Your Data and Evidence With Visual Storytelling. He presented one detailed diagram, and used Prezi to zoom into different sections as he discussed them.

Complication ⇒ Resolution. In the webcast, Abela offered excellent advice on handling audience objections. Rather than wait for Q&A (and going on the defensive), it's better to address likely concerns during the talk, and resolve each one with an example.

2. What to do with your hands. Distracting gestures can substantially weaken the impact of a presentation. @PowerSpeaking offers several great tips on what to do with your hands during your talk. (Open palms are good.) PowerSpeaking offers well-respected programs designed specifically for polishing executive presentation skills, and they write an excellent blog.

Monday, 25 January 2016

How many students are assigned Hamlet, and how many should be?


Out of a million college classes, how many do you suppose assign Hamlet, and how many should? Professors make important judgments when designing syllabi - yet little is known about what students learn. In Friday's New York Times, members of the Open Syllabus Project describe their effort to open the "curricular black box". As explained in What a Million Syllabuses Can Teach Us, the project seeks to discover what's being assigned.

@OpenSyllabus has ingested information for > 933,000 courses, extracting metadata and making it available online (details are masked to preserve confidentiality). The search engine Syllabus Explorer is now available in beta. 

New metric. With this analysis, the project team is introducing a new "metric based on the frequency with which works are taught, which we call the 'teaching score'." They believe the metric is useful because "teaching captures a very different set of judgments about what is important than publishing does".
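A frequency-based score is easy to mock up: count how often each work appears across syllabi and rescale. The Open Syllabus team doesn't spell out its normalization in the article, so treat this as a sketch of the general idea rather than their actual formula:

```python
from collections import Counter

def teaching_scores(syllabi):
    """Score each work by how often it appears across syllabi, rescaled to 0-100.

    A sketch of a frequency-based metric; the Open Syllabus Project's actual
    normalization may differ.
    """
    counts = Counter(work for syllabus in syllabi for work in set(syllabus))
    top = counts.most_common(1)[0][1]
    return {work: round(100 * n / top) for work, n in counts.items()}

syllabi = [
    ["Hamlet", "The Elements of Style"],
    ["Hamlet", "Republic"],
    ["The Elements of Style"],
]
print(teaching_scores(syllabi))  # Hamlet: 100, The Elements of Style: 100, Republic: 50
```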

For me, this evokes fond memories of my own PhD research: As part of my work I did a citation analysis, wanting to understand the cohesiveness among authorities cited by stakeholders addressing a common problem. Essentially, I wanted to open the "evidentiary black box" and discover how extensively people draw from a common set of scientific findings and policy research.

The evidentiary black box. Rather than ingest as many bibliographies as I could find, I analyzed every bibliography, authority, or reference cited to inform a particular decision - in this case, a US Environmental Protection Agency decision establishing a new set of air pollution regulations. I compared all the evidence cited by commenters on the EPA issue - private companies, environmental groups, government agencies, and other stakeholders. Several hundred commenters submitted several thousand citations from science and policy journals, trade publications, or gray literature. I found almost no overlap; most authorities were cited only once, and the most commonly cited evidence was referenced only five times.
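Mechanically, that overlap analysis boils down to counting how many distinct commenters cite each authority. A sketch with made-up citations, for illustration only:

```python
from collections import Counter

# Hypothetical comment letters (made-up citations, for illustration only)
citations_by_commenter = {
    "Company A":   {"EPA-2009-TSD", "Smith 2010"},
    "Env Group B": {"Jones 2011", "Smith 2010"},
    "Agency C":    {"EPA-2009-TSD"},
    "Company D":   {"Lee 2008"},
}

# How many commenters cite each authority?
overlap = Counter(ref for refs in citations_by_commenter.values() for ref in refs)
print(overlap.most_common())
# Most references appear once; the most common only twice - the kind of thin overlap described above.
```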

How much overlap is good? During my citation analysis, I would have hoped that the public discourse would draw from a common body of knowledge, manifested as citations common to more than a smattering of participants. In contrast, the Syllabus Explorer reveals that many college students are reading the usual suspects: Strunk and White, Hamlet, etc. But how many would be too many, and how much syllabus overlap would be too much? Currently 2,400 classes assign Hamlet out of nearly a million syllabi. We want to foster diverse knowledge by examining a wide range of sources, while also developing common frameworks for understanding the world.