The effects of churnalism on health care news & the public


Photo: “lung” by roger_mommaerts, http://www.flickr.com/photos/rmommaerts/131138186/

The spectrum of health care news quality is broad – from Pulitzers to putrid.  It soars to peaks of excellence in in-depth investigative projects and in daily stories that still find time to evaluate evidence. But it also plummets to simplistic, slipshod stenography that quotes single, conflicted sources who exaggerate benefits while ignoring harms. Journalism churns out a daily flood of health news – usually with more of the latter than the former.

The term “churnalism” is now used to describe “a form of journalism in which press releases, wire stories and other forms of pre-packaged material are used to create articles in newspapers and other news media in order to meet increasing pressures of time and cost without undertaking further research or checking.”

Respondents to a 2009 survey of health care journalists ranked the top challenges they face: bottom-line pressures hurting quality, newsroom cutbacks, lack of time or space for in-depth coverage, lack of experienced reporters, and the influence of advertising and industry.

Good data now exist on the news that results – data from a strong, growing international movement of “watchdog” projects.

International health care news watchdogs

Our HealthNewsReview.org in the U.S. is now five years old and has reviewed more than 1,500 stories. It is the second oldest of six nearly identical projects and two somewhat comparable “watchdog” websites that track the performance of journalists who report on claims about health care interventions. HealthNewsReview.org uses the same basic review criteria as used in Media Doctor websites in Australia (the oldest), Canada, Hong Kong, Germany, and Japan. The NHS Choices “Behind the Headlines” project in the UK and the Knight Science Journalism Tracker in the U.S. (which covers more than health and medicine) don’t apply standardized criteria but publish daily critiques and commentary on health news coverage.

Our criteria assess whether the story:

  • quantified benefits (including the use of absolute, not just relative, risk reduction data);
  • quantified harms (again, with an expectation of absolute data);
  • evaluated the quality of the evidence;
  • discussed the costs of the intervention;
  • compared the new idea with existing alternatives;
  • used independent sources and addressed potential conflicts of interest;
  • avoided disease-mongering;
  • established the true novelty of the approach;
  • established the availability of the approach;
  • avoided relying solely or largely on a news release.

Three reviewers – from a team of about 30 – analyze each story. The team includes 16 MDs, 7 people with various master’s degrees, 2 with PhDs, 6 journalists, and 2 breast cancer survivors.  Consensus is reached on each review.  I’m the third reviewer and adjudicate any differences between the first two reviewers.  A brief description of how the criteria are applied appears on the website.

We send an email to someone from the publishing news organization whenever one of their stories is reviewed.

Report card

The three criteria I think are most important are the three weakest areas of performance.  About 70% of all the stories we’ve reviewed get unsatisfactory scores on discussing costs, quantifying harms and benefits – often minimizing potential harms while exaggerating potential benefits.  That’s a kid-in-the-candy-store portrayal of US health care.

Princeton health economist Uwe Reinhardt asks, “Where has civic education failed on health policy issues?”  I think the putrid end of the health care journalism spectrum has to take a big chunk of the blame.

How have we seen churnalism at play? The percentage of stories that relied on news releases jumped from 3.6% (4 stories) of all stories reviewed in the first six months of 2009 to 15.2% (25 stories) in the same period of 2010. We also found that 80% of the stories that relied on a news release in the first half of 2010 (20 of 25) were published by just two news organizations – HealthDay and WebMD.

Special concerns about imbalance in screening stories

We have observed particular problems with the way journalists often report on screening tests.  Many stories present the opinions of screening test advocates without exploring the evidence for benefits and harms in screening decisions.   Some examples from a recent six-month period touched on screenings for various cancers (breast, colon, lung, pancreas, prostate), and on screening tests for Alzheimer’s disease and cardiovascular disease.

  • A HealthDay wire service story didn’t challenge a researcher’s statement about “why everyone should be screened for pancreatic cancer” based on a study of seven patients’ tissue.
  • A Saint Paul Pioneer Press newspaper story promoted medical centers’ “Mingle & Mammogram” parties for women starting in their 40s without exploring the evidence for why the U.S. Preventive Services Task Force expressed concerns about mammograms in that age group.
  • A Prevention magazine special report – “7 Tests You’re Not Having That Could Save Your Life” – promoted several cardiovascular screening tests without any evaluation of the evidence and told the stories of three women who had heart attacks in their 40s, listing for each the “screenings she should’ve had.”
  • A New York Times story reported that a spinal fluid test could be a “100% accurate” screen as an early warning on Alzheimer’s disease.
  • Stories by the New York Times, the Philadelphia Inquirer, HealthDay and WebMD were each flawed in reporting claims about a new colon cancer DNA screening test.  Among the flaws: input only from conflicted sources, failure to evaluate the evidence, emphasis on benefits while minimizing harms, and failure to report on competing research or to give any meaningful, data-backed comparison with existing stool blood tests.
  • ABC, CBS and NBC television networks reported on the National Lung Screening Trial.  CBS’ reporter stated boldly, “the new study suggests the benefit of finding lung cancer early trumps the risks.” NBC’s reporter didn’t interview anyone who offered caveats and made an even bolder statement: “this has been a huge controversy for years but researchers resolved it.” But ABC’s report was the least balanced, calling it a “breakthrough,” using the word “cure” and featuring one man – put forward by the CT scans’ leading proponent – who it said was “one of the lives …saved.” Seven months later, when the study was back in the news, ABC’s coverage was still the worst we saw.
  • A Reuters wire service story reported that “men with long index fingers have a lower risk of prostate cancer… a finding that could be used to help select those who need regular screening for the disease.”

Combined, these news organizations reach millions of people.

Is health journalism improving?

We can’t make sweeping judgments about quality improvement or decline because there are too many journalists writing health news stories in too many settings. As extensive as our database is, it still represents only a sampling of all health news stories reported in the U.S. in the past five years.

The following table provides a comparison between the cumulative scores of all stories from our first full year of operation (2006) and our most recent full year (2010).   The five criteria listed have consistently received the weakest scores.  It is worth noting that these small improvements took place during increasingly difficult financial times in many news organizations, with reduced budgets and staffs.

Comparing story scores from 2006 with those of 2010 on 5 criteria:

Criterion        % satisfactory in 2006 (n = 252)    % satisfactory in 2010 (n = 419)
Costs            26%                                  27%
Harms            29%                                  35%
Benefits         26%                                  38%
Evidence         34%                                  42%
Alternatives     40%                                  46%

Anecdotes from journalists suggest some impact.

  • A manager reviewed how stories were rated by HealthNewsReview.org during an employee’s performance review.
  • A corporate board of directors complained to company management that the company was losing subscribing clients to a competitor because of weak ratings by HealthNewsReview.org.
  • New employees and interns at one national news organization are told to study HealthNewsReview.org criteria as part of their orientation.
  • A journalist at one television network requested a supply of our computer mousepads that list our ten criteria so that he could distribute them to staff.  This request came from one of the networks we stopped reviewing in 2010 because of poor performance.
  • One news organization now uses boilerplate language at the end of stories based on data presented at conferences: “The findings should be considered preliminary as they have not yet undergone the ‘peer review’ process, in which outside experts scrutinize the data prior to publication in a medical journal.”

What needs to be done

Improvements could be made at every step in the food chain of how health care news stories are produced and brought to market.

What journals could do

Risk communication specialist Gerd Gigerenzer wrote in the BMJ, “Reporting relative risks without baseline risk is practised not only by journalists because big numbers make better headlines or by health organisations because they increase screening participation rates. The source seems to be medical journals, from which figures spread to press releases, health pamphlets, and the media.”

Medical journals contribute to this problem but could help solve it.  Journals could require that, in appropriate studies, submissions include absolute risk reduction statistics.  They could urge researchers to cite the number needed to treat or to screen – in order to help public comprehension.  In so doing, they would educate the journalists who report on published studies on the importance of this kind of transparent communication of risk-benefit data.  In turn, journalists would be better prepared to explain to news consumers the importance of understanding a few simple statistical concepts and why they matter.
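A hypothetical example – the numbers are purely illustrative, not drawn from any study – shows why the distinction matters. Suppose a screening test lowers the risk of dying from a disease over ten years from 2 in 1,000 to 1 in 1,000:

  • Relative risk reduction: (2 – 1) ÷ 2 = 50% – the figure most likely to make a headline.
  • Absolute risk reduction: 2 in 1,000 minus 1 in 1,000 = 1 in 1,000, or 0.1 percentage points.
  • Number needed to screen: 1 ÷ 0.001 = 1,000 people screened for ten years to prevent one death.

A story that reports only the 50% figure leaves readers with no way to gauge how small the underlying benefit is, or to weigh it against costs and harms.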

What news release writers could do

We challenge those who write news releases – for journals, for academic medical centers, for industry – to include our 10 criteria at the end of their news releases.  It would be a reminder to journalists of the importance of accurate, balanced and complete coverage and how to achieve it.

What journalists could do

Reporters need to evaluate the quality of the evidence for claims made by people promoting ideas.  They need to sharpen their healthy skepticism.  The public expects journalists to independently vet claims, not to plagiarize and parrot news releases from vested interests.

Editors should respect the wisdom and experience that their specialized health care journalists possess. Important details and caveats should not be cut from stories.  Reporters often complain (see pp. 3-4 of the report) that their editors don’t know as much as they should about health care.

All journalists – reporters and editors – should consider the harm they can cause – unintentional though it may be – when they report on claims of efficacy or safety in health care interventions in an imbalanced way.

News management should prioritize training for health, medical and science journalists. The conclusion to a 2009 report (pp. 5, 16-17) on the state of health journalism in the U.S. addressed training needs and opportunities.

What consumers could do

News consumers need to tell news organizations when they are dissatisfied with health care news coverage.

Otherwise, the cycle is likely to continue, with the “churnalism” machine still churning out biased, inflated claims from conflicted sources with no independent perspectives provided about the tradeoffs involved in any health care decision.

Journalism can achieve so much good. Or it can, and does, cause harm.


Photo via Flickr / roger_mommaerts


Guest Blogger Profile: GARY SCHWITZER has specialized in health care journalism for 37 years.  He is Publisher of HealthNewsReview.org – an award-winning site that grades health news.   For 9 years he taught health journalism and media ethics at the University of Minnesota. Gary worked in television news for 15 years – in Milwaukee and Dallas, and at CNN. He was founding editor-in-chief of MayoClinic.com.  The Kaiser Family Foundation published his 2009 report, “The State of U.S. Health Journalism.”  He wrote the Association of Health Care Journalists’ guide for reporting on studies and the group’s Statement of Principles.  One competition named his blog “Best Medical Blog” of 2009.


