Note: This installment of #SciWriteLabs was initially going to be a three-way conversation between me, Nature's Ananyo Bhattacharya, and Alistair (not TWiV co-host and #SciWriteLabs 3 vet Alan) Dove, a scientist at the Atlanta Aquarium. (Ananyo and Al had figured prominently in an earlier phase of this ongoing discussion.) Unfortunately, life got in the way: Ananyo had to travel out of the country, Al had to attend a conference and the inaugural Deep Sea News retreat, and I got hit by a car while on my Vespa. I'm posting the first back-and-forth between Al and me; hopefully Al and Ananyo will both be able to jump in over the next few days.
SM: Al, I’d like to pivot off of the exchange you and Ananyo had and talk a little bit about the difference between science communication and science journalism. Without being overly didactic (or retreading ground I covered in this summary post): You’re a working scientist who confronted a science journalist on Twitter about a piece he’d written and eventually wrote up your exchange for a blog post. That seems to belie what feels to me like an underlying assumption in the debate that’s ensued: That there’s a uni-directional flow of information from scientists to journalists and from journalists to the public. Are we arguing about the rulebook for a game that’s no longer being played?
AD: In short, no. I think the unidirectional flow of information from Scientist (source) to Public (sink) is still very much the dominant mode by which new research makes its way into the public arena. More conversational exchanges of the sort facilitated by new media tools are still relatively rare (unless I'm missing out on a lot). I suspect that's the direction we're headed, as we saw with Rosie Redfield's role in the "arsenic life" story out of JPL earlier this year, but my guess is that the overwhelming majority of print/pixel inches still consist of that unidirectional flow. What surprised me in my exchanges with Ananyo (and Martin Robbins, and Ed Yong) about this process was the strength of their conviction that it is not a journalist's job to participate in that process, at least not without adding a dimension of critical evaluation and context along the way. They argued that to do it uncritically is either "churnalism" or "science communication" rather than science journalism.
I argued that adding context is fine, but that too much effort spent being critical of new science is probably a pointless opportunity cost, because bad science gets overwritten in time by good science anyway. I struggle to think of situations where huge shifts in scientific thinking have come about because of critical evaluation by journalists; they come about as they always have, through careful and consistent application of the scientific method. Even if the rules of science journalism change because of new communication tools, I don't think that will change.
SM: That's interesting, and crystallizes some of my thoughts on the subject. I disagree pretty strongly with the idea that "bad science gets overwritten by good science" in due time, and that because of that, journalists shouldn't sweat the specifics of the various researchers' claims. I actually think that's a dangerous way to think about the issue — and there's a fair amount of evidence that the notion that correcting the record also corrects people's misconceptions is, in fact, false. This is one of the major themes of The Panic Virus, my book about the autism/vaccine controversies: By writing about a shoddy study claiming a potential link between the MMR vaccine and autism, the media played a huge role — I'd argue the dominant role — in creating a health care crisis that continues to reverberate today, almost 14 years after that study was published. (This excerpt in The Daily Beast talks a little bit about that initial burst of coverage.) In the years since then, that study has been debunked (and ultimately retracted) and the lead researcher was stripped of his medical license and accused of fraud…and still the theory has its adherents. When bad information is injected into the public domain, you can't just erase it with follow-up stories.
About four years ago, researchers based at the University of Michigan published a paper titled "Metacognitive Experiences and the Intricacies of Setting People Straight: Implications for Debiasing and Public Information Campaigns" (PDF Link) that looked at this reality. In the study, subjects were presented with a pamphlet titled "Flu Vaccine: Facts and Myths." The flier listed common perceptions about the vaccine and labeled each as either true or false (e.g., "The side effects are worse than the flu: FALSE"). After studying the sheet, subjects were quizzed on its contents. In the first round of questioning, conducted only a few minutes after the pamphlet was taken away, people generally fared well at separating the myths from the facts. But as more time elapsed, subjects were less and less able to correctly identify the myths, although they were still able to remember the contents of the pamphlet itself — and, since they'd been told that the information came from a credible source, they began to assume that all the statements they'd heard were likely true.
I'm sure there are readers who will come up with examples (and counter-examples) of cases where shifts in scientific thinking have resulted from critical evaluation. But I think that misses the point: the biggest risk of the type of credulous reporting you seem to be advocating for is the popular dissemination of shoddy science…which can have enormous, long-lasting repercussions.
This work, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.