This is the second of a multi-part #SciWriteLabs discussion with Ananyo Bhattacharya, the chief online editor of Nature. For those just tuning in, part one of the discussion with Ananyo is here, previous installments of the #SciWriteLabs series are listed here, and a summary of the copy-/fact-checking debate can be found here.
SM: At the end of the previous post, you mentioned you disagreed with Alistair Dove’s notion that “the unidirectional flow of information from Scientist (source) to Public (sink) is still very much the dominant mode by which new research makes its way into the public arena.” You mentioned the concept of information “webs,” which is a concept sociologists were discussing way back when a web referred to something a spider produced and not the Internet. Can you expand a bit more on how you see information and understanding spreading in an interconnected society?
AB: Actually, I was making the point there that it’s never been unidirectional – or at least not as unidirectional as scientists seem to imagine. There’s been at least one study showing that if a research paper is featured in a news story, it’s much more likely to be cited – an indication that scientists are more strongly influenced by the press than they’d like to admit! And, of course, there’s a feedback process by which press coverage influences research funders directly or indirectly (the GM crops furor in the UK left crop science departments in a bad way, for example).
SM: I also wanted to go back to the MMR issue, which is obviously one I care a lot about. In my book, I came down quite hard on the coverage that followed Wakefield’s 1998 press conference, in which he made claims that went far beyond the conclusions that appeared as part of his initial Lancet study. A lot of the “corrective” reporting that has been done since then has focused on the various revelations that have come out — about Wakefield’s potential fraud, or his taking out a patent for an alternative measles vaccine, or his financial relationship with a lawyer interested in pursuing vaccine-related lawsuits. My feeling is that those correctives should have been beside the point, because Wakefield should never have gotten coverage in the first place: It’s ridiculous to draw broad conclusions from a 12-person case study.
In my mind, the fallout from the MMR scandal highlights the deficiencies of Alistair’s notion that “bad science gets overwritten in time by good science anyway.” In this case, by the time Wakefield’s “bad science” was corrected, MMR vaccine uptake in Great Britain had dropped from around 90 percent to under 80 percent; as a result, measles infections surged and people started to die.
Perhaps ironically, this story also seems to me like a good example of why it can be beneficial for reporters to check their copy with scientists not directly connected to the story at hand. Within the scientific community, there was widespread criticism of Wakefield’s research methods and his conclusions from the outset. If reporters were determined to gin up a controversy — if it bleeds, it leads — then being warned off the story by a responsible/trusted virologist or immunologist might not have had any impact…but I have to assume that at least some of the people responsible for that initial burst of coverage regret the decisions they made during those days.
AB: Absolutely. There were a whole lot of things that went wrong during the MMR scandal – clearly Wakefield also said things at the press conference that no responsible researcher should have said. And yes, if reporters had checked out the story with independent sources, they would rapidly have established that the autism link was highly unlikely, to say the least! There were other problems too – the story was also taken out of the hands of specialist correspondents and given to generalists, for instance – but that particular problem has been discussed to death. You’re spot on when you say that the damage has already been done by the time the science ‘corrects’ the record.
Scientists, however, often dismiss such cases as a ‘few bad apples’ and argue that, overall, the benefits of checking copy with the scientist interviewed (increased accuracy) outweigh the costs (journalistic integrity, rooting out fraud). They often argue that science has a process in place for checking accuracy – peer review – which makes it ‘special.’ (That was the argument used in a Guardian piece, written by three scientists from Cardiff University in Wales, that was in part a response to my initial Guardian post on the matter.) I strongly disagree. The purpose of peer review is not to catch fraud — the data provided has to be taken on trust. It is to check whether the data stack up and support the claims being made in the paper. But many papers that probably shouldn’t get through the process do, and many contain claims that are exaggerated. When the work is checked out with informed, independent sources, the ‘story’ often collapses – something that would not be apparent from simply reading the paper (or talking to the researcher behind it). A significant proportion of stories that we commission get spiked after the editor and reporter realize that the claims made in the paper do not stack up – often, they’re simply not as exciting as the paper would have you believe.