Top science writer George Johnson has started a new column in the New York Times called Raw Data, and the Knight Science Journalism Tracker’s Faye Flam took his first post apart.
Johnson’s topic was raw data indeed: the fact that so many scientific findings turn out not to be true, or at least turn out not to be reproducible. Flam pointed out that, although Johnson seemed to be excoriating science in general, all his examples were from medical research. She notes, “There’s no physics mentioned in Johnson’s column, nor is there any astronomy, chemistry or evolutionary biology.”
She’s certainly right: if Johnson’s topic was the failings of medical science, he should have said so; if it was science more generally, he should have given non-medical examples. Few of his readers, other than those of us professionally interested in science, will know that different fields of science have very different track records.
No question that both clinical and biomedical research are being viewed with alarm by scientists, journals, and, most important, funding agencies. That alarm is deserved: medical research has produced a string of recent examples ranging from methodologically dubious and outright fraudulent results to findings that seem sound at first but that others have been unable to replicate.
Proposed remedies for medical research
There are several suggested fixes, some of them mentioned by Retraction Watch’s Ivan Oransky in his post advocating what he calls a Reproducibility Index. Instead of the Citation Index, he says, judge a paper by how well it stands up to scrutiny.
Bonnie Swoger, at Information Culture, describes some new programs designed to foster reproducibility investigations, but she emphasizes that funding agencies, by and large, would rather underwrite new (and, preferably, dazzling) investigations than spend time and money trying to validate old ones.
Mike the Mad Biologist is gobsmacked on reading a recent Nature paper by Francis Collins and Lawrence Tabak describing the National Institutes of Health’s efforts to “enhance reproducibility.” The paper points out that “Funding agencies often uncritically encourage the overvaluation of research published in high-profile journals.” Mike wants to know, since Collins is the head of NIH (which is in fact a concatenation of 27 different funding agencies), isn’t he, of all people, in a position to do something about this overemphasis on results appearing in top journals?
The failures of social science, especially psychology
Recent research history has also been particularly egregious in the social sciences, especially psychology, and I have written about that several times here at On Science Blogs.
At Not Exactly Rocket Science, Ed Yong has a long post analyzing the Many Labs Project. This was an impressive effort by psychologists to investigate whether a number of classic findings in psychology can be replicated. Most of them appear to hold up: 10 out of 13. That’s more than you might expect, given all the recent bad news coming out of psychology.
Psychologist Rolf Zwaan goes into even more detail on the Many Labs project, concluding that the size of this project holds lessons not just for replication studies, but for research methods themselves. He agrees with the Many Labs authors that “a consortium of laboratories could provide mutual support for each other by conducting similar large-scale investigations on original research questions, not just replications.”
Christian Jarrett, at the British Psychological Society’s Research Digest, describes the “replication recipe” devised by a group of psychologists. It includes obvious but frequently ignored steps such as defining methods as precisely as possible and having a sample size big enough that a project’s results are likely to reflect the real world. Some of these points, such as a detailed (and replicable) description of methods, could, it seems to me, be instituted if journals insisted.
Critiquing a critique of the critiques
Derek Lowe, at In the Pipeline, analyzes a critique of the critiques, a post by Jeff Leek at Simply Statistics. Leek wishes to persuade us that the reproducibility problem is nothing like as awful as the alarmists would have us believe.
Leek’s argument is that the critiques shouldn’t count for much because they aren’t truly scientific, meaning that they mostly contain no data. Lowe acknowledges that it’s true enough that the critiques have not gone through statistical vetting. “But the way that they’re all pointing in the same direction is suggestive. And it’s worth keeping in mind that all of these parties have an interest in the answer being the opposite of what they’re finding – we’d all like for the literature reports of great new targets and breakthroughs to be true.”
Lowe notes that Leek thinks highly of the Many Labs project, in part because it showed that nearly all the scientific papers it investigated held up to scrutiny. That bothers Lowe, and–to bring us full circle–it bothers him for the same reason Faye Flam took exception to George Johnson’s column. “The Many Labs people were trying to replicate results in experimental psychology, and while there’s probably some relevance to the replications problems in biology and chemistry, there are big differences, too. I worry that everything is getting lumped together as Science,” Lowe says.
The state of science in the State of the Union
At ScienceInsider, David Malakoff noted that President Obama didn’t announce any new science policy initiatives in his State of the Union message Tuesday night. But he did set some priorities. Obama continues to support rolling back last year’s mandatory budget cuts, known as sequestration, which had quite an impact on science funding. He also plans to ignore the recalcitrant Congress and use his executive powers–limited as they are–to boost research on reducing greenhouse gases and expanding natural gas production. Malakoff’s piece includes transcripts of parts of the speech related to science.
At SciAm’s Observations, Dina Fine Maron took an additive approach to parsing what Obama’s speech would–or could–mean for science. “[H]e devoted roughly a fifth of his speech to topics including climate change, renewable energy and investing in science and education opportunities. His prepared remarks came in at a word count of 6,778 words.” I couldn’t resist doing the math. Let’s see, that’s 1,355.6 words about science. Or we could throw caution to the winds and round up: 1,356 words.
From the New York Times:
Correction: January 23, 2014
An earlier version of this article misstated the plantings on the building’s green roof. They were native flora, not native fauna.