For the third time here at On Science Blogs, fallout from the fraudulent Science paper about the ease of changing opposition to gay marriage. The commentary now has moved on from that particular paper to the shakiness of social science research in general–and the shakiness of scientific research in general too.
Lo, the liberal conspiracy
First, some fun, eviscerating the Wall Street Journal editorial charging that Science published the paper because it fed liberal biases. To begin with, the editorial writer(s) (or the WSJ research staff) couldn’t seem to get the facts right.
For example, David Broockman and Josh Kalla, the Berkeley grad students who brought down the paper’s first author Michael LaCour, were not seeking to replicate the work. They were studying it because they wanted to do similar research of their own. That difference strikes me as significant: The fraud was revealed not by competitors who wanted to tear LaCour’s work apart but by friendlies who wanted to extend it. Discovering that the paper’s data were fake was a kind of accident.
Jesse Singal has an extended takedown of the WSJ editorial that describes other errors at New York Magazine’s Science of Us blog. But his main point is that the idea of a liberal conspiracy in the social sciences is just silly. “If the social sciences ‘often seem to exist’ to promote research suggesting, for example, that people can be talked out of their conservative views, then why are there so many reams of studies showing basically the opposite? What kind of half-assed academic conspiracy would allow in so much disconfirming evidence?”
Singal has jumped on this tale with both feet and several posts, which don’t seem to have been collected under a single URL. Pity. The centerpiece is his lengthy tick-tock explaining just how the fraud was, reluctantly, unearthed.
At The Passive Habit, Cameron says, “Singal effectively rebutted the Journal’s editorial, though his answer to the bigger question about the credibility of social science wasn’t as convincing.” Cameron argues, and I agree, that the social sciences do possess a liberal bias. But it’s not a conspiracy. It’s human nature, a byproduct of liberal leanings. And, I would add, a political leaning not only among social scientists, but among all scientists.
Social science methodology sucks
On Science Blogs has had a lot to say over the years about the particular maladies that infect social science research. They include in-your-face fraud but also, and far more pervasive, methodology that just sucks. For example, a reason the literature on nutrition is so useless is that it has depended on self-reports. We cannot be trusted to remember accurately what we ate even this morning, let alone a week ago. Also, human nature once again, we lie about what we eat.
In a post at Newton Blog headed “Why Everything We ‘Know’ About Diet and Nutrition Is Wrong,” Ross Pomeroy makes that point and thinks it may explain why, as he says, nutrition research is awash in woo. “Sure, the scientific literature on nutrition is bulging with studies, but at the same time, it’s watered-down with weak, meaningless information. Perhaps that’s why nutrition has become rife with hucksterism.” Pomeroy believes one solution is randomized controlled trials, but as we’ll see in a moment, the clinical trial may be part of the problem, not a solution.
Shortcomings like these are one more factor in the long-term efforts in Congress to cut funding for political and other social sciences. Efforts that have succeeded. The House recently voted to raise the National Science Foundation’s budget–but cut its funding for social science by 45%. Social science is only a small percentage of NSF’s budget. Still, the message is clear.
In a Monkey Cage post, John Sides describes these efforts and argues that to the extent social science can inform policy choices about people’s lives, it’s a valuable resource and should be funded. Of course politicians are not trying to defund social science because they are horrified by misconduct and/or less-than-rigorous methodology. But it’s surely a convenient bludgeon.
Malfeasance outside of social science
There’s plenty of malfeasance in the harder sciences too, of course. Misconduct damages people’s faith in science, but it does worse, too. Ian Roberts of the London School of Hygiene & Tropical Medicine participates in the Cochrane Reviews, systematic reviews and meta-analyses of primary research in human health care and health policy. He is dismayed at the state of clinical trial data. As he wrote at The Conversation, “health research scandals put the health of millions of patients around the world in jeopardy.”
That’s partly because journal editors and reviewers simply take data in a paper on trust, as they did in the case of the gay marriage study. Just one example Roberts cites: a Cochrane review showing that a sugar solution prevents death after head injury was retracted “after our review editors were unable to confirm that any of the included trials took place.” And even when published trials are genuine, they are a biased sample. Trials showing no effects, good or bad, are rarely published.
Even if cases of out-and-out fabrication turn out to be rare, there’s still plenty of junk science, much of it due to accident or error. For example, cell lines are sometimes not what they’re supposed to be. Richard Harris observed recently at Shots, “A widely used cell labeled as breast cancer is actually a melanoma cell, it was recently discovered, and there are hundreds of similar examples.”
One recent study estimated that more than half of preclinical lab research cannot be reproduced, and the researchers put the annual cost at $28 billion. A critic pointed out to Harris that the inability to reproduce a study doesn’t necessarily mean it’s a bad study. It could mean, for example, that the Methods section was so poorly written that other scientists can’t repeat the study exactly.
I don’t find that hugely comforting.
And now, some optimism
Will all we are learning about the (un)trustworthiness of science lead to change? According to Retraction Watch’s Adam Marcus and Ivan Oransky, it already has. Writing at The Verge, they point out that the scientific community listened when two mere grad students pointed out the statistical weirdnesses of the gay marriage paper. There’s “growing recognition among science journals that the tools of statistics represent an effective defense against fraud.”
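The kind of statistical check Marcus and Oransky have in mind can be surprisingly simple. One of the irregularities Broockman and Kalla flagged was implausibly high test-retest reliability: respondents’ answers across survey waves tracked each other far more tightly than real people ever manage. Here’s a toy sketch of that idea (my own illustration, not their actual analysis), comparing simulated honest panel data against data “fabricated” by copying wave 1 with a trivial perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Honest panel data: wave-2 answers are wave-1 answers plus genuine
# opinion change and measurement noise (standard deviations are made up).
wave1 = rng.normal(50, 20, n)
real_wave2 = wave1 + rng.normal(0, 10, n)

# "Fabricated" wave 2: wave 1 pasted over with only a tiny perturbation.
fake_wave2 = wave1 + rng.normal(0, 0.5, n)

def retest_r(a, b):
    """Test-retest correlation between two survey waves."""
    return np.corrcoef(a, b)[0, 1]

print(f"real data:  r = {retest_r(wave1, real_wave2):.3f}")  # high, but plausible
print(f"faked data: r = {retest_r(wave1, fake_wave2):.3f}")  # suspiciously near 1.0
```

The point isn’t that any one threshold proves fraud; it’s that a number like a correlation coefficient gives reviewers something concrete to question, which is exactly what the two grad students did.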
Of course it’s not practical to vet statistics in each of the two million papers published every year. But Marcus and Oransky are bullish about post-publication peer review, citing a number of efforts.
For example, they advise checking out PubPeer, the “journal club” that carries out peer review after the fact and in public. The comments tend to the technical, of course, but they say the result has been published corrections and even retractions.
I haven’t tried it out, but the site says it has a browser extension that permits PubPeer comments to appear on PubMed and journal sites. Linking the critiques to the paper that provoked them would be ideal, but it’s hard to imagine journals agreeing to live with it.
Marcus and Oransky conclude that relatively simple data analysis is a robust tool for weeding out fraud. Not simple for everybody, obviously, including many journalists. But, as they say, “bring on the geeks!”