The response to my TEDx Quran talk in the week since the video of it was released has been magnificent, and the comments here on the AT, on YouTube, and on many other sites, have been immensely moving, humbling, thought-provoking, and inspiring, sometimes all at the same time.
All of which means that I haven’t had much time for my usual reading. So when I finally curled up late last night with last week’s New Yorker, I thought it would just be a relaxing break. Instead, Jonah Lehrer’s densely argued article on scientific research called “The Truth Wears Off” had me sitting straight up in alert attention, and his conclusion left me kind of breathless:
We like to pretend that experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
This article is dynamite. I mean that almost literally, since Lehrer explodes the carefully nurtured image of scientific research as the ultimate arbiter of fact. It demonstrates what he calls “the slipperiness of empiricism.” And it challenges the faith in science – I use the word “faith” advisedly here – of “new atheists” like Sam Harris, Christopher Hitchens, Richard Dawkins, and Daniel Dennett (a foursome I think of as H2D2 for short).
Lehrer focuses on what he calls “the decline effect” – the way much research across a wide range of fields, including psychology, ecology, biology, and medicine, fails the most basic scientific test, which is replicability. If other researchers can produce the same results, the research is valid; if they cannot, it must be assumed flawed. And many, in some fields even most, cannot. Results that initially seem ground-breaking often fade with each replication of the research, and sometimes even turn negative.
“It’s as if our facts were losing their truth,” says Lehrer. As one somewhat depressed biologist told him:
We cannot escape the troubling conclusion that some – perhaps many – cherished generalities are at best exaggerated in their significance, and at worst a collective illusion nurtured by strong a-priori beliefs.
Citing the example of wildly divergent results for the efficacy of acupuncture in the Far East as opposed to Western Europe, Lehrer writes:
This wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
It gets downright scary when he talks to a Stanford epidemiologist who looked at the 49 most cited clinical-research studies in three major medical journals. “Of those that had been subject to replication, 41% had either been directly contradicted or had their effect sizes significantly downgraded.” It gets even worse when the research is in a “fashionable” field like genetic differences in disease risk for men and women. “Out of 432 claims, only a single one was consistently replicable.”
The Stanford epidemiologist suspects “significance chasing,” where “scientists are so eager to pass the magical test of statistical significance that they start playing around with the numbers, trying to find anything that seems worthy.” They don’t do this deliberately, but unconsciously. “Knowing” that something has to be true, they find ways to make it seem so.
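For the statistically curious, the mechanics of significance chasing and the decline effect are easy to see in a toy simulation (mine, not Lehrer’s): run many small, underpowered studies of a weak but real effect, “publish” only the ones that clear a significance cutoff, and then replicate the winners. The numbers below – effect size, sample size, cutoff – are arbitrary choices for illustration.

```python
import random
import statistics

# A toy model of the decline effect: selective publication inflates
# effect sizes, and honest replication deflates them back toward truth.
random.seed(42)

TRUE_EFFECT = 0.2   # a small, real effect (in standard-deviation units)
N = 20              # subjects per study -- deliberately underpowered
STUDIES = 1000
THRESHOLD = 0.44    # rough p < .05 cutoff for the mean of 20 draws
                    # (about 1.96 / sqrt(20))

def run_study():
    """Return the observed mean effect from one small study."""
    return statistics.mean(random.gauss(TRUE_EFFECT, 1.0) for _ in range(N))

# Only studies that clear the "magical test" get published...
published = [e for e in (run_study() for _ in range(STUDIES)) if e > THRESHOLD]
# ...and each published result is then replicated once, with no filter.
replications = [run_study() for _ in published]

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean published effect:   {statistics.mean(published):.2f}")
print(f"mean replication effect: {statistics.mean(replications):.2f}")
```

Every published effect exceeds the cutoff by construction, so the published average lands well above the true 0.2, while the unfiltered replications hover near it. No fraud, no conscious manipulation – just a filter on which results see daylight, and the “truth” wears off on its own.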
“The decline effect” may in fact be a decline of illusion, says Lehrer. The problem is that it’s very hard to let go of illusions:
Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go.
Because, in short, scientists are as human as the rest of us. And the very idea of truth — that absolute ideal of veracity — is as slippery and troubling as ever.