Sometimes Science Is Wrong



In 1996 scientists announced the astonishing news that they’d discovered what they believed might be signs of ancient life inside a meteorite from Mars. In 2014 astrophysicists declared that they’d found direct evidence at last for the “inflationary universe” theory, first proposed in the 1980s.

What these assertions had in common was that they were based on research by highly qualified, credentialed scientists—and also that the “discoveries” turned out to be wrong. Today essentially nobody thinks the meteorite contained persuasive evidence of ancient Martian life, or that the astrophysicists had found anything more exciting than dust in the Milky Way.

This sort of backtracking isn’t unusual. In part, it happens because scientists almost always have to revise cutting-edge research, or even retract it, as the scientific community tries to replicate it and fails, or as more and better evidence comes in.

The problem science journalists face is that this process is fundamentally at odds with how news coverage works, which can be confusing to readers. In most areas—politics, international relations, business, sports—the newest thing journalists report is almost always the most definitive. The Supreme Court heard arguments on Mississippi’s challenge to Roe v. Wade; pitcher Max Scherzer signed a three-year, $130-million contract with the Mets; Facebook rebranded its parent company as “Meta.” All of these are indisputably true. And when the court issues its ruling next year, or if Scherzer is injured and can’t play, or if Facebook re-rebrands itself, that won’t make these stories incorrect; they’ll just be out of date.

But in scientific research, the newest thing is often the least definitive. We have seen this over and over with COVID: science reported, then revised, as more information comes in.

The newest thing is often just a first step toward answering a deeper question—and sometimes it’s a misstep that won’t be identified until months or years later. Sometimes, as may have been the case with “cold fusion” back in the 1980s, it’s self-delusion on the part of the scientists. Other times, as in the case of a front-page New York Times story about a potential cancer cure, the writing is so breathless that readers fail to notice the caveats.

The same goes for particles that seemed to travel faster than the speed of light—something the scientists themselves said was almost certainly some kind of mistake, but which reporters couldn’t resist running with (it turned out to be a false reading caused by a loose cable). Sometimes, as with the Mars meteorite, the breathless coverage is driven by a powerful publicity campaign—in this case, by NASA. And sometimes, as prosecutors argued in the trial of Elizabeth Holmes, founder of Theranos, it’s just plain fraud.

But even when the research is published in a major, peer-reviewed scientific journal, it can still turn out to be wrong, no matter how carefully it’s done. Science journalists know this, which is why we include caveats in our reporting.

But we can’t go overboard in emphasizing the caveats, crucial as they are, because that’s just not how news is done. I once suggested to an editor at Time magazine that I lead a story about an Alzheimer’s drug that looked promising in mice: “In a discovery that will almost certainly have no impact whatever on human health, scientists announced today….” He looked at me, aghast. It was true, since most drugs that work in mice fail in humans—but he argued, correctly, that nobody would read past the first sentence if I wrote it that way. The drug could have an impact, so I couldn’t, and didn’t, start the story that way. These days, we tend to avoid mouse-research stories altogether, for that very reason.

But if you put the excitement first and the caveats further down, readers are likely to see the latter as merely dutiful. It can be like the “results not typical” disclaimers that appear in ads trumpeting the amazing success of weight-loss products. In principle, readers or viewers are supposed to take serious note—but how many do?

And on a larger scale, a science discovery that makes headlines when it’s first announced is almost certainly not going to make headlines when the debunking eventually happens weeks or months later. Again, that’s just the way it works: “Scientists Find Amazing Thing” is big news. “Scientists Find that the Thing They Thought Was Amazing Is Not Amazing” is less likely to be framed that way—even though it should be. As a result, I still run into people who think we found evidence of ancient bacteria on Mars more than two decades ago.

That being said, some science-related reporting can be end-of-the-line factual: a powerful tsunami kills hundreds of thousands in South and Southeast Asia; the space shuttle Challenger is destroyed shortly after launch; scientists publish the first draft of the human genome; President Biden announces a travel ban to try to slow the spread of the Omicron variant of the coronavirus. All of these were factual events whose occurrence didn’t need independent scientific confirmation, even though the science behind them often was examined and confirmed in follow-up stories.

A decade ago, John Rennie, a former editor-in-chief of Scientific American, made a startling proposal. Writing in the Guardian, he suggested that science journalists agree to wait six months before they report on new research results. His point was that it takes time for cutting-edge science to be digested and evaluated by the scientific community, and that what looks like a game-changer at first can turn out, on reflection, to be less than meets the eye—or even just plain wrong.

Rennie knew this would never actually happen, of course; it would violate the quasi-sacred notion that new, potentially important information shouldn’t be withheld from the public—and journalists being a highly competitive lot, someone would inevitably publish long before the six months were up anyway. And in cases where lives are potentially at stake, as with the Omicron variant, waiting makes even less sense. The worst-case scenario might never materialize, just as it didn’t in the great swine flu nonepidemic of 1976, but ignoring a potential threat before we fully understand it is a very risky idea, and one that hasn’t served our global pandemic response very well.

But, still, Rennie had a point.
