
Why Scientists Should Celebrate Failed Experiments


Reporters hate facts that are, as the phrase in the industry goes, too good to check. The too-good-to-check fact is the funny or ironic or otherwise delicious detail that just ignites a story and that, if it turns out not to be true, would leave the whole narrative poorer for its absence. It must be checked anyway, of course, and if it doesn’t hold up it has to be cut—with regrets maybe, but cut all the same.

Social scientists face something even more challenging. They develop an intriguing hypothesis, devise a study to test it, assemble a sample group, then run the experiment. If the hypothesis is confirmed, off goes the paper to the most prestigious journal they can think of. But what if it isn’t? Suppose the answer to a thought-provoking question like, “Do toddlers whose parents watch football or other violent sports become more physically aggressive?” turns out to be simply, “Nope.”

Do you still try to publish these so-called null results? Do you even go to the bother of writing them up—an exceedingly slow and painstaking process regardless of what the findings are? Or do you just go on to something else, assuming that no one’s going to be interested in a cool idea that turns out not to be true?

That’s a question that plagues whole fields of science, raising the specter of what’s known as publication bias—scientists self-censoring so that they effectively pick and choose what sees print and what doesn’t. There’s nothing fraudulent or unethical about dropping an experiment that doesn’t work out as you thought it would, but it does come at a cost. Null results, after all, are still results, and once they’re in the literature, they help other researchers avoid experimental avenues that have already proven to be dead ends. Now a new study in the journal Science, conducted by a team of researchers at Stanford University, shows that publication bias in the social sciences may be more widespread than anyone knew.

The investigators looked at 221 studies conducted from 2002 to 2012 and made available to them by a research collective known as TESS (Time-sharing Experiments for the Social Sciences), a National Science Foundation program that makes it easier for researchers to assemble a nationally representative sample group. The best thing about TESS, at least for studies of publication bias, is that the complete history of every experiment is available and searchable, whether or not it was ever published.

When the Stanford investigators reviewed the papers, they found just what they suspected—and feared. Roughly 50% of the 221 studies wound up seeing publication, but only about 20% of the studies with null results were published, compared with roughly 60% of those with strong positive results and 50% of those with mixed results. Worse, one of the reasons so few null results ever saw print is that a significant majority of them, 65%, were never even written up in the first place.

The Stanford investigators went one more—very illuminating—step and contacted as many of the researchers behind the null studies as they could via e-mail, asking why they had never written up or published the work. Among the answers: “The unfortunate reality of the publishing world [is] that null effects do not tell a clear story.” There was also: “We determined that there was nothing there that we could publish in a professional journal” and “[the study] was mostly a disappointing wash.” Added one especially baleful scientist: “[The] data were buried in the graveyard of statistical findings.” Among all of the explanations, however, the most telling—if least colorful—was this: “The hypotheses of the study were not confirmed.”

That, all by itself, lays bare the misguided thinking behind publication bias. No less a researcher than Jonas Salk once argued to his lab staff that there is no such thing as a failed experiment, because learning what doesn’t work is a necessary step to learning what does. Salk, history showed, did pretty well for himself. Social scientists—disappointed though they may sometimes be—might want to follow his lead.


Write to Jeffrey Kluger at jeffrey.kluger@time.com