TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.
From vaccinations to climate change, we make decisions every day that implicate us in scientific claims. Are genetically modified crops safe to eat? Do childhood vaccinations cause autism? Is climate change an emergency? In recent years, many of these issues have become politically polarized, with people rejecting scientific evidence that conflicts with their political preferences. When Greta Thunberg, the youthful climate activist, testified in Congress last month, submitting as her testimony the IPCC 1.5° report, she was asked by one member why we should trust the science. She replied, incredulously, “because it’s science!”
For several decades, there has been an extensive and organized campaign intended to generate distrust in science, funded by regulated industries and libertarian think-tanks whose interests and ideologies are threatened by the findings of modern science. In response, scientists have tended to stress the success of science. After all, scientists have been right about most things, from the structure of the universe (the Earth does revolve around the sun, rather than the other way around) to the relativity of time and space (relativistic corrections are needed to make global positioning systems work).
That answer isn’t wrong, but for many people it’s not persuasive. After all, just because scientists more than 400 years ago were right about the structure of the solar system doesn’t prove that a different group of scientists is right about a different issue today.
An alternative answer to the question—Why trust science?—is that scientists use “the scientific method.” If you’ve got a high school science textbook lying around the house, you’ll probably find that answer in it. But this answer is wrong. What is typically asserted to be the scientific method—develop a hypothesis, then design an experiment to test it—isn’t what scientists actually do. Historians of science have shown that scientists use many different methods, and these methods have changed with time. Science is dynamic: new methods get invented, old ones get abandoned, and at any particular juncture scientists can be found doing many different things. And that’s a good thing, because the so-called scientific method doesn’t work. False theories can yield true results, so even if an experiment works, it doesn’t prove that the theory it was designed to test is true. There also might be many different theories that could yield that same experimental result. Conversely, if the experiment fails, it doesn’t prove the theory is wrong; it could be that the experiment was badly designed or there was a fault in one of the instruments.
If there is no identifiable scientific method, then what is the warrant for trust in science? How can we justify using scientific knowledge—as Greta Thunberg and many others insist that we must—in making difficult personal and public decisions?
The answer is not the methods by which scientists generate claims, but the methods by which those claims are evaluated. The common element in modern science, regardless of the specific field or the particular methods being used, is the critical scrutiny of claims. It’s this process—of tough, sustained scrutiny—that works to ensure that faulty claims are rejected and that accepted claims are likely to be right.
A scientific claim is never accepted as true until it has gone through a lengthy process of examination by fellow scientists. This process begins informally, as scientists discuss their data and preliminary conclusions with their colleagues, their post-docs and their graduate students. Then the claim is shopped around at specialist conferences and workshops. This may result in the scientist collecting additional data or revising the preliminary interpretation; sometimes it leads to more radical revision, like redesigning the data collection program or scrapping the study altogether if it begins to look like a lost cause. If things are looking solid, then the scientist writes up the results. At this stage, there’s often another round of feedback, as the preliminary write-up is sent to colleagues for comment.
Until this point, scientific feedback is typically fairly friendly. But the next step is different: once the paper seems ready, it is submitted to a scientific journal, where things get a whole lot tougher. Editors deliberately send scientific papers to people who are not friends or colleagues of the authors, and the job of the reviewer is to find errors or other inadequacies in the paper. We call this process “peer-review” because the reviewers are scientific peers—experts in the same field—but they act in the role of a superior who has both the right and the obligation to find fault. Reviewers can be pretty harsh, so scientists need to be thick-skinned and accept criticism without taking it personally. (Editors sometimes weigh in too, and often their contributions are not all that nice, either.) It is only after the reviewers and the editor are satisfied that recognizable errors and inadequacies have been fixed that the paper is accepted for publication and enters into the body of “science.” Even then, the story is not over, because if serious errors are detected after publication, journals may issue errata or even retractions.
Why do scientists put up with this difficult and sometimes nasty process? Many don’t; a lot of people drop out along the way and move into other professions. But those who persist can see how it improves the quality of their work. The philosopher Helen Longino has called this process of critical scrutiny transformative interrogation: interrogation, because it’s tough, and transformative because over time our understanding of the natural world is transformed.
A key aspect of scientific judgment is that it is not done individually; it is done collectively. It’s a cliché that two heads are better than one: in modern science, no claim gets accepted until it has been vetted by dozens, if not hundreds, of heads. In areas that have been contested, like climate science and vaccine safety, it’s thousands. This is why we are generally justified in not worrying too much if a single individual scientist, even a very famous one, dissents from the consensus. There are many reasons why an individual might dissent: he might be disappointed that his own theory didn’t work out, bear a personal grudge, or have an ideological ax to grind. She might be stuck on a detail that just doesn’t change the big picture, or enjoy the attention she gets for promoting a contrarian view. Or he might be an industry shill. The odds that the lone dissenter is right, and everyone else is wrong, are not zero, but so long as there has been adequate opportunity for the full vetting of his and everyone else’s claims, they are, in most cases, probably close to zero. This is why diversity in science is important: the more people looking at a claim from different angles, the more likely they are to identify errors and blind spots. It’s also why we should have a healthy skepticism towards brand-new claims: it takes years or sometimes decades for this process to unfold.
In a way, science is like a trial, in which both sides get to ask tough questions in hope that the truth becomes clear, and it is the jury that makes that call. But there are several differences between science and the law. One is that the jury is not made up of common citizens, but of experts who have the specialized training required to evaluate technical claims. Technical expertise is highly specific, which is why geologists are not called on to judge vaccine safety. (Indeed, it should be a red flag when we see scientists pontificating on subjects outside their expertise.) This highlights a second difference: in science, there is no presiding judge. The judges are all the other members of the expert community; we accept something as true when the expert community comes to a consensus that it is true. A third difference is that in science there is double jeopardy (or even triple or quadruple…); there is always the possibility of re-opening the case on the basis of new evidence.
Does this process ever go wrong? Of course. Scientists are human. But if we look carefully at historical cases where science went awry, typically there was no consensus. Eugenics is a case in point. The novelist Michael Crichton argued that because the scientific consensus on eugenics turned out to be mistaken, we should not trust the consensus on climate change. But his premise was faulty (as well as his logic): there wasn’t a consensus on eugenics. Many scientists objected, in particular socialist geneticists who flagged the obvious class bias in eugenic theory and practice.
Some people argue that we should not trust science, because scientists are “always changing their minds.” While examples of truly settled science being overturned are actually rather rare—far fewer than is sometimes claimed—they do exist. But the beauty of this scientific process is that it explains what might otherwise appear paradoxical: that science produces both novelty and stability. New observations, ideas, interpretations, and attempts to reconcile competing claims introduce novelty; transformative interrogation leads to collective decisions and the stability of a good deal of scientific knowledge. Scientists do sometimes change their minds in the face of new evidence, but this is to their credit: it is a strength of science, not a weakness, that scientists continue to learn and to be open to new ways of thinking about old problems. The fact that we may learn new things in the future does not mean that we should throw away what hard-earned knowledge we have now.
Modern society relies on trust in experts, be they dentists, plumbers, car mechanics, or professors. If trust were to come to a halt, society would come to a halt, too. Like all people, scientists make mistakes, but they have knowledge and skills that make them useful to the rest of us. They can do things that we can’t. And just as we wouldn’t go to a plumber to fix our teeth or a dentist to fix our car, we shouldn’t go to actresses or politicians, much less industries with a vested interest or ideologically-driven think-tanks, for answers to scientific questions. If we need scientific information, we should go to the scientists who have dedicated their lives to learning about the matters at stake. On scientific matters, we should trust science.