If you’re like an unlucky 5% of American adults, you’ll visit a doctor with a medical complaint this year, only to be misdiagnosed and, often, misprescribed a treatment. If you’re like a far less lucky one in 100,000 hospitalized Americans, such a misdiagnosis will cost you your life. There are a lot of reasons for medical errors: inexperienced caregivers; ambiguous symptoms; understaffed hospitals; underlying conditions.
But according to a new study published in the Proceedings of the National Academy of Sciences (PNAS), there’s one more little-considered cause: doctors working alone, often with too little opportunity to think and rethink the case. By increasing the number of physicians weighing in on a case, doctors can significantly increase the likelihood of an accurate diagnosis and a favorable result, according to a team of researchers led by Damon Centola, professor and director of the Network Dynamics Group at the University of Pennsylvania’s Annenberg School for Communication.
“We are increasingly recognizing that clinical decision-making should be viewed as a team effort that includes multiple clinicians and the patient as well,” said co-author Dr. Elaine Khoong, of the University of California, San Francisco, and the San Francisco General Hospital and Trauma Center, in a statement accompanying the release of the study.
Medical collaboration is hardly a new thing. As Centola points out, many hospitals, especially ones in lower-income areas, rely on “e-consult technologies,” in which a clinician sends a message to an outside specialist for a second opinion on a case, with results usually taking 24 to 72 hours to come back. But two minds, Centola and his collaborators theorized, are less effective than a hive mind—and they set out to prove that idea.
A meeting of the minds
Recruiting a subject group of 2,941 physicians, the authors divided them into a sample group of 2,053 and a control group of 888. All of the subjects were drawn from one of three specialties: internal medicine, emergency medicine, and cardiology. All were presented with case studies of real-life patients who had come in with illnesses known to have high rates of diagnostic error: acute cardiac events, geriatric care, low back pain, and diabetes-related cardiovascular illness prevention. Then they were set to work diagnosing the cases and prescribing a treatment, a process that was divided into three steps, which the two groups conducted in different ways.
The 888 members of the control group worked alone. They read the case scenario and were given two minutes (sometimes all a doctor gets in an emergency situation) to provide a risk assessment—how serious the patient’s symptoms were, and what the likelihood of severe illness was—and recommend a treatment. This was repeated two more times, with the members of the control group given a second and a third opportunity to re-read and reconsider the case—again with a two-minute time limit, but at least with greater familiarity with the scenario now—and again recommend a treatment.
Things were very different with the other subjects. The 2,053 members of the sample group were divided into smaller groups of 40 each. All of them began the experiment the same way the control group did—taking two minutes to read, diagnose and recommend a treatment for the case study. They too were given a second chance to analyze the case, but before they did, they were also shown the average risk estimate the other 39 doctors in their group—whose identities were not disclosed—had given for the severity of the case. They could then either stick with their initial assessment of the severity of the condition or change it to one more in keeping with the majority. In the third round they did the same, only this time with their treatment recommendation.
Both groups benefited from the opportunity to take multiple looks at the same case. In the control group, the doctors improved from an accuracy rate of 76.8% in the first look they took at the case study to 79.3% by the third, a 3.3% relative improvement. The sample group, in which the doctors had the benefit of one another’s insights, doubled the improvement of the control group, going from an accuracy of 76.1% to 81.1%, a 6.6% relative bump. (There was no indication in the results of the hive mind in the 2,053-person sample leading the doctors to a collective wrong conclusion.)
Multiplying intelligence
Over a potential U.S. patient population of 332 million, that 6.6% improvement can mean 21.8 million people.
“We can use doctors’ networks to improve their performance,” said Centola in a statement. “The real discovery here is that we can structure the information-sharing networks among doctors to substantially increase their clinical intelligence.”
What’s more, Centola thinks the 40-doctor model his group hit on is a good one. “Forty people in a network gets you a steep jump in clinicians’ collective intelligence,” he said. “The increasing returns above that—going, say, from 40 to 4,000—are minimal.” What’s more, in the sample group, whose members not only read the case study three times, but also read their peers’ responses, the entire trial still took only 20 minutes—far less than the 24 to 72 hours a single-specialist consultation now takes.
Effective as the protocol is, there are obstacles to implementing it widely. The amount of time physicians have to spare on cases that are not their own, and reimbursement policies for their work, are both issues to be resolved. More challenging, Centola says, is changing the culture around the business of diagnosing and prescribing, which is too often seen as an entirely solitary practice. “This network innovation,” he says, “replaces that view of decision-making with the insight that it is also a social and behavioral process that can be improved by structuring the flow of information and influence among clinicians.”
When it comes to saving human lives, speed counts, accurate diagnoses count, and getting the right treatment to the right patient counts. If it takes 40 doctors to get all that right, it’s worth the extra work.
Write to Jeffrey Kluger at jeffrey.kluger@time.com