To Make a Real Difference in Health Care, AI Will Need to Learn Like We Do

Pande, PhD, is a general partner at Andreessen Horowitz, where he focuses on investments in biopharma and health care.

Millions of people, many of whom have never thought much about computer science, are experimenting with generative AI models such as the eminently conversational ChatGPT and the creative image generator DALL-E. These products mark less a technological breakthrough than AI’s emergence into the public consciousness, yet the traction they have found is guiding massive investment streams—investment that will shape how this technology is applied for years to come.

For those of us who have long been bullish on AI’s potential to transform society, especially in key areas such as health and medicine, recent months have felt very much like science fiction come to life.

However, as delightful as it is to explore these capabilities—GPT-4, for example, exceeded the passing score on the U.S. medical licensing exam by 20 points—the results mainly serve to highlight these models’ shortcomings. The ability to read, retain and regurgitate all such data on demand makes today’s AI good at everything—but great at nothing.

There’s no question that AI is poised to irrevocably change how we look to prevent and treat illness. Doctors will cede documentation to AI scribes; primary care providers will lean on chatbots for triage; near-endless libraries of predicted protein structures will supercharge drug development. However, to truly transform these fields, we should invest in creating an ecosystem of models—say, “specialist” AIs—that learn like our best physicians and drug developers do today.

Getting to the top of a field typically begins with years of intensive information upload, often via formal schooling, followed by some form of apprenticeship: years devoted to learning, mostly in person, from the field’s most accomplished practitioners. It’s a nearly irreplaceable process: Most of the information a medical resident gleans by listening to and watching a high-performing surgeon, for example, isn’t spelled out in any textbook.

It’s particularly challenging to gain the intuition, often acquired through schooling and experience, that helps determine the best answer in a complex situation. This is true for artificial intelligence and people alike, but for AI the issue is exacerbated by the way it currently learns and by how technologists are approaching the opportunity and challenge. By studying thousands to millions of labeled data points—examples of “right” and “wrong”—today’s advanced neural network architectures figure out what makes one choice better than another. Rather than learning solely from massive amounts of data and expecting a single generative model to solve every problem, we should train AI using models that stack on top of each other—first biology, then chemistry, then, layered on those foundations, data specific to health care or drug design, for example.
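
A minimal sketch of what that stacking could look like in code, under the assumption that lower layers are pretrained on broad biology and chemistry data and only a small clinical layer is tuned on domain-specific examples; the class names, dimensions, and stand-in data here are illustrative, not a description of any real system:

```python
import torch
import torch.nn as nn

class StackedSpecialist(nn.Module):
    """Hypothetical stacked model: broad foundations below, a narrow specialist on top."""
    def __init__(self, input_dim=256, bio_dim=128, chem_dim=64, num_classes=10):
        super().__init__()
        # Foundation layers, assumed pretrained on large general corpora.
        self.biology_encoder = nn.Sequential(nn.Linear(input_dim, bio_dim), nn.ReLU())
        self.chemistry_encoder = nn.Sequential(nn.Linear(bio_dim, chem_dim), nn.ReLU())
        # Specialist layer trained on much smaller, domain-specific clinical data.
        self.clinical_head = nn.Linear(chem_dim, num_classes)

    def forward(self, x):
        return self.clinical_head(self.chemistry_encoder(self.biology_encoder(x)))

model = StackedSpecialist()

# Freeze the foundations; fine-tune only the specialist layer.
for module in (model.biology_encoder, model.chemistry_encoder):
    for p in module.parameters():
        p.requires_grad = False

optimizer = torch.optim.Adam(model.clinical_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data.
inputs, labels = torch.randn(32, 256), torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
```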

Pre-med students aim to become doctors, but their coursework starts with the basics of chemistry and biology rather than the finer points of diagnosing disease. Without those foundational courses, their ability to one day provide high-quality health care would face significant limits. Similarly, a scientist who designs a new therapeutic undergoes years of studying chemistry and biology, followed by PhD studies, followed by working under the tutelage of expert drug designers. This style of learning can help develop a sense for how to navigate decisions involving subtle differences, which, especially at the molecular scale, really matter. For example, estrogen and testosterone differ only slightly, but have dramatically different impacts on human health.

Developing these stacked AI models with hierarchies of latent spaces—simplified maps of complex data that help AI models understand patterns and relationships—would build in an understanding of, or predictive capability for, each foundational element. I believe this may initially parallel human educational paradigms, but in time it will likely specialize and develop new types of expertise in AI learning. These stacked models could develop in ways analogous to the cortexes of the human brain. But whereas humans have a visual cortex and a motor cortex, AI could have a biology cortex and a drug-design cortex—in both cases, neural architectures specialized for specific tasks.
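
To make the cortex analogy concrete, here is a hedged sketch of two specialized heads reading from one shared hierarchy of latent spaces; the names (SharedTrunk, biology_cortex, drug_design_cortex), layer sizes, and outputs are assumptions invented for illustration:

```python
import torch
import torch.nn as nn

class SharedTrunk(nn.Module):
    """Maps raw inputs into progressively more abstract latent spaces."""
    def __init__(self, input_dim=512, mid_dim=256, latent_dim=128):
        super().__init__()
        self.low = nn.Sequential(nn.Linear(input_dim, mid_dim), nn.ReLU())    # e.g. molecular-level features
        self.high = nn.Sequential(nn.Linear(mid_dim, latent_dim), nn.ReLU())  # e.g. pathway/system-level features

    def forward(self, x):
        return self.high(self.low(x))

class SpecialistCortex(nn.Module):
    """A head specialized for one task family, reading from the shared latents."""
    def __init__(self, latent_dim=128, out_dim=1):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, z):
        return self.head(z)

trunk = SharedTrunk()
biology_cortex = SpecialistCortex(out_dim=5)      # e.g. predict a biological readout
drug_design_cortex = SpecialistCortex(out_dim=1)  # e.g. score a candidate molecule

x = torch.randn(8, 512)           # a batch of stand-in inputs
z = trunk(x)                      # one shared hierarchical representation
bio_out = biology_cortex(z)       # each cortex interprets the same latents
drug_out = drug_design_cortex(z)  # through its own specialty
```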

Ironically, an AI that specializes in a particular domain such as health care may be easier to create than something more akin to HAL 9000, with typical human-level knowledge across fields. And, in fact, we need specialist AIs in specific domains more than we need an overarching AI that can do anything an average human can do. I anticipate the creation of not a single specialist AI but many, built with a diversity of approaches to coding, data, and testing, so that these models could provide a second (or third, or fourth) opinion when necessary.
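
One way such second opinions might work in practice, sketched loosely and with the function name, threshold, and toy models below all assumed for illustration: poll several independently built specialists and escalate to a human when they disagree.

```python
from collections import Counter

def second_opinion(case, predict_fns, agreement_threshold=0.75):
    """Poll several specialist models; escalate to a human when they disagree."""
    votes = [predict(case) for predict in predict_fns]
    top_label, top_count = Counter(votes).most_common(1)[0]
    agreement = top_count / len(votes)
    if agreement >= agreement_threshold:
        return top_label, agreement
    return "escalate_to_human", agreement

# Toy stand-ins for specialist AIs built with different code, data, and testing.
models = [lambda case: "benign", lambda case: "benign", lambda case: "malignant"]
print(second_opinion({"scan_id": 42}, models))  # -> ('escalate_to_human', 0.666...)
```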

In parallel, we must rip AI from its online moorings and plunge it into the world of atoms. We should be equipping our most skilled human specialists with wearables that gather nuanced, real-world interactions for AI to learn from, much as our up-and-coming academic and industry stars learn by working alongside them. The most complex and uncertain aspects of addressing health and medicine simply don’t exist fully in the world of bits.

Exposing these specialist AIs to the perspectives of a diverse range of top practitioners will be a must to avoid replicating dangerous biases. But AI is less of a black box than the popular imagination suggests; the human decision-making we depend on today, as I’ve noted previously, is arguably more opaque. We can’t let fear of perpetuating human biases limit our willingness to explore how AI can help us democratize the expertise of our human specialists, who are unfortunately unscalable.

Given the neural networks underpinning artificial intelligence, these specialist AIs may gain knowledge even faster than we’d expect through meta-learning—or learning to learn—and take us humans along for the ride. The nature of AI lets us do something we simply can’t do with people: take a system apart piece by piece and study each little bit. By building systems to plumb the inner workings of specialist AIs, we’ll create a learning flywheel. Eventually, specialist AIs could shift beyond the role of domain expert to become teachers to the next generation of specialists—human and AI alike.


