No AI Can Learn the Art of Medicine

Yash Shah is a medical student with an interest in healthcare policy and management at Sidney Kimmel Medical College, Thomas Jefferson University; Dr. Glatter is Editor at Large for Medscape Emergency Medicine and Assistant Professor of Emergency Medicine at Zucker School of Medicine at Hofstra/Northwell; Dr. Papadakos is Professor, Department of Anesthesiology and Surgery, University of Rochester Medical Center and Professor of Medicine at Mercer University School of Medicine.

A 49-year-old woman notices new-onset vaginal bleeding over several days. Concerned, she seeks advice from her longtime family physician. When she calls, she is surprised to hear responses from an artificial intelligence (AI) platform. The longtime secretary, who knew her well and would quickly arrange appointments or connect her with the doctor, has been replaced by this expensive new AI-based system. The call begins with an extensive menu of prompts. When she presses 0 to speak with a human, she is told the next available appointment is in nine weeks. She hangs up and redials to describe her problem to a pleasant computer voice that almost sounds like a real person. It eventually responds with a long-winded list of possible explanations for her bleeding, then uses a proprietary algorithm to make recommendations, including lifestyle changes and watchful waiting, with instructions to call back if the problem persists.


Eventually, she loses patience and decides to visit the office in person. After a brief visit, her doctor is concerned and orders a CT scan through a smart scheduler that uses a complex triage algorithm to book her imaging within 1-2 days. She receives the results of the CT scan by email and again goes through the scheduler system to book her surgery, which is likewise triaged by perceived medical urgency. The night before the operation, the pre-operative anesthesia system automatically calls and asks dozens of questions through various menus, ending with lengthy instructions regarding eating, drinking, and pre-operative care. The program offers no time to address her fears of going under anesthesia.

On the day of surgery, everything is remarkably efficient thanks to the new AI-based systems. The operating room team already has her medical history in the electronic record, and she goes straight to the operating room without needing to meet the anesthesiologist or surgeon. All goes well, and four days later she receives an email with instructions to call a number and enter a six-digit code for the results of an ovarian biopsy. She won't need to waste any time traveling to the doctor's office or sitting in the waiting room for an appointment. Instead, a computer-generated AI voice informs her that she has high-grade serous ovarian cancer with metastasis.

The platform then automatically redirects her to a line where a compassionate AI voice explains the prognosis and her treatment options based on the latest research. She breaks down and drops the phone in tears. There is nobody to comfort her, let alone answer her endless questions. Is this a nightmare scenario for future patients in our rapidly evolving healthcare system, or an emerging reality amid ongoing physician shortages, skyrocketing medical costs, and man-made medical errors? Will it improve patient health and reduce obstacles to accessing care, or will it breed discomfort and dissatisfaction in the healthcare setting?

A recent piece by New York Times health columnist Gina Kolata suggests that such a future, in which nightmare patient encounters unfold within a tech-enabled, AI-driven paradigm of care, may not be far off.

To preserve our compassion and humanity, healthcare providers must prioritize human-to-human communication when delivering delicate news, fostering caring relationships with our patients. This forms the basis of the humanity of medicine and the sacred doctor-patient relationship.

That said, no healthcare provider can disagree that tedious tasks currently take us away from face-to-face, direct patient care. A study in JAMA Internal Medicine found that AI assistants may hold value in composing routine notes or drafting responses to the skyrocketing number of electronic messages from patients as physician demands and burnout rise. Simply put, these findings are a call to action for the medical establishment to look inward and determine how we can prioritize the human connection between doctors and patients while taking advantage of AI.

Overall, it appears that doctors are optimistic, if cautious, about the potential for AI and large language models to become part of a toolkit for more effective communication between patients and healthcare providers. Certainly, this technology holds promise: science communication has for years been marred by complexity and inaccessibility to the lay public. Using AI to better communicate health advice and medical literature to the public would be valuable.

Large language models also show real promise in helping busy healthcare professionals compose emails, review medical records, and complete prior authorizations. Moreover, AI may help triage the patients and questions that reach physicians, with more routine or unnecessary items answered by technology. The potential to reduce time spent on tasks that breed anger, frustration, and ultimately burnout is invaluable. Some estimates suggest administrative costs could drop by over 35% as these tools are adopted.

As total medical knowledge grows exponentially, it is impossible for doctors to stay abreast of every advance and retain such detailed knowledge in our heads. By contrast, AI and robotics will inevitably be more effective at cataloging constantly changing medical knowledge, and this can support evidence-based management for patients. However, physicians must hold onto their unique gifts of humanism and empathetic care.

But let's be clear: practicing medicine is an art, and no technology can change that. When patients face a diagnosis, human interaction with a doctor is vital. Patient satisfaction and shared decision-making will continue to rely heavily on this humanism. Medicine remains a profession that requires compassion, reassurance, and most importantly, empathy. Even with the advent and ongoing evolution of AI and large language models, empathy is best learned and communicated through bedside teaching by humans, not AI or chatbots.

However, considering that some experts in Kolata's piece endorsed AI and chatbots as a way to teach healthcare professionals how to express empathy and compassion to patients and families, it is likely time for us to hit the "reset button" on how we approach conversations with patients.

We feel that the most effective way to restore empathy and compassion as the cornerstone of physician communication is not through modeling or frameworks suggested by AI or chatbots; rather, it requires a focus on human-to-human teaching and dialogue. Education in the humanities, social sciences, and the science of communication is just as vital as teaching physicians anatomy and physiology. This applies not only to medical students but also to residents and all healthcare professionals. Certainly, education about emerging technologies and medical devices will also become important so that providers can incorporate these innovations to improve care without compromising patient experiences.

In the past, bedside teaching in medical school was an art practiced by careful observation and listening, with particular attention to eyes and body language as our professors handed down the invaluable unwritten and unspoken ways to express care, concern, and empathy for our patients. This human interaction has proven implications for patient satisfaction, motivation, and adherence to treatment recommendations.

Such unique and unspoken methods of communicating human emotion cannot be taught by AI or chatbots. Granted, medicine is often criticized for lagging decades behind in innovation, and as AI technology grows, health professionals should certainly embrace its benefits. However, they must also remain true to their values and the oath they have taken to serve their patients. Upholding these professional standards requires a strong commitment to humanistic care and the continued development of communication skills. It is in patients' best interest for clinicians to stay attuned to these trends and understand both the benefits and the risks of modern innovations in care.



TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.