IOWA Magazine | 06-03-2024

Iowa Doctors Explore Artificial Intelligence’s Future in Medicine

University of Iowa Health Care experts discuss AI’s promise and ethical questions.

A couple of years ago, Lindsey Knake (11BSE, 15MD), a neonatologist at University of Iowa Health Care’s Stead Family Children’s Hospital, was watching a mechanical ventilator help a premature baby breathe when she had what she calls a minor epiphany. Physicians once had to manually adjust breathing machines to prevent preemies’ delicate lungs from collapsing. But here, inside a quiet bay in the hospital’s Neonatal Intensive Care Unit, where the baby slept inside an incubator, the ventilator fine-tuned the airflow within parameters set by Knake. As the newborn’s lungs improved, the ventilator dialed back the air flowing through the endotracheal tube.

For Knake, seeing the technology work alongside her—but also independently—was a light-bulb moment, a reminder of how far medicine had come and the possibilities ahead. “It was doing automatically what we used to have to do manually, which was sit by the bedside and wean the pressures ourselves,” she says.

Today, Knake is one of the many UI doctors and researchers exploring the healing potential of advanced technologies in health care. Specifically, she collaborates with other UI experts to study how the latest frontier of tech—artificial intelligence—can serve as a new tool in the NICU by helping physicians determine the optimal time to remove ventilator support from infants who are ready to breathe on their own.

Elsewhere at UI Health Care, experts are investigating the vast promise of physician-guided AI. Pilot projects study how AI systems can improve efficiency in recordkeeping and doctor-patient interactions, while in the laboratory, researchers investigate how the technology can enhance diagnosis and treatment. At the same time, Iowa experts are weighing the complex ethical questions about AI’s use and what it will mean for the future of health care.

“I tell trainees that this is a really exciting time to get into medicine and to make sure they’re becoming technologically literate,” says Knake, a clinical professor and associate chief medical information officer in pediatrics. “Some people are like, ‘Is AI going to take my job?’ But the answer is no; we will still need clinicians and humans to have oversight over the technology and help interpret it. We’ll have an opportunity to mold that technology and hopefully advance care to a place we didn’t think possible 50 years ago.”


Harnessing Big Data

James Blum, an anesthesiologist and Intensive Care Unit doctor, sees some of UI Health Care’s most complex cases. He often works with patients who have extended hospitalizations and whose electronic records can include dozens, if not hundreds, of notes from specialists and span decades. Sorting through it all can be a challenge.

“They might have a history of heart failure, chronic kidney disease, COPD,” says Blum. “But which came first? Is their renal failure because their heart is bad, or is their heart bad because of their kidneys? Those types of things become really hard to figure out.”

To address the problem of electronic record overload, UI Health Care became one of the nation’s first hospitals to pilot an AI-driven software platform called Evidently, which can quickly process and summarize vast amounts of notes, reports, and scanned documents. Instead of doctors spending hours trying to piece together a patient’s medical history, AI generates streamlined timelines, graphs, and lists of potential health concerns.

AI can identify and unwind complex patterns in a patient’s medical history and improve the productivity of physicians.

“Now I can put that patient’s picture together much more readily than I ever could have before,” says Blum, whose role as UI Health Care’s chief health information officer includes evaluating new technologies to improve patient care and support research.

Medical records are just one piece of the data quagmire facing the health care industry. Large amounts of information can also be collected from smart watches, phone apps, genomic testing, and environmental studies. “One of the problems is that there are very good systems for collecting a lot of data, but there are not good systems for processing a lot of data,” Blum says. “It’s sort of like that drawer at home that we just throw stuff into. You may be really good at throwing stuff in the junk drawer, but then you need to process that data—find that key to that shed that you go into once every three years.”

Blum says AI can potentially help physicians get a handle on that flood of data. With that aim in mind, UI Health Care has implemented another powerful AI tool in recent years called Nebula, a cloud-based platform that works in tandem with the hospital’s Epic electronic records system. Nebula allows Iowa experts to create computer models that harness massive amounts of data to give physicians deeper insights and predictions of patient outcomes.

One model already in use at UI Health Care tells doctors the likelihood of a patient being readmitted to the hospital, which is associated with an increased risk of mortality and higher cost of care. Another model, built by Carver College of Medicine experts in collaboration with the Tippie College of Business, predicts the likelihood of bedridden patients developing life-altering pressure injuries, also known as bedsores.
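For readers curious about what sits behind a prediction like that, the sketch below shows one common way a readmission-risk model can be built. It is an illustration only: the patient variables, the scikit-learn library, and the synthetic data are assumptions made for the example, not a description of the Nebula models themselves.

```python
# Illustrative sketch only: a toy 30-day readmission-risk classifier trained
# on synthetic data. The features and labels here are hypothetical; the
# article does not describe what the UI Health Care models actually use.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features for 1,000 past discharges:
# age, length of stay (days), prior admissions, active medications
X = np.column_stack([
    rng.integers(18, 95, 1000),   # age
    rng.integers(1, 30, 1000),    # length of stay
    rng.integers(0, 10, 1000),    # prior admissions
    rng.integers(0, 25, 1000),    # active medications
])
# Whether each patient was readmitted within 30 days (synthetic labels)
y = rng.integers(0, 2, 1000)

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a new patient, the model returns a probability of readmission that a
# clinician can weigh alongside everything else in the chart.
new_patient = np.array([[72, 9, 3, 14]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated 30-day readmission risk: {risk:.0%}")
```

In practice, models like these are trained on thousands of real patient records and validated carefully before they ever appear in a clinician’s workflow.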


More Face Time

A 2016 study by the American Medical Association found that for every hour physicians spend face-to-face with patients in ambulatory practice, they spend another two hours performing desk work. For doctors like Jeydith Gutierrez (19MPH), a UI hospitalist and clinical associate professor of internal medicine, filling out forms was hardly their reason for going to med school.

“We know that we are spending a lot more time interacting with medical records than we are with patients in our daily practice in almost every area in medicine,” says Gutierrez. “And that becomes a big barrier.”

With severe worker shortages and burnout among health care’s most pressing issues, Gutierrez sees AI relieving doctors of some time-consuming tasks so they can spend more time with patients. For example, AI systems can potentially help hospitalists like her, who work with a wide spectrum of patients, quickly scan the latest medical literature and keep updated on protocols.

“When I have a patient with a condition that maybe I haven’t seen in a couple of years, I have to review and make sure the clinical guidelines haven’t changed,” Gutierrez says. “If I can have AI synthesize some of that information at the point of care, that will be much more efficient.”

Predictive models can potentially help doctors make quicker decisions and improve doctor-patient interactions.

The biggest AI-driven change Iowa’s patients might notice in the coming years, say Blum and Gutierrez, is in their interactions with doctors. Among UI Health Care’s pilot projects involving AI, physicians are using a large language model platform to assist with recordkeeping in the exam room. Instead of having to focus their attention on a computer terminal to type in information, doctors can use an ambient clinical documentation tool to transcribe conversations and generate their notes automatically.

“Some people are reluctant to go into a career in general medicine because they hear about how primary care doctors spend so much time filling out paperwork,” Gutierrez says. “If we can have AI tools take part of that role and allow the physicians to go back to the core of medicine, then not only will burnout go down, but more people will want to go into fields like mine, where you’re a generalist. People in careers like primary care will have a greater opportunity to really engage with what they love to do, which is caring for the patient.”


Increasing Access to Care

Ophthalmologist and computer scientist Michael Abramoff, one of the UI’s foremost experts in AI, has a keen understanding of the technology’s untapped potential in health care. In 2018, Abramoff made headlines when the UI spinoff company he founded, today known as Digital Diagnostics, earned the FDA’s first-ever approval for an AI system to autonomously diagnose disease. Abramoff spent years developing an AI tool to diagnose diabetic retinopathy—a serious complication of diabetes that can cause vision loss—at the point of care and without physician input. It has since become the world’s fastest-growing AI medical procedure based on patient usage, according to a recent study in the New England Journal of Medicine.

Today, the UI professor and executive chairman of Digital Diagnostics researches how AI can be used to diagnose other diseases, including glaucoma and conditions beyond the eye. Abramoff also studies how AI can boost productivity in medicine and improve health equity—areas where he says real-world scientific evidence has been lacking. He recently led a randomized controlled trial to determine whether autonomous AI improves clinician productivity, marking the first evidence of AI’s effect on efficiency not just in health care, but in any industry, says Abramoff.

The study, published last fall in the journal npj Digital Medicine, measured the productivity of retinal doctors with and without the use of Digital Diagnostics’ autonomous AI system, LumineticsCore. Abramoff and his research team found a 39.5 percent increase in completed care encounters per hour in the autonomous AI group compared to the group receiving traditional care.

UI experts in 2018 created the first FDA-approved AI system to autonomously diagnose a disease—diabetic retinopathy—and continue to study its use to detect other conditions.

In a separate study co-authored by Abramoff and published earlier this year in Nature Communications, researchers found that when LumineticsCore was used, Black youths with diabetes received diabetic eye exams at the same rate as white youths, narrowing a previously intractable health disparity tied to blindness and vision loss in underserved populations. Because AI provides an instant diagnosis, says Abramoff, patients receive important follow-up care immediately and are more likely to return for future appointments.

Abramoff says AI not only has potential to improve productivity and health equity, but it can also lower the costs of care and make medicine more accessible in a world where billions of people lack access to essential health services. With diabetic eye exams, for instance, Abramoff says autonomous AI can reduce the cost to the health system by two-thirds.

“It’s like what we saw in agriculture a century ago when there were famines and people couldn’t feed their children,” Abramoff says. “Now, because of the productivity increases from the mechanization and automation of agriculture, affordable food is ubiquitous in many places. And Iowa, with all our John Deere combines, many of which run autonomously, is a leading example of productivity in agriculture. I want the same for health care, where it’s affordable and available everywhere, and no one needs to worry about getting appropriate care.”


Navigating Ethical Questions

In May 2024, clinicians and administrators from around the region converged in Iowa City for the annual Ethics in Healthcare Conference. Among the conference’s most pressing topics were the ethical questions surrounding AI use, including: Who’s responsible if something goes wrong, the AI or the doctor? How will patient privacy and security be ensured in these massive databases? How do we safeguard AI from systemic biases?

The conference’s organizer, Lauris Kaldjian, director of the Carver College of Medicine’s Program in Bioethics and Humanities, says the tendency can sometimes be to apply new technology, “and then ethics chases after it.” But he says that as the use of AI increases, it’s crucial technology remains in service to humanity, rather than the other way around.

“AI is a tool and not a mind,” says Kaldjian, noting that it reflects information from vast datasets but does not think, reason, or exercise judgment. “At its best, it could improve performance and decrease the burden of mundane tasks like documentation and billing. It could also facilitate the delivery of treatments that are scientifically based and more individualized. The proponents of AI are really enthusiastic about all the good that could come from it, if it’s harnessed the right way. But at the same time, one has to wonder about the possible unintended consequences and some of the distortions of what it means to be human.”

One of the central questions being asked is how doctors can learn to trust a technology they might not fully understand. Since deep learning systems can be opaque in their decision-making—what’s known as the “black box problem”—Kaldjian says health care professionals might not be able to explain to patients how an AI system arrived at its conclusion, which is necessary in obtaining informed consent for a procedure or intervention.

“Like many people do these days, I like to talk about shared decision-making between the clinician and patient,” Kaldjian says. “As we incorporate increasingly different AI technologies, we need to be straightforward with patients about the benefits, risks, and alternatives.”

Kaldjian sometimes hears from medical students who are worried their specialty could be replaced one day by computers and algorithms. After all, AI is already being deployed in areas like radiology, where it can recognize complex patterns in images to provide computer-assisted diagnosis, and robotic surgery is already a part of modern medicine, albeit at the direction of humans. Looking into the future, Kaldjian asks, could there come a day when AI is not just a tool but the surgeon? Could we even see robotic caregivers at the bedside?

While AI’s integration into our world will only continue to grow, Kaldjian doesn’t see a future in health care without people caring for people. “These technologies are designed by human beings, and that means they are not perfect. And I mean that in the sense that they’re prone to mistakes, and they’re prone to biases. So there must be constant navigation and negotiation to determine if this is for the good of patients. And the good of patients will continue to depend on clinicians who can demonstrate virtues like compassion, honesty, courage, and practical wisdom, which are characteristics of persons, not computers.”


Augmented Decision-Making

When premature and critically ill babies receive care in the NICU, doctors and machines monitor each tenuous heartbeat and breath. As ventilators, IV pumps, and warming lamps help preemies gain strength, instruments measure oxygen saturation, body temperature, blood pressure, and other vital signs.

Knake, the neonatologist, has collected that data from about 250 patients at Stead Family Children’s Hospital as part of her preliminary research on extubation—the delicate process of removing the endotracheal tube from a baby’s windpipe. Her goal is to one day provide fellow NICU doctors with an AI tool to help determine when the time is right to extubate and allow babies to begin breathing on their own. While doctors have tried-and-true tests and biomarkers they rely on to determine when to extubate adult patients, decisions for newborns can be more precarious and involve a host of variables, Knake says.

“I don’t think AI will ever replace clinical decision-making,” Knake says. “Some people say it really should be called augmented intelligence and not artificial intelligence because we’re trying to augment our intelligence and experience as clinicians. AI would add critical pieces of information that clinicians might not have picked up from the vital sign monitor, such as subtle changes in heart rate patterns that may suggest a patient isn’t ready.”
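As a rough illustration of what such a signal could look like, the sketch below computes a simple heart-rate-variability measure and flags windows where the rate looks unusually flat. Knake’s model is still in development, so the feature, the cutoff, and the readings here are hypothetical stand-ins, not her method.

```python
# Illustrative sketch only. The feature and cutoff below are hypothetical
# stand-ins for the "subtle changes in heart rate patterns" Knake describes.
from statistics import stdev

def heart_rate_variability(beats_per_minute: list[float]) -> float:
    """Standard deviation of heart rate across a monitoring window."""
    return stdev(beats_per_minute)

def flag_low_variability(beats_per_minute: list[float], cutoff: float = 2.0) -> bool:
    """Flag a window whose heart rate looks unusually flat (low variability)."""
    return heart_rate_variability(beats_per_minute) < cutoff

# One hour of made-up heart-rate readings, sampled once per minute
readings = [158, 159, 158, 157, 158, 158, 159, 158, 157, 160] * 6
if flag_low_variability(readings):
    print("Low heart-rate variability: flag for clinician review before extubation")
else:
    print("Heart-rate variability within the expected range")
```

A deployed model would combine many such features across far more patients, and, as Knake stresses, would inform rather than replace the clinician’s judgment.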

Knake is one of dozens of UI investigators to receive funding in recent years through the Iowa Initiative for Artificial Intelligence, a UI College of Engineering-based research group. Established in 2019, the initiative has supported an array of AI studies in the health sciences and other fields, including work on COVID-19, lung cancer, and cochlear implants.

Now Knake is applying for National Institutes of Health grants to gather more data through partnerships with other hospitals to build her computer model. She’s hopeful AI will help reduce extubation failure rates and the number of days some babies spend on ventilators.

Just as the incorporation of medical devices has revolutionized doctors’ abilities to save patients, Knake sees the development of predictive algorithms as the next frontier in medicine. “Over time, similar to computers, similar to the new ventilators, people will start using AI tools and start trusting them and realizing they can improve outcomes,” she says. “It will help them feel more confident in the decisions they’re making.”

Still, no matter how far technology takes us, Knake and other UI experts agree that the heart of health care will remain the same: the human touch.
