Washington University researchers are working to develop artificial intelligence (AI) systems for health care, which have the potential to transform the diagnosis and treatment of diseases, helping to ensure that patients get the right treatment at the right time.

In a new Viewpoint article published Dec. 10 in the Journal of the American Medical Association (JAMA), two AI experts at Washington University School of Medicine in St. Louis — Philip Payne, PhD, the Robert J. Terry Professor and director of the Institute for Informatics; and Thomas M. Maddox, MD, a professor of medicine and director of the Health Systems Innovation Lab — discuss the best uses for AI in health care and outline some of the challenges for implementing the technology in hospitals and clinics.

In health care, artificial intelligence relies on the power of computers to sift through and make sense of reams of electronic data about patients — such as their ages, medical histories, health status, test results, medical images, DNA sequences, and many other sources of health information. AI excels at identifying complex patterns in these data, and it can do so at a scale and speed beyond human capacity. The hope is that this technology can be harnessed to help doctors and patients make better health-care decisions.

Payne and Maddox answered questions about AI, including its capabilities and limitations, and how it might change the way doctors practice.

Where are the first places we will start to see AI entering medical practice?
Maddox: One of the first applications of AI in patient care that we currently see is in imaging, to help improve the diagnosis of cancer or heart problems, for example. There are many types of imaging tests — X-rays, CT scans, MRIs and echocardiograms. But the underlying commonality in all those imaging methods is huge amounts of high-quality data. For AI to work well, it's best to have very complete data sets — no missing numbers, so to speak — and digital images provide that. Plus, the human eye is often blind to some of the patterns that could be present in these images — subtle changes in breast tissue over several years of mammograms, for example. There has been some interesting work in recognizing early patterns of cancer or early patterns of heart failure that even a highly trained physician would not see.
Payne: In many ways, we already have very simple forms of AI in the clinic now. We’ve had tools for a long time that identify abnormal rhythms in an EKG, for example. An abnormal heartbeat pattern triggers an alert to draw a clinician’s attention. This is a computer trying to replicate a human being understanding that data and saying, “This doesn’t look normal, you may need to address this problem.” Now, we have the capacity to analyze much larger and more complex sources of data, such as the entire electronic health record and perhaps even data pulled from daily life, as more people track their sleep patterns or pulse rates with wearable devices, for example.
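The kind of simple rule-based alerting Payne describes can be illustrated with a short sketch. This is a toy example, not a clinical algorithm: the function name, the RR-interval thresholds, and the sample data are all invented for illustration, standing in for the far more sophisticated logic real EKG monitors use.

```python
# Toy sketch of rule-based anomaly flagging, in the spirit of classic EKG
# alerts: flag beats whose RR interval (time between beats, in milliseconds)
# falls outside plausible bounds or deviates sharply from the average.
# Thresholds here are illustrative only, not clinically validated.

def flag_abnormal_beats(rr_intervals_ms, low=400, high=1200, jump_ratio=1.5):
    """Return indices of RR intervals that look abnormal."""
    flagged = []
    avg = sum(rr_intervals_ms) / len(rr_intervals_ms)
    for i, rr in enumerate(rr_intervals_ms):
        out_of_range = rr < low or rr > high                 # absolute bounds
        sudden_jump = rr > avg * jump_ratio or rr < avg / jump_ratio  # relative change
        if out_of_range or sudden_jump:
            flagged.append(i)
    return flagged

# Example: steady ~800 ms beats with one long pause between beats
beats = [800, 790, 810, 805, 1600, 795]
print(flag_abnormal_beats(beats))  # → [4]
```

The contrast Payne draws is between fixed rules like these, which a clinician could write down by hand, and modern AI systems that learn patterns from the entire electronic health record rather than from a handful of pre-set thresholds.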