Artificial intelligence in medicine — predicting patient outcomes and beyond

Machines are getting better at analyzing complex health data to help physicians understand their patients’ future needs.

In a study out today in npj Digital Medicine, an advanced algorithm evaluated de-identified electronic health records of more than 216,000 adult patient hospitalizations to predict unexpected readmissions, long hospital stays, and in-hospital deaths more accurately than previous approaches.

I caught up with one of the authors, Nigam Shah, MBBS, PhD, an associate professor at Stanford, to learn about the new study and discuss the implications for artificial intelligence in medicine.

What is deep learning and how does it fit in the larger universe of artificial intelligence?

Deep learning is one of several machine learning techniques that can be used to build intelligent systems. The technique, inspired by the brain’s neural networks, uses multiple layers (hence ‘deep’) of non-linear processing units (analogous to ‘neurons’) to learn directly from data and then classify records or make predictions.
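To make ‘multiple layers of non-linear processing units’ concrete, here is a minimal sketch of such a network written in PyTorch. The layer sizes, the 50 input features, and the single risk-score output are illustrative assumptions, not details of the model used in the study.

```python
# Minimal sketch of a deep feed-forward network: stacked linear layers with
# non-linear activations, ending in a single probability-like output.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(50, 64),   # input layer: 50 hypothetical patient features
    nn.ReLU(),           # non-linear unit (the 'neuron'-like element)
    nn.Linear(64, 32),   # a second, 'deeper' layer
    nn.ReLU(),
    nn.Linear(32, 1),    # output layer: one risk score per record
    nn.Sigmoid(),        # squash the score to a probability between 0 and 1
)

x = torch.randn(8, 50)   # a batch of 8 synthetic patient records
risk = model(x)          # predicted probability of the outcome for each record
print(risk.shape)        # torch.Size([8, 1])
```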

This new study is an example of deep learning applied to medical prediction tasks. In the past, predictive models in health care have considered a limited number of variables drawn from highly cleansed health data. Here, neural networks were able to sift through troves of messy raw data and learn which variables matter most for predicting health outcomes.

How does this study potentially move the field forward?

This study shows it’s possible to take messy electronic health record data — including unstructured clinical notes, errors in labels, and large numbers of input variables — from different institutions and pull the information together into a usable input from which actionable predictions about patient health can be made.
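As a rough illustration of pulling unstructured notes and structured fields together into one usable input, here is a simplified stand-in built with scikit-learn. It is not the study’s deep-learning pipeline, and the column names, toy records, and readmission labels are hypothetical.

```python
# Toy example: combine free-text clinical notes with structured variables
# into a single feature matrix, then predict a readmission label.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

records = pd.DataFrame({
    "note":       ["chest pain, troponin elevated", "routine follow-up, stable"],
    "age":        [67, 45],
    "num_meds":   [12, 2],
    "readmitted": [1, 0],   # toy labels, not real outcomes
})

features = ColumnTransformer([
    ("notes",  TfidfVectorizer(), "note"),            # unstructured clinical notes
    ("fields", "passthrough", ["age", "num_meds"]),   # structured variables
])

pipeline = Pipeline([("features", features),
                     ("clf", LogisticRegression())])
pipeline.fit(records.drop(columns="readmitted"), records["readmitted"])
print(pipeline.predict_proba(records.drop(columns="readmitted"))[:, 1])
```

A real system would, of course, train on far more records and a far richer representation of each hospitalization; the sketch only shows how heterogeneous inputs can be merged into one model.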

What does this predictive model tell physicians that they don’t already know or can’t figure out through traditional means?

Predictive models can help make care better by creating winning partnerships in which the machine predicts and the doctor decides on follow-up action. The point of using AI (and machine learning) is to have it perform tasks it can accomplish well, such as reading a retinal image or flagging cases for follow-up when there are too many to review manually. Doctors then have the time and information to make the best decisions, bringing the societal, clinical, and personal context to bear. They make the call on whether, how, and when to act.

For example, there’s a project at Stanford that uses algorithms to sift through large databases, including electronic health records, to detect patients who likely have a certain genetic condition that can lead to a fatal heart attack at a premature age. Usually, patients who have this disease don’t know they have it. Using the algorithm, doctors can find out earlier who has the condition and offer treatments that can significantly improve outcomes.

What are the top ethical issues for artificial intelligence in medicine?

The two most important issues, in my mind, are maintaining fairness when learning from biased data and the effect on the doctor-patient relationship. As we build machine-learning systems, it’s important to guard against accidentally institutionalizing, through the design of the algorithms, existing human biases such as racial bias that may be present in the data. It is also crucial to understand how the use of an AI system could change the doctor-patient relationship, and to ensure that the change is the kind we want, with machine-learning systems serving as a tool to help doctors.
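One simple, illustrative way to check for the kind of bias described here is to compare a model’s error rates across patient groups. The group labels, outcomes, and predictions below are synthetic placeholders rather than data from any real system.

```python
# Compare false-negative rates (missed cases) between two patient groups.
import numpy as np

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])          # actual outcomes (synthetic)
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])          # model predictions (synthetic)
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    positives = y_true[mask] == 1                     # patients who actually had the outcome
    fnr = float(np.mean(y_pred[mask][positives] == 0))  # share of those the model missed
    print(f"group {g}: false-negative rate {fnr:.2f}")
```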

Ultimately, it’s important that we build machine-learning systems that reflect the ethical standards of our health care system and are held to those standards.

What comes next in this field and how is Stanford involved?

Stanford has been working on AI in medicine since the 1970s, beginning with the Stanford University Medical Experimental Computer for Artificial Intelligence in Medicine (SUMEX-AIM) project. Today, we are among the few sites actively working to bring AI to the clinic in the next few years. At Stanford, we have efforts on four fronts: developing new AI methods, deploying them into clinical workflows, laying out the ethical framework, and building safety into the design process, so that algorithms improve care the way we envision.

