AI can predict missed appointments. How can hospitals use this data?

For every five appointments made at Boston Children’s Hospital, one patient does not show up.

Missed appointments are a common problem in healthcare systems. And they’re a particularly attractive target for machine learning researchers, who can use patient datasets to understand what’s causing patients to miss needed care. In a new study published this month, a group of researchers at Boston Children’s analyzed more than 160,000 hospital appointment records from nearly 20,000 patients for clues. Their model found that patients with a history of no-shows were more likely to miss future appointments, as were patients with language barriers and those whose appointments fell on bad-weather days.
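
The study’s actual model isn’t reproduced here, but the basic idea – scoring each appointment’s no-show risk from signals like prior no-show history, language barriers, and weather – can be sketched as a toy logistic regression on synthetic data. All feature names, coefficients, and data below are illustrative assumptions, not figures from the study:

```python
import math
import random

# Hypothetical features per appointment: (prior_no_show_rate,
# language_barrier, bad_weather). These stand in for the kinds of
# signals the study reports; the synthetic coefficients are made up.
def make_synthetic_data(n=2000, seed=0):
    rng = random.Random(seed)
    rows, labels = [], []
    for _ in range(n):
        prior = rng.random()              # share of past visits missed
        barrier = rng.random() < 0.2      # True if a language barrier exists
        weather = rng.random() < 0.3      # True if forecast is bad
        logit = -2.0 + 3.0 * prior + 1.0 * barrier + 0.8 * weather
        p = 1 / (1 + math.exp(-logit))
        rows.append((prior, float(barrier), float(weather)))
        labels.append(1 if rng.random() < p else 0)
    return rows, labels

def train_logistic(rows, labels, lr=0.1, epochs=200):
    """Fit logistic-regression weights with plain stochastic gradient descent."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1 / (1 + math.exp(-z))
            g = p - y                     # gradient of the log loss
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict_risk(w, b, x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-z))

rows, labels = make_synthetic_data()
w, b = train_logistic(rows, labels)
# A patient with a heavy no-show history, a language barrier, and bad
# weather should score higher than one with none of those factors.
high = predict_risk(w, b, (0.9, 1.0, 1.0))
low = predict_risk(w, b, (0.0, 0.0, 0.0))
```

A hospital could then rank upcoming appointments by this score and direct outreach to the highest-risk patients, which is roughly the use the study contemplates.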

These are predictions that, in theory, could help a healthcare system target interventions to the patients most at risk of missing appointments and offer them the help they need to get there. But even though Boston Children’s leaders helped develop and test the model, the health system hasn’t yet committed to taking it out of pilot mode and actually putting it into practice.


“What I really liked about it was the thought experiment on what things might actually be predictive,” said Kathleen Conroy, clinical chief at Children’s Hospital Primary Care Center at Boston Children’s Hospital and co-author of the study. “Even if it’s theoretically actionable, what would you even be willing to do with that information and how would you put it into a family-centered lens?”

At first glance, it may seem reasonable to ask staff to proactively address the factors the model revealed, especially since the current protocol – overbooking appointments on the assumption that a portion of patients simply won’t show up – sometimes means that patients who are already underserved wait longer to be seen even when they do show up. But before adopting a new risk prediction model, the health system will first need to address technical and social questions, including whether it actually improves outcomes for these underserved groups without introducing further bias.


The health system is in the very early stages of exploring alternatives to overbooking, including potentially putting the model into practice more broadly, researchers told STAT. But it’s a decision that would require patient and provider buy-in, careful staff training, and technical support to connect the model to the hospital’s scheduling system.

If the healthcare system adopts the model, it will need to consider how to communicate this risk to patients in a way that does not stigmatize them. Currently, all patients receive reminders by phone, text, email or patient portal prior to appointments, depending on their preferences. Certain groups that the health system deems more at risk of missing appointments, such as newborns and their families, sometimes receive direct phone calls and text messages from staff.

While it can tailor outreach to particularly vulnerable patients, the health system does not currently tell patients when they are considered to be at high risk of missing appointments – and if it were to use the machine learning model to offer them extra help, it would have to be careful to make sure patients don’t feel judged, Conroy said.

“If you ever told me that I was a patient who had a very high rate of no-shows, I would find that offensive,” she said. The health system also doesn’t want to “penalize” patients for late or missed appointments, which could lead them to cut ties with their care providers altogether, she said.

Conroy said Boston Children’s leaders also need to carefully discuss any algorithm the hospital plans to implement with its Family Advisory Board and other groups to ensure it doesn’t harm patients who are already experiencing obstacles to getting to their appointments.

“We have to be very careful even about how we think about it in the systems we put in place so that we don’t accidentally create bias among our staff around the patients they might be contacting,” she said.

This is especially important for predictive models that target patients who may already face significant barriers to their care. In the case of missed appointments, the problem is often not patient inattention. Patients may lack transportation, need to find cover at work, or face language barriers. These are all obstacles that health systems might be able to help overcome.

“The real goal of reducing no-shows is that we’re just providing more and better care,” said Conroy, who represents providers in decisions about how care is delivered at Boston Children’s.

Part of that work will be understanding how best to help patients once they are identified as high risk. Some of the model’s findings offer the kind of insight a hospital can act on: if staff know which patients are likely to skip care for lack of transportation, they can contact them well before their appointments to coordinate rides. Other findings were far less actionable: the model found, for example, that patients were more likely to arrive when their appointments fell on days with nicer weather.

“We can’t just forget about seeing patients in bad weather in New England,” Conroy said. “Some things are more predictable and you can see them coming from further away.”

To reliably identify high-risk patients, researchers will also need to refine their models. Currently, there is little consensus on how best to predict no-shows, which means that the factors the models surface and the patients they flag can vary depending on the datasets they draw from and the model parameters, said Dianbo Liu, the study’s lead author and a machine learning researcher at the Massachusetts Institute of Technology. His team chose to exclude race and race-related data from model training to avoid widening disparities, and concluded that the exclusion did not affect prediction performance, although other models have included these factors. They also had access only to data from 2015 and 2016.
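
The team’s exact ablation procedure isn’t described here, but the general check – train the model with and without a sensitive attribute and compare held-out performance – can be sketched on synthetic data. The feature names, the “sensitive” column, and the data-generating process below are all illustrative assumptions:

```python
import math
import random

rng = random.Random(1)

# Synthetic appointments: the label depends only on prior no-show rate;
# the "sensitive" column is pure noise, standing in for an excluded attribute.
def make_data(n=2000):
    data = []
    for _ in range(n):
        prior = rng.random()
        sensitive = rng.random()
        p = 1 / (1 + math.exp(-(-1.5 + 3.0 * prior)))
        y = 1 if rng.random() < p else 0
        data.append(((prior, sensitive), y))
    return data

def train_and_score(train, test, use_sensitive):
    """Fit a logistic model by SGD on the chosen columns; return test accuracy."""
    k = 2 if use_sensitive else 1
    w, b = [0.0] * k, 0.0
    for _ in range(100):
        for x, y in train:
            x = x[:k]
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1 / (1 + math.exp(-z))
            g = p - y
            b -= 0.1 * g
            w = [wi - 0.1 * g * xi for wi, xi in zip(w, x)]
    correct = sum(
        1 for x, y in test
        if ((b + sum(wi * xi for wi, xi in zip(w, x[:k]))) > 0) == (y == 1)
    )
    return correct / len(test)

data = make_data()
train, test = data[:1500], data[1500:]
acc_with = train_and_score(train, test, use_sensitive=True)
acc_without = train_and_score(train, test, use_sensitive=False)
```

When the excluded attribute carries no real signal, as in this toy setup, the two accuracies come out close – the kind of result that supports dropping the attribute, as the researchers concluded for their model.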

Liu added that health systems historically haven’t had large amounts of easily analyzable data from which to build these models, although federal interoperability rules could change that. At Boston Children’s, for example, more than three-quarters of patient records had at least one missing data element; Liu’s team explored ways to flag and account for incomplete patient data when training and running the model, which they believe improved its predictive performance.
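
The paper’s specific missing-data method isn’t reproduced here, but one common way to flag and account for missing fields – imputing each gap with the column mean while appending a 0/1 indicator so the model can also learn from the missingness itself – can be sketched as follows (the record fields are made up for illustration):

```python
# Hypothetical patient-record layout; None marks a missing value.
records = [
    {"age": 7, "distance_km": 12.0},
    {"age": None, "distance_km": 3.5},
    {"age": 10, "distance_km": None},
]

FIELDS = ["age", "distance_km"]

def column_means(records):
    """Mean of each field over the records where it is present."""
    means = {}
    for f in FIELDS:
        vals = [r[f] for r in records if r[f] is not None]
        means[f] = sum(vals) / len(vals)
    return means

def encode_with_missingness(record, means):
    """Impute missing fields with the column mean and append a 0/1
    indicator per field, so missingness is visible to the model."""
    row = []
    for f in FIELDS:
        v = record[f]
        missing = v is None
        row.append(means[f] if missing else float(v))
        row.append(1.0 if missing else 0.0)
    return row

means = column_means(records)
encoded = [encode_with_missingness(r, means) for r in records]
```

Each encoded row then has twice as many columns as the raw record – a value column and an indicator column per field – which a downstream model can consume directly.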

Liu urged health systems and research groups to work together on these models, potentially adopting a statewide or global model and adjusting it for their specific population.

“The people who originally develop these systems for clinical practice are not the same group of people who do cutting-edge machine learning,” Liu said. “You kind of have to get these people to sit in the same room and help them discuss it.”
