A new study has demonstrated that artificial intelligence can help doctors identify patients at risk for suicide during regular medical appointments, potentially offering a new tool in prevention efforts. The research, published today in JAMA Network Open, found that AI-driven alerts prompted doctors to conduct suicide risk assessments in 42% of flagged cases – roughly ten times the rate achieved by a passive alternative tested in the same study.
The system, developed at Vanderbilt University Medical Center, was designed to make suicide risk screening targeted and workable in busy clinical settings. Rather than attempting to screen every patient, the AI model flagged just 8% of visits for additional assessment, keeping the workload feasible for real-world medical practices.
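The study does not publish VSAIL's internals, but the core idea of selective screening can be sketched as a threshold rule over model risk scores: only visits whose predicted risk clears a cutoff calibrated to flag roughly 8% of visits trigger an alert. The function names, score distribution, and cutoff logic below are illustrative assumptions, not the Vanderbilt implementation:

```python
# Illustrative sketch only – VSAIL's actual model, features, and thresholds
# are not described at this level in the study.
import numpy as np

def choose_cutoff(historical_scores: np.ndarray, flag_rate: float = 0.08) -> float:
    """Pick a risk-score cutoff so roughly `flag_rate` of visits get flagged.

    flag_rate=0.08 mirrors the ~8% of visits the study reports flagging.
    """
    # The (1 - flag_rate) quantile of past scores flags the top 8% of visits.
    return float(np.quantile(historical_scores, 1.0 - flag_rate))

def should_flag(visit_risk_score: float, cutoff: float) -> bool:
    """Flag a visit for a focused screening conversation if it meets the cutoff."""
    return visit_risk_score >= cutoff

# Demo with synthetic scores skewed toward low risk, as real populations are:
rng = np.random.default_rng(seed=0)
historical = rng.beta(2, 30, size=10_000)
cutoff = choose_cutoff(historical)
print(f"cutoff={cutoff:.3f}, flagged={(historical >= cutoff).mean():.1%}")
```

The appeal of this design, as the researchers frame it, is that the cutoff directly controls how much extra screening work a clinic takes on.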
“Most people who die by suicide have seen a health care provider in the year before their death, often for reasons unrelated to mental health,” explains Dr. Colin Walsh, associate professor of Biomedical Informatics, Medicine and Psychiatry at Vanderbilt. “But universal screening isn’t practical in every setting. We developed VSAIL to help identify high-risk patients and prompt focused screening conversations.”
The study tested two different approaches in neurology clinics: pop-up alerts that interrupted the doctor’s workflow versus a more passive system that simply displayed risk information in the patient’s electronic chart. The research team found that the more assertive pop-up alerts were significantly more effective, leading to risk assessments in 42% of cases compared to just 4% with the passive system.
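As a rough sketch of the difference between the two arms, a decision-support layer might deliver the same flag either as a workflow-blocking pop-up or as a passive note in the chart. The alert modes and dispatch function below are hypothetical; the paper does not describe its system at this level of detail:

```python
from dataclasses import dataclass
from enum import Enum, auto

class AlertMode(Enum):
    INTERRUPTIVE = auto()  # pop-up the clinician must acknowledge to continue
    PASSIVE = auto()       # risk information shown in the chart, no interruption

@dataclass
class FlaggedVisit:
    visit_id: str
    risk_score: float

def deliver_alert(visit: FlaggedVisit, mode: AlertMode) -> str:
    """Hypothetical dispatch – the study's two arms differed only in delivery."""
    if mode is AlertMode.INTERRUPTIVE:
        # Interruptive arm: risk assessments followed in 42% of flagged cases.
        return (f"POPUP: visit {visit.visit_id} flagged "
                f"(score={visit.risk_score:.2f}); acknowledge to continue")
    # Passive arm: risk assessments followed in only 4% of flagged cases.
    return f"CHART NOTE: risk details available for visit {visit.visit_id}"

print(deliver_alert(FlaggedVisit("V123", 0.45), AlertMode.INTERRUPTIVE))
```

The trade-off the researchers describe later – interruptive alerts work better but risk alert fatigue – lives entirely in that branch.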
The stakes are particularly high, as suicide rates have been climbing in the United States for a generation. Current estimates indicate that suicide claims 14.2 lives per 100,000 Americans annually, making it the nation’s 11th leading cause of death. Previous research has shown that 77% of people who die by suicide had contact with primary care providers in their final year.
“The automated system flagged only about 8% of all patient visits for screening,” Walsh noted. “This selective approach makes it more feasible for busy clinics to implement suicide prevention efforts.”
The six-month study involved 7,732 patient visits and generated 596 screening alerts – about 7.7% of visits, in line with the roughly 8% flagging rate. Importantly, medical records showed no suicidal ideation or suicide attempts among patients in either study group during the follow-up period.
While the interrupting alerts proved more effective at prompting screenings, researchers acknowledged potential concerns about “alert fatigue” – when doctors become overwhelmed by frequent automated notifications. “Health care systems need to balance the effectiveness of interruptive alerts against their potential downsides,” Walsh said. “But these results suggest that automated risk detection combined with well-designed alerts could help us identify more patients who need suicide prevention services.”
The research team suggests their approach could be adapted for use in other medical settings, potentially creating a wider safety net for identifying at-risk patients during routine care. As health care systems increasingly adopt AI tools, this study provides evidence that thoughtfully designed alerting can help address one of medicine's most difficult preventive care problems.