Artificial intelligence (AI) could become a valuable tool for doctors in detecting patients at risk of suicide. A recent study reveals how an automated system helps identify these patients during routine medical visits.
Led by researchers at Vanderbilt University Medical Center, the study examines how well an AI model can support suicide prevention: VSAIL (Vanderbilt Suicide Attempt and Ideation Likelihood). By analyzing medical record data, the system alerts doctors to patients at high risk, paving the way for targeted interventions.
An urgent public health problem
In the United States, suicide is the 11th leading cause of death, with approximately 14.2 deaths per 100,000 people each year. Rates have been rising for decades, pushing researchers to explore innovative ways to better identify and care for people at risk. Studies also show that approximately 77% of people who die by suicide saw a health professional in the year before their death, often for reasons not directly related to mental health.
These statistics led researchers to create the VSAIL model, in order to fill the gaps in risk detection. By targeting the most vulnerable patients, this system aims to reduce the number of deaths by suicide through early and targeted intervention.
An AI model to target at-risk patients
The VSAIL model uses data from electronic health records to estimate a patient's risk of a suicide attempt within 30 days. Tested in three neurology clinics, it identified patients requiring special attention.
The study compared two delivery methods: interruptive alerts that pop up and interrupt the doctor's workflow, and more passive risk information displayed in the patient's record. Interruptive alerts proved far more effective, prompting doctors to act on them 42% of the time.
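The study does not publish its implementation, but a minimal sketch can illustrate the idea of routing a 30-day risk score into one of the two delivery modes. Everything here is hypothetical: the function names (predict_30_day_risk, route_alert), the threshold value, and the Alert structure are invented for illustration and are not VSAIL's actual API.

```python
# Hypothetical sketch of a decision-support layer around an EHR-based risk model.
# Names and threshold are illustrative only; they are not taken from the study.
from dataclasses import dataclass
from typing import Optional

ALERT_THRESHOLD = 0.92  # assumed cut-off so that only the highest-risk visits alert


@dataclass
class Alert:
    patient_id: str
    risk_score: float
    mode: str  # "interruptive" (pop-up) or "passive" (shown in the record)


def predict_30_day_risk(ehr_features: dict) -> float:
    """Placeholder for the risk model; VSAIL's features and weights are not public."""
    raise NotImplementedError


def route_alert(patient_id: str, ehr_features: dict, study_arm: str) -> Optional[Alert]:
    """study_arm is the delivery mode the clinic was assigned: 'interruptive' or 'passive'."""
    score = predict_30_day_risk(ehr_features)
    if score < ALERT_THRESHOLD:
        return None  # low-risk visits generate no prompt at all
    # High-risk visits are surfaced, but how they are surfaced depends on the study arm.
    return Alert(patient_id, score, mode=study_arm)
```

The point of the sketch is that the model and the delivery mechanism are separable: the same risk score can be shown as an interruptive pop-up or as a quiet note in the chart, which is exactly the comparison the study made.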
A selective approach for better prevention
The system flagged about 8% of visits, targeting the highest-risk patients without overwhelming doctors. This selective approach facilitates its integration into often overloaded medical environments.
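One way to picture this selectivity is as a threshold calibrated so that only about 8% of visits are flagged. The sketch below shows that idea with simulated scores; the real score distribution and calibration procedure are not published, so the data and helper name here are assumptions.

```python
# Hypothetical sketch: choosing a risk cut-off so roughly 8% of visits are flagged.
import numpy as np


def threshold_for_flag_rate(historical_scores: np.ndarray, target_rate: float = 0.08) -> float:
    """Return the score cut-off that flags about `target_rate` of visits."""
    # Flagging the top 8% means cutting at the 92nd percentile of past scores.
    return float(np.quantile(historical_scores, 1.0 - target_rate))


# Usage with simulated scores (the real distribution is not public):
rng = np.random.default_rng(0)
scores = rng.beta(2, 8, size=10_000)  # skewed toward low risk, as clinical scores usually are
cutoff = threshold_for_flag_rate(scores)
print(f"cut-off={cutoff:.3f}, share of visits flagged={(scores >= cutoff).mean():.1%}")
```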
The researchers emphasize that this approach concentrates efforts on the most vulnerable patients while avoiding universal screening, which would be difficult to implement.
The drawbacks of interruptive alerts
Although interruptive alerts are more effective, their frequency risks causing alert fatigue among physicians. The study's authors call for a balance between effectiveness and the burden placed on caregivers.
They also suggest that similar systems could be tested in other medical specialties, broadening the scope of suicide prevention.
Promising results for the future
Of the 7,732 consultations analyzed, 596 alerts were triggered. No cases of suicide attempts were recorded within 30 days of the alerts, although further studies are needed to confirm these findings.
Researchers believe that AI, combined with well-designed alerts, could become an essential tool for improving suicide detection and prevention in healthcare settings.