It was still far from dawn when Angel Cholka was awakened by the beam of a flashlight through her window. A police officer was at the door. “Does Madi live here?” he asked. Confused at first, then terrified, Ms. Cholka rushed to her 16-year-old daughter’s room.
Ellen Barry
The New York Times
Ms. Cholka did not know that artificial-intelligence software used by the school district in Neosho, Missouri, was monitoring what Madi wrote on her school-issued laptop.
During the night, Madi had texted a friend to say she intended to kill herself with her anti-anxiety medication. An alert went out to a school official, who called the police. By the time Ms. Cholka and the officer reached Madi, she had already swallowed about 15 pills. They pulled her out of bed and rushed her to the hospital.
About 1,300 miles away, around midnight, the landline rang at a home in Fairfield County, Connecticut, but the parents did not pick up in time. Fifteen minutes later, three police officers were at the door: they wanted to see the couple’s 17-year-old daughter because surveillance software had detected a risk of self-harm.
Her parents woke her and brought her down to the living room so the officers could question her about a sentence she had typed on her phone at school. It quickly became clear that it was a false alarm: the sentence was a line from a poem she had written years earlier. But the visit shook the girl.
“It was one of the worst experiences of her life,” says her mother, who asked not to be named in order to discuss what she calls a “traumatic” episode for her daughter.
Among the AI technologies entering American schools, few raise as many critical issues as those aimed at preventing self-harm and suicide.
This software spread during the COVID-19 pandemic, after many schools issued laptops to students and switched to remote learning.
A federal law requires that these computers be equipped with filters to block certain content. Ed-tech companies – GoGuardian, Gaggle, Lightspeed, Bark and Securly, among others – saw an opportunity to address suicidal and self-harming behavior. They added tools that scan what students type and alert the school if a student appears to be considering harming themselves.
Millions of American students – nearly half, by some estimates – are now subject to such surveillance. The details are disclosed to parents once a year, when they give consent.
Most systems flag keywords; algorithms or human reviewers then determine which cases are serious. During the school day, students may be pulled out of class and screened.
Outside school hours, if parents cannot be reached by phone, police may be sent to students’ homes to check on them.
It is impossible to analyze the accuracy, benefits and drawbacks of these alerts: the data belongs to the technology companies, and records of each subsequent intervention and its outcome are generally kept by school administrations.
According to parents and school staff members, the alerts sometimes allow schools to intervene at critical moments. More often, they make it possible to offer struggling students support before they act on their thoughts.
But the alerts can have unintended, sometimes harmful, consequences. Rights groups say they pose a risk to privacy. The systems are also criticized for bringing students unnecessarily into contact with the police.
Opinions are divided on the mental-health benefits of these tools. False positives are common, wasting staff time and unsettling students. In some districts, home visits outside school hours have proved so controversial that interventions are now limited to the school day.
But some schools emphasize that the software helps with a very difficult task: identifying, in time, children who are suffering in silence. Talmage Clubbs, director of guidance services for the Neosho school district, was reluctant to turn the system off, even during summer vacation, for moral reasons: “It’s hard: if you turn it off, someone could die.”
Breaking a taboo
In Neosho, the alerts – followed by therapeutic interventions at school – are credited with helping to break the taboo surrounding suicide.
From 2014 to 2018, 8 of its 5,000 students died by suicide. Then, for an interval of nearly four years, there were none (there was one death in 2022 and another in 2024). Jim Cummins, the former Neosho superintendent, has no doubt that the technology had something to do with it.
“Did we save one life? Twenty lives? We can’t put a number on it,” he said. But, he added, the statistics on the decline in suicides speak for themselves.
Even if someone goes back over the past six years and tells me it is impossible to prove we saved a single life, I will answer that, indeed, it is impossible. But we are doing everything we can.
Jim Cummins, former Neosho superintendent
The student who died by suicide in 2022 was Madi Cholka, the same girl who had been saved by the late-night police visit in 2020.
During those years, Madi was hospitalized several times and her mother, Angel, took steps to protect her, storing her medications and guns in a safe.
The 2020 alert allowed Ms. Cholka to get Madi to the emergency room and then to a psychiatric hospital an hour away.
The hospitalization did not resolve Madi’s problems. After her discharge, she kept trying to harm herself, but she was careful never to discuss her intentions on her computer again. She died at 17, leaving behind a suitcase packed for yet another hospital stay.
“I’m sorry,” she wrote in a message to her mother.
Despite everything, Ms. Cholka says she is grateful for the alerts, which relieved her of some of the burden during those years of vigilance. She has heard the arguments about students’ privacy and intrusion into families; she brushes them aside.
“I know it’s only because of these alerts that my daughter stayed with us a little longer.”
This article was originally published in The New York Times.