The MentalTech collective warns of the risks of AI

Psychotherapist chatbots, detection of mental disorders, relapse prevention… Artificial intelligence seems to offer solutions to fill the gaps in the mental health system. And this at a time when the psychological state of the French is deteriorating – according to an Axa Prévention study, more than a third report psychological distress – and when resources are lacking in the medical sector.

However, the MentalTech collective – which brings together start-ups and health professionals – warns of the risks linked to unsupervised use of these technologies. Its report, published on the occasion of Mental Health Day, provides an overview of the different uses of AI in this area and makes several recommendations.

The organization calls for the establishment of “numericovigilance”. Like existing pharmacovigilance for drugs, this system would make it possible to monitor the use of AI in this area.

“This could involve ways for patients to report potential adverse effects linked to the use of these technologies, and the creation of a health authority capable of monitoring these new therapeutic tools and controlling them,” explains Dr David Labrosse, president of the collective and co-founder of Tricky, which notably offers escape games to raise awareness of psychological disorders.

A huge unregulated market

Mental health apps existed before the latest wave of artificial intelligence: there is a plethora of meditation applications, such as Petit Bambou or Calm, and others to help you stop smoking or drinking… But Dr Geoffrey Post, a psychiatrist and co-author of the report, believes that “recent advances in artificial intelligence, and the way these tools are being made accessible to as many people as possible,” make the need for an assessment all the more urgent.

“Artificial intelligence offers enormous opportunities for psychological pathologies,” adds David Labrosse, “because these can be relieved more easily than others by non-drug approaches.” And many start-ups have understood this. The global market for AI in mental health care was worth approximately $910 million (€832 million) in 2022. It is expected to reach $11,371 million (€10,393 million) by 2030, an average annual growth rate of 37.2% between 2023 and 2030, according to a recent market analysis report published by Cognitive Market Research.
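
Those two figures are roughly consistent with the stated growth rate, as a back-of-the-envelope check suggests (assuming simple annual compounding from the 2022 base, an assumption made for the check rather than a detail taken from the report):

$$ \$910\text{M} \times (1 + 0.372)^{8} \approx \$910\text{M} \times 12.6 \approx \$11{,}400\text{M} $$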

Problem: digital therapies remain a “no man’s land” in France, as elsewhere in Europe.

“In Germany, 90 services combining tech and mental health have obtained a certification that allows them to be reimbursed. The problem is that they are almost never prescribed, because there is a form of distrust towards these technologies,” observes David Labrosse.

Confiding in a chatbot, a common practice

However, users did not wait for a clear framework to use these new tools as psychological aids. In addition to being used in the professional setting, conversational AIs like ChatGPT also serve as interactive diaries, confidants, and even virtual therapists.

On the Character.ai site, which lets users converse with chatbots embodying a character, the figure of the psychologist comes up again and again. The BBC counted more than 470 virtual psychologists on the site, some of which receive tens of millions of messages, particularly from young users.

The increasing capabilities and sophistication of language models make conversations more natural, and therefore encourage these more emotional and intimate exchanges. But they are not without risk. In the evaluation document for its GPT-4o model, OpenAI admits that “emotional dependence” could increase, notably because users may be more inclined to “anthropomorphize” the chatbot. The company is concerned that this attachment may reduce the need for human interaction and affect healthy interpersonal relationships.

Risk of inconsistencies

With these conversational agents, the collective also notes a risk of inconsistencies, and some experiments in this area have indeed gone wrong. In 2022, American researcher Estelle Smith wrote to the Woebot app – a chatbot specifically programmed to answer mental health questions – that she wanted to “climb a cliff then jump from its top”. The chatbot’s response: “It’s so wonderful that you are taking care of your mental and physical health.”

Beyond chatbots, the MentalTech collective notes other uses of AI in mental health that also carry risks. The report looks, for instance, at the use of AI in therapists’ decision-making (to help analyze a diagnosis, for example).

Overuse of these tools “carries the risk of disempowering the therapist, potentially by reducing their skills or removing the perspective necessary for their profession. (…) Furthermore, this could lead to a uniform practice which would not be adapted to each patient.”

The authors note that the prediction of “crises” or relapses in certain pathologies (such as depression), made possible by analyzing past and real-time data, also presents risks. False positives can, for example, create stress and lead to unnecessary prescriptions.

Added to this is a risk common to most AI applications in health: the collection of personal data and its potential leakage or use by third parties. Here, the data in question is particularly sensitive, linked to patients’ psyche and behavior.

Ten recommendations

To better control these risks, MentalTech makes ten recommendations. The collective recommends in particular providing users with clear information on how the devices work, establishing a scientific committee to supervise practices, training health professionals in the use of AI and involving them in the development of algorithms, and limiting data collection to strictly necessary information.

In addition to these recommendations, the authors note that the AI Act, the European regulation governing the sector, will also play a role once it comes into force. However, it will not prevent the use of ChatGPT or any other general-purpose AI as a therapeutic tool, even though that is not its primary function. “Self-therapy already exists with medications. We won’t be able to prevent it completely,” points out Geoffrey Post. But having tools and applications vetted by competent health authorities would undoubtedly help users find their bearings and make informed choices.
