One in five doctors use ChatGPT in their practice. Is it serious, doctor?

Hypochondriac or not, chances are you have already browsed the web, list of symptoms in hand, trying to figure out which virus or bacterium you have just caught. Medical sites like Vidal or WebMD, institutional sites like Ameli, the Doctissimo forums: as the internet has grown in our lives, the possibilities for self-diagnosis have multiplied. Yes, but. If you have ever tried the exercise, you know that the sleuthing required to select the relevant symptoms and arrive at a definitive, reliable answer is far more complex than it seems. In short, nothing beats the advice of a doctor with ten years of study and even more years of practice behind them.
However, with the arrival of generative AI, and in particular chatbots as advanced as ChatGPT, the question deserves to be put back on the table. Is an artificial intelligence capable of combing every corner of the web to answer questions and solve problems also capable of diagnosing our ills more precisely than a forum? Or even, quite simply, than a doctor?

One in five GPs uses generative AI

According to recent studies on the subject, a growing number of health professionals are using ChatGPT in their practice. A survey of a thousand general practitioners, published on September 17, 2024 in the scientific journal BMJ, reveals that 20% of respondents use generative artificial intelligence tools, such as ChatGPT, Bing AI or Google's Bard, in their clinical practice. Asked for details, 29% of these respondents explained that they rely on these services to generate documentation after an appointment, while no fewer than 28% use them to suggest a differential diagnosis.

Biased and limited tools

So, let us not beat around the bush: inevitably, this development raises its share of doubts and questions, and the authors of the study do not fail to address them in the discussion of their article. First of all, it is important to note that these tools, which, remember, remain in their infancy despite their spectacular development in recent years, have their biases and limitations. On the bias side, for example, a striking Stanford University study published last October revealed that chatbots tend to perpetuate medical myths and racist prejudices, such as "Mediterranean syndrome", the false presumption that populations of foreign origin, and in particular people of African descent, feel pain less than Caucasians. That stereotype was scientifically refuted long ago, yet it persists, and is even still taught in certain schools. AIs rely on nothing more than what humans make available to them, so it is not surprising that they carry the biases we instill in them, biases often far better represented online than the studies that debunk them. It is therefore difficult to find impartial judges among them.

On the limitations side, this time, we can once again mention the fact that these AIs are young and therefore still prone to hallucinations, misunderstandings and lapses of judgment. Nor were they designed specifically to make diagnoses, and they lack the trained senses that doctors develop throughout their careers. Unable to touch, smell, listen or observe, at best the AI must rely on what the practitioner observes and manages to put into words.

Pay attention to confidentiality!

Another area of concern raised by the study: confidentiality. As the authors point out, these AIs are owned by commercial entities that provide very little visibility into how they use the information submitted through their chatbots. While it is a safe bet that no doctor names their patients when doing research, we now know that data triangulation is becoming easier and easier, making it possible to recover a person's identity in the blink of an eye and to compile a complete file on them, including their contact details, their habits, their hobbies, and now their medical data.

To date, because these new uses are still in their infancy, tools like ChatGPT are not considered medical devices within the meaning of European legislation, which means their use is neither prohibited nor regulated. So be careful with the information you provide to generative AIs when you ask them for a diagnosis. While flu symptoms are unlikely to put you in a delicate position, the past has already shown that insurers, like banks, can rely on your data, the famous Big Data collected by third-party services, to set the rate of a loan or the cost of your health coverage. But then, with all these drawbacks in mind, why do doctors still use generative AI in their profession? Do the benefits outweigh the risks, and how far can artificial intelligence go to improve patients' well-being and relieve healthcare professionals?

ChatGPT: the unexpected ally in the face of a system in crisis

It is nothing new, but we still talk about it too little: today, the health system in mainland France is at a breaking point. Faced with a critical situation, medical students are fleeing to other countries, deepening the shortage of doctors and the spread of medical deserts; the number of medicines out of stock is climbing at a frightening rate; two thirds of EHPADs (French nursing homes) are in deficit; the most vulnerable are being abandoned; community medicine and private establishments are neglected in favor of centralization around public hospitals, which are increasingly saturated; and overworked nursing staff are reaching the end of their strength, while suicides, more frequent in these professions, remain a taboo subject.

Why is there a shortage of doctors in France? © Le Monde, YouTube

So let us be clear: it will take much more than a chatbot to get the French health system back on its feet. But the hope is that generative AI like ChatGPT, along with ever more advanced diagnostic tools, will at least partially lighten the burden on caregivers. While conversational AIs have little chance of replacing a flesh-and-blood doctor, studies have shown that ChatGPT can diagnose certain respiratory diseases better than medical trainees, that an algorithm can grade the aggressiveness of a sarcoma more effectively than a biopsy, and that an AI can detect signs of dementia well before the first external symptoms appear.

Whatever happens, these diagnoses will always need to be confirmed by specialists, but they could save precious time for patients and caregivers, limit diagnostic wandering, optimize treatments, and reduce both medical and administrative costs. A 2023 study, indexed on PubMed, suggests that ChatGPT could even help healthcare professionals in their interactions with patients, by helping them provide clearer, more useful and more empathetic responses. In an opinion piece on Medium, emergency physician Josh Tamayo-Sarver describes how he uses the chatbot to explain his decisions to patients and their loved ones, when emotion or lack of knowledge prevents them from hearing reason.

Next step: participate in the debate

In short, the potential benefits are numerous and deserve a nuanced debate on the use of generative AI in medicine. Before accepting this development without flinching, or taking offence because our doctor charges us for a consultation we could have done "all by ourselves in three clicks", let us take the time to weigh the risks for our health and our privacy on the one hand against the benefits for our care, the reduction of caregivers' stress and the facilitation of their practice on the other.

With AI, as with many other things, many of us have tended to let events unfold without intervening, content to be amazed or outraged as the news rolls in, knee-jerk reactions often taking precedence over dialogue. So we invite you to open the conversation now. Talk with your friends, your loved ones, your doctor, your patients, and with the legislators who regulate these technologies and their use. Launch citizen debates, inform yourself, and call for a calm and informed exchange before practices spread and harden. The time to think about how we want generative AI and healthcare to work together is now. It is up to you whether to take part or let things happen. But you will not be able to say: "If I had known, I would have acted."
