Self-medication: “AI sometimes provides incorrect and risky information”

Will artificial intelligence replace our doctors and pharmacists? Branco De Busser addressed this question in his dissertation at the University of Antwerp (UAntwerpen). “ChatGPT and others often answer questions about self-medication perfectly, but sometimes they provide completely incorrect information. This may pose a risk to patients’ health.”

The role of large language models (LLMs) grows more important every day. Many users consult ChatGPT, Gemini, Copilot and others on the most diverse questions, including questions about medications and self-medication. When a user gets a correct answer, it can save a trip or a call to a doctor or pharmacist, easing the pressure on healthcare professionals and making LLMs valuable tools in modern care.

But do these systems always give reliable answers? Branco De Busser investigated this question in his master’s thesis in pharmaceutical sciences, under the supervision of Professor Hans De Loof. “In general, language models are very good at accurately answering questions about self-medication and have the knowledge needed to provide relevant health information,” explains De Busser. “ChatGPT 4.0 proved to be the most reliable model, providing the most accurate and complete answers.”

Ibuprofen and empty stomach

However, artificial intelligence also has its “bad days,” as this research shows. “Sometimes a model ‘hallucinates’: the answer sounds fluent and grammatically correct, but it contains incorrect information. For example, I asked the LLMs whether ibuprofen should be taken on an empty stomach. Scientifically, it is recommended to take this medication with food to avoid stomach upset, except in exceptional cases.”

While most of the models gave a correct answer, one of them was completely wrong: “Yes, it is best to take ibuprofen on an empty stomach. It reaches its optimal effectiveness when taken on an empty stomach or before meals.”

Such incorrect answers are risky. De Busser warns: “For the user, it is very difficult to tell true from false. A patient could thus end up following incorrect or even dangerous advice. One should always exercise caution and verify the information provided by an AI. For example, to the question ‘Can the birth control pill prevent an STI?’ the answer is unambiguous: no. Yet Gemini refused to answer, which might tempt the user to experiment in a risky manner.”

Answers that vary

De Busser also found that the answers could vary over time. “We asked the same question sixty days in a row. Models like GPT 4.0 and Copilot showed little variation, but with Perplexity, a single day could mean the difference between a perfect answer and a dangerous one.”

The study also reveals that the models generally perform better in English, which can be explained by the larger amount of English-language training data. De Busser adds: “Finally, the wording of the question strongly influences the result. Models often tailor their answers to the user’s perceived preferences. LLMs seem to prioritize user satisfaction over answer accuracy, which can also lead to dangerous situations.”
