A revealing study conducted with 116 parents
Conducted by researchers at the University of Kansas, the study aimed to analyze parents’ perceptions of the reliability of ChatGPT compared with that of doctors. The 116 participating parents, aged 18 to 65, were asked to evaluate health information generated either by a medical expert or by the AI tool.
Striking results
The participants felt that there was little difference between the responses of ChatGPT and those of professionals in terms of:
- Reliability
- Accuracy
- Morality
- Expertise
Even more worryingly, parents who did perceive a difference often leaned in favor of ChatGPT, deeming it more “reliable” and “accurate” than the medical experts.
An influence on behaviors
The study also showed that information provided by ChatGPT directly influenced parents’ decisions on crucial topics like:
- Medication use
- Diet
- Children’s sleep
These findings raise crucial questions about parents’ growing reliance on AI in such sensitive areas.
Why do parents turn to ChatGPT?
Several factors explain this growing confidence in AI tools:
Practicality and accessibility
ChatGPT is available 24/7, providing quick and structured responses. For parents who are often overwhelmed, this accessibility represents a considerable advantage compared to the sometimes long delays in obtaining a medical appointment.
Clear and engaging communication
The responses generated by ChatGPT are written in simple, understandable language. This can give an impression of transparency and trustworthiness, even though the tool lacks a doctor’s contextual knowledge.
A growing distrust of the medical profession
In certain cases, parents express distrust towards health professionals, fueled by media debates on vaccination or the management of health crises. This loss of confidence pushes them to look for alternatives like ChatGPT.
The limits and dangers of this dependence on AI
Although ChatGPT is a powerful tool, it is not without weaknesses, and its unsupervised use can generate significant risks, particularly in the field of child health.
AI “hallucinations”
When ChatGPT lacks accurate data, it can produce erroneous or fabricated responses. Parents who are poorly informed about the tool’s limits may take this flawed information at face value.
The absence of personalization
Unlike a doctor, who tailors advice to a child’s medical history and individual circumstances, ChatGPT provides standardized answers. This can lead to inappropriate, even dangerous, decisions.
A risk of delays in medical consultations
By relying excessively on AI, some parents may delay consulting a healthcare professional, aggravating conditions that require prompt treatment.
How to supervise the use of ChatGPT in child health?
To leverage the benefits of AI while minimizing its risks, several solutions should be considered:
Educate parents about the limits of AI
It is crucial to make users aware of the nature of the information provided by ChatGPT. The AI’s responses should be seen as additional information and not as a substitute for a medical consultation.
Integrate AI into supervised systems
AI could be used in collaborative medical platforms, where the responses generated would be validated by healthcare professionals before being transmitted.
Strengthen dialogue between doctors and parents
A more open dialogue between doctors and parents is essential to restore trust. Healthcare professionals must clearly explain their recommendations and reassure parents of their expertise.
Towards cohabitation between AI and human expertise
The study highlights a double-edged phenomenon: while ChatGPT can be an ally for rapid access to information, it should not replace the skills and experience of healthcare professionals.
The challenge lies in how to integrate these digital tools into a secure and ethical framework. A collaborative approach, combining technology and human expertise, could pave the way for a more responsible and beneficial use of artificial intelligence, particularly in the field of child health.