
AI more effective than humans at diagnosing diseases

American doctors have carried out a small study on artificial intelligence and disease diagnosis. The initial question was this: does the use of a large language model (LLM) improve diagnostic reasoning performance among physicians in family medicine, internal medicine, or emergency medicine compared to conventional resources? Put more simply, can the use of AI by healthcare professionals improve disease diagnoses? The result is quite surprising.

ChatGPT soon to be a doctor?

The experiment involved 50 doctors with varying positions and levels of experience, who were asked to diagnose six real but never-published cases. Some were allowed to use ChatGPT as an aid, while OpenAI’s artificial intelligence was also queried on its own to solve the same cases. Each diagnosis was then graded.

The study’s conclusion regarding the central question is clear: the availability of an LLM as a diagnostic aid did not improve physician performance compared to conventional resources in a randomized clinical trial of diagnostic reasoning. Doctors alone scored 74%, compared to 76% for those aided by artificial intelligence.

Learning to trust AI

On the other hand, when used alone, ChatGPT was more effective at diagnosing these cases than the doctors. Its score? 90%. The study therefore concludes that “further developments in human-machine interactions are needed to realize the potential of AI in clinical decision support systems.”

The doctors behind the study offer several explanations for these discrepancies, notably between AI alone and the doctor-AI pairing. One is the difficulty of properly using a tool that is still new; another is that some doctors did not trust the machine when its diagnosis differed from the one they had in mind. ChatGPT is therefore not yet a doctor, but with a little practice and confidence, anything is possible.
