AI chatbots are no match for the real medical world

Key information

  • LLMs struggle to have coherent clinical conversations.
  • AI chatbots have difficulty collecting complete patient histories.
  • Models such as GPT-4 have difficulty accurately diagnosing medical conditions.

A recent study published in Nature Medicine highlights the limitations of AI chatbots in the real medical world. Although these chatbots have demonstrated impressive capabilities in simulated exam environments, they struggle when patient interactions become dynamic and unpredictable.

Researchers at Harvard Medical School and Stanford University have developed a framework called “CRAFT-MD” to evaluate large language models (LLMs) in realistic patient interactions. They found that LLMs, such as GPT-4, struggled to conduct coherent clinical conversations, collect complete patient histories, and accurately diagnose medical conditions.

Limitations of AI chatbots

Despite their success in standardized testing scenarios, these models struggled to navigate the complexities of real-world medical dialogues. Lead author Pranav Rajpurkar of Harvard Medical School emphasized the need for rigorous evaluation before deploying LLMs in clinical settings.

The study suggests that AI chatbots, while promising, still require further development before they can effectively handle the nuances and challenges of real-world patient interactions in healthcare.
