
Trust in AI: a double-edged sword

When making diagnostic decisions aided by Artificial Intelligence (AI), radiologists sometimes trust it too much when it flags a specific area of interest on an x-ray. This is essentially the finding of a new study published in the journal Radiology.

Should radiologists blindly rely on the capabilities of Artificial Intelligence (AI) in their diagnostic practice? Not necessarily, according to a recent study published in the journal Radiology.

A study assessing radiologists’ trust in AI

« By 2022, 190 radiology AI software products had been approved by the Food & Drug Administration, » specifies Dr. Paul H. Yi, one of the lead authors of the study published in the journal Radiology, Director of Intelligent Imaging Informatics and Associate Member of the Department of Radiology at St. Jude Children’s Research Hospital in Memphis (Tennessee, USA). « However, a gap between AI proof-of-concept and actual clinical use often manifests itself. To close this gap, fostering appropriate trust in AI software is essential. »

For this multisite prospective study, 132 radiologists and 88 internal medicine/emergency physicians read chest x-rays with AI assistance. Each physician evaluated eight chest x-ray cases accompanied by suggestions from a simulated AI assistant whose diagnostic performance was comparable to that of experts in the field. The cases comprised frontal and, when available, lateral chest x-ray images obtained at Beth Israel Deaconess Medical Center in Boston via the open-access MIMIC Chest X-Ray (MIMIC-CXR) database. A panel of radiologists selected a set of cases simulating real clinical practice.

Local or global explanations of the AI’s advice

For each case, participants were presented with the patient’s clinical history, the AI’s advice, and the x-ray images. The AI provided a correct or incorrect diagnosis with either a local or a global explanation. In a local explanation, the AI highlights the parts of the image it deemed most important. In a global explanation, the AI provides similar images from previous cases to show how it arrived at its diagnosis.
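To make the distinction concrete, here is a minimal sketch in Python of the two explanation styles the study compares: a local explanation that frames the most salient image region, and a global explanation that retrieves similar prior cases. The saliency map, embeddings, and data below are hypothetical stand-ins, not the study’s actual software.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_explanation(saliency):
    """Bounding box (row0, col0, row1, col1) around the most salient
    region -- the 'frame around the pneumonia' described above."""
    mask = saliency >= np.percentile(saliency, 99)  # keep the top 1% of pixels
    rows, cols = np.where(mask)
    return rows.min(), cols.min(), rows.max(), cols.max()

def global_explanation(query_emb, prior_embs, k=3):
    """Indices of the k prior cases whose embeddings are closest to the
    query -- the 'similar images from previous cases' shown to readers."""
    dists = np.linalg.norm(prior_embs - query_emb, axis=1)
    return np.argsort(dists)[:k]

# Hypothetical stand-ins: a saliency map a model might emit for one x-ray,
# and an embedding bank for 500 previously diagnosed cases.
saliency = rng.random((224, 224))
prior_embeddings = rng.normal(size=(500, 128))
query_embedding = rng.normal(size=128)

print("local explanation, box to draw:", local_explanation(saliency))
print("global explanation, cases to show:",
      global_explanation(query_embedding, prior_embeddings))
```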

« These local explanations guide the doctor directly to the affected area in real time, » continues Dr. Yi. « In our study, the AI literally put a frame around the areas of pneumonia or other lung pathologies. » Reviewers could accept, modify, or reject the AI’s suggestions. They were also asked to report their level of confidence in the results and to rate the usefulness of the AI’s advice.

Reduced interpretation times when AI provides local explanations

Using mixed-effects models, two computer science doctoral candidates from Johns Hopkins University in Baltimore (Maryland, USA), co-authors of the study, led the researchers in analyzing the effects of the experimental variables on diagnostic accuracy, efficiency, physicians’ perception of the AI’s usefulness, and “simple trust” (how quickly a user accepted or rejected the AI’s advice). The researchers controlled for factors such as user demographics and professional experience.
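As an illustration of that analysis strategy, below is a minimal sketch, assuming simulated data, of a mixed-effects model fit with Python’s statsmodels: explanation type and AI correctness as fixed effects, experience as a covariate, and a random intercept per physician for the repeated readings. All column names and numbers are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_physicians, cases_each = 220, 8   # 132 radiologists + 88 other physicians, 8 cases each
n = n_physicians * cases_each

df = pd.DataFrame({
    "physician": np.repeat(np.arange(n_physicians), cases_each),
    "local_explanation": rng.integers(0, 2, n),   # 1 = local, 0 = global
    "ai_correct": rng.integers(0, 2, n),          # 1 = AI advice was correct
    "years_experience": np.repeat(rng.integers(1, 30, n_physicians), cases_each),
})
# Invented outcome: review time in seconds, shorter with local explanations.
df["review_time"] = 60 - 8 * df["local_explanation"] + rng.normal(0, 10, n)

# Fixed effects for explanation type, AI correctness, and their interaction;
# experience as a covariate; a random intercept per physician to account
# for the same reader contributing eight readings.
model = smf.mixedlm(
    "review_time ~ local_explanation * ai_correct + years_experience",
    data=df, groups=df["physician"],
)
print(model.fit().summary())
```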

Results showed that reviewers were more likely to align their diagnostic decision with the AI’s advice, and their review time was shorter, when the AI provided local explanations. « Compared to global AI explanations, local explanations yielded better physician diagnostic accuracy when the AI advice was correct, » adds Dr. Yi. « They also increased overall diagnostic efficiency by reducing the time spent considering AI advice. »

Trust in AI: a double-edged sword

When the AI’s advice was correct, the average diagnostic accuracy across readers was 92.8% with local explanations and 85.3% with global explanations. When the AI’s advice was incorrect, physicians’ accuracy was 23.6% with local explanations and 26.1% with global explanations. « When provided with local explanations, radiologists and non-radiologists in the study tended to trust the AI diagnosis more quickly, regardless of the accuracy of the AI advice, » he remarks.

Co-lead author of the study Prof. Chien-Ming Huang, John C. Malone Assistant Professor in the Department of Computer Science at Johns Hopkins University, noted that this reliance on AI could be a double-edged sword, because it risks over-reliance and automation bias. « When we rely too much on what the computer tells us, it’s a problem, because AI isn’t always right, » concludes Dr. Yi. « I think as radiologists using AI, we need to be aware of these pitfalls and remain mindful of our diagnostic models and training. »

Paolo Royan
