Generative artificial intelligence marks a profound break with earlier systems. Previous AI tools, such as those analyzing mammograms for breast cancer screening, were designed to excel at a single, specific task. The new generation, trained on colossal quantities of data, comes with general-purpose capabilities: it can generate text, images, or sound on demand.
It is precisely this versatility that appeals to doctors: these systems draft post-consultation reports, assist with clinical decision-making, and produce documents that are easier for patients to understand, such as hospital discharge summaries or explanations of therapeutic treatments. Given the problems facing European health systems, the enthusiasm of practitioners and decision-makers who see AI as a key to modernizing medical services is understandable.
However, this seemingly unlimited flexibility raises a fundamental question of patient safety. How can we ensure that a tool not designed for medical use can be deployed without risk in that field? Experts emphasize that, unlike traditional applications, whose reliability for a given task can be assessed precisely, these systems elude conventional safety-assessment methods. Their capabilities, which seem limited only by the user's imagination, make their oversight particularly complex.