The guest
AI in health: “Hippocrator”
Health philosopher Guillaume von der Weid argues that AI will never be more effective than when it respects the boundaries of the human condition.
Artificial intelligence is everywhere. In a hyper-technological world structured by rationality and quantification, data optimization works wonders. And that is precisely the problem: it is so effective that it tends to encroach on the qualitative, the incalculable, the immaterial; in a word, the human.
No calculation will allow an autonomous car to “decide” whether to run over a group of pedestrians or to sacrifice its driver, to correct statistical biases rooted in social prejudice, or to justify the way an algorithm sorts students into academic tracks. These questions come down to principles, that is, to primary, purely qualitative elements. Medicine illustrates this through its intertwining of the quantity of treatment with the quality of care.
Leaving aside ChatGPT’s success on the American medical licensing exam in 2023, AI long ago moved beyond menial stewardship tasks (information, appointment scheduling, biological measurements). It has acquired clinical skills that surpass human expertise: diagnosis, decision support, robotic surgery, personalized treatment, and so on. And not only does it do the “dirty work” as well as the “good work”; it also does the “super-work” that no human could do: round-the-clock “presence”, detection of suicidal risk on social networks, biochemical research, optimal health policy through big-data analysis, and more. In short, the worry today, particularly among insurers, is that no doctor will dare contradict the AI anymore. Have we entered the era of a medicine as effective as it is inhuman?
If the body were a machine, we could rejoice: machines would repair machines. But a body is alive and, above all, when it is human, it is a person. This is why we need not only mechanical treatment but also compassionate care. Care, however, challenges the computational foundation of AI: measure, which means quantification, decision, and moderation all at once.
AI first needs to be given measure in the sense of captured data. Yet AI is incapable of “sensing” a patient. It is strong at combining and deducing, weak at feeling and interpreting. An illness is a lived experience before it is a physiological measurement. The first measure arises from a human encounter, from the “singular colloquy” between doctor and patient.
But AI also depends on measures in the sense of collective decisions. What are the public health priorities? Minor but widespread pathologies, or rare ones? Is it better to give 80 people one more year of life, or to save a baby who may live 80 years? Should we jeopardize the economic future of entire generations to save a few elderly people, “whatever the cost”? Just as an autonomous car, however perfect its driving, will never be able to decide its destination, health AI will never be able to treat patients without treatments being prioritized for it.
Finally, AI poses the problem of measure as moderation. Too much measurement kills measure. Should we read an individual’s entire genome to reveal all their pathological risks, even the most improbable? How far should insurers segment the public, if doing so undermines the solidarity on which insurance is founded? To avoid becoming counterproductive, AI must be tempered by principles external to it.
Thus the effectiveness of AI will never be greater than when it respects the boundaries of human concern: the concern people have for one another, concern for a human condition confronted with suffering, scarcity, and fatality. This is why we will always need women and men who sympathize, decide, and accept their finitude.