Beware of “deepfakes” of famous doctors serving commercial scams

Well-known and respected doctors promoting “miracle” cures for diabetes or high blood pressure? Fake advertisements exploiting the image of authority figures in the health field, manipulated with artificial intelligence, are flooding social networks, a surge that experts say is very difficult to contain.

On Facebook and Instagram, these ads promise to cure diabetes without metformin, the drug prescribed as a first-line treatment, even urging patients to abandon a treatment that “will kill them” in favor of obscure so-called “natural” syrups.

These scams verge on endangering lives and are all the more pernicious because they rely on the image of star doctors, such as Dr Michel Cymes in France or Dr Michael Mosley in the United Kingdom, victims of these “deepfakes”, online content generated or modified using artificial intelligence.

“People trust these videos because these doctors have spent time building up an image of credibility, so they are believed, even when they make completely outlandish allegations,” Dr John Cormack, a British GP who collaborates with the British Medical Journal (BMJ) on this subject, laments to AFP.

These “deepfakes” have “really taken off this year,” observes Henry Ajder, a specialist in artificial intelligence, who notes that the ads mainly target a “slightly older” audience by hijacking the image of “television doctors.”

Contacted by AFP in May, Michel Cymes said he had already taken Meta to court to denounce these “scams”.

British doctor Hilary Jones, whose likeness appears in AI-generated videos selling a fake cure for high blood pressure as well as hemp gummies, has hired an expert to track down the ads.

“But even when they are removed, they reappear the next day under another name,” he laments in the BMJ.

– “Conspiracy circles” –

These scams “have gained momentum due to progress in ‘deep learning’” (technology that allows a machine to learn on its own), explains Frédéric Jurie, a lecturer and researcher in computer science at the University of Caen.

The quality of “deepfake” images, audio and video has improved enormously, the specialist notes: “Today we have access to tens of billions of images and we are able to build algorithms that can model everything that can appear in images and regenerate them. This is what we call generative AI.”

More recently, controversial figures, such as Professor Didier Raoult in France, have been victims of “deepfake”.

Barbara O’Neill, an Australian naturopath considered “a health hazard” by her country’s authorities after promoting baking soda against cancer, has become a TikTok star through “deepfake” photos showing her selling pills that supposedly “clean blood vessels”, among other products.

Reached by AFP, her husband, who co-directs the O’Neill Company, deplores the fact that “many people use Barbara’s reputation to sell products that she does not recommend.”

This phenomenon does not surprise Henry Ajder, for whom “these people are models for certain conspiracy circles”.

Some of the fake videos using Barbara O’Neill’s image also veer into conspiracy theory, claiming that the naturopath died (she has not) after discovering a miracle oil sold on Amazon.

Faced with these “deepfakes”, experts are not very optimistic about detection tools: “It’s a game of cat and mouse,” explains Frédéric Jurie. “As a matter of principle, detection cannot improve indefinitely, because if someone found a way to improve detection, the person who wants to make fake images would use that same means to get around it.”

The researcher instead points to “techniques that make it possible to guarantee that content has not been altered, as with messaging, thanks to software that produces digital signatures, like a certificate”.
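As an illustration of the approach Jurie describes, and not code drawn from the article, here is a minimal sketch of signing and verifying a piece of content with a digital signature, using Python’s “cryptography” library and an Ed25519 key pair; the placeholder file contents and key handling are assumptions made only for the example.

```python
# Minimal sketch: a publisher signs content, and anyone holding the public key
# (e.g. distributed via a certificate) can check the content was not altered.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair; the public key would be shared with viewers.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Placeholder for the bytes of a video or image file (assumption for the example).
original_content = b"...original video file bytes..."
signature = private_key.sign(original_content)

# A viewer or platform verifies the signature against the file it received.
received_content = original_content  # swap in tampered bytes to see verification fail
try:
    public_key.verify(signature, received_content)
    print("Content matches what the publisher signed.")
except InvalidSignature:
    print("Content was altered after signing.")
```

If even one byte of the file changes after signing, verification fails, which is the guarantee of integrity the quoted researcher refers to.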

