Alain Delon, Florent Pagny… how to detect scams generated by AI?


DECRYPTION – Fake videos exploit the image and voice of celebrities to extort money from their fans. Jérôme Mondi, an expert in generative AI, explains how to spot these scams.

“If you see this video, it’s because I’m already dead. I decided to redistribute some of my money to all French people. I’ll give you 100,000 euros if you can’t win in my online casino.” In this video, which has gone viral on social media, Alain Delon promises a fortune to Internet users. But these statements by the actor, who had died a few days earlier, are in reality fake and were generated using artificial intelligence. The ad redirects to a fake site imitating the Google Play Store, the application store for the Android operating system. Internet users are then offered an application called “Casino Delon”, supposedly developed by his daughter Anouchka Delon. But once downloaded, the software steals the user’s personal and banking data in order to extort money from them.

These disturbingly realistic audio and video files are created with deepfake software (“hypertrucages” in French), which can replace one face with another, reconstruct a celebrity’s voice and put false words in their mouth. Once scammers have collected real photos or videos of the victim, it takes only a few steps to generate fake content. This new type of scam is the latest way to steal banking data and personal information, but also to obtain money transfers from duped Internet users.

How to identify this scam?

The development of generative AI tools has gone hand in hand with the mass distribution of this type of video. A few months ago, TikTok was flooded with doctored videos depicting murder victims, including little Grégory, Samuel Paty and Alexia Daval. Only 33% of French people consider themselves capable of distinguishing AI-generated photo or video content from real content, according to an IFOP survey. As these fakes become increasingly realistic, even borrowing the trappings of major traditional media outlets to launch vast disinformation campaigns, is it still possible to separate fact from fiction?

“The challenge is huge. Today, it is possible to convincingly reproduce someone’s voice, and to generate videos from a series of images, including videos with a ‘face swap’ where eye movements, such as natural blinking, are perfectly imitated. In the past, the absence of realistic blinking made it easier to detect deepfakes, but this weakness has been corrected by modern algorithms,” stresses Jérôme Mondi, consultant and trainer in generative AI.

Yet even with these improvements, some flaws can still give a deepfake away. Poorly executed editing “may exhibit abnormal movements, distortions or inconsistencies in shadows and lights, which do not correspond to the rest of the scene,” the expert points out.

Earlier this month, it was the turn of singer Florent Pagny to have his image manipulated by AI. A fan who believed she had a direct, private relationship with him was scammed out of a few hundred euros. “It’s my voice, it’s me speaking, but I never filmed that,” the singer warned. Jérôme Mondi notes several clues that betray the inauthenticity of the video that trapped this fan: “The voice has robotic tones, which indicates that the technology is not yet fully developed for certain voice manipulations.”


A practice punishable by law

Earlier this year, the dissemination of fake pornographic images of global star Taylor Swift sparked a wave of indignation among her millions of fans and across the American political class, prompting lawmakers to take up the issue. Since then, content created using AI and published on major platforms such as TikTok or Instagram must be explicitly labeled as such.

France has also addressed this issue with the law on securing digital space promulgated last May.


It punishes with one year of imprisonment and a fine of 15,000 euros the dissemination of “visual or audio content generated by algorithmic processing and representing the image or words of a person” without that person’s authorization and without an explicit mention that AI was used. The penalty rises to two years of imprisonment and a fine of 45,000 euros when an online communication service, such as a social network, is used. The author of a deepfake of a sexual nature risks three years of imprisonment and a fine of 75,000 euros.

Beyond the technical aspects, Jérôme Mondi insists that “it is essential to exercise common sense and skepticism when faced with situations that seem too good to be true, such as a private relationship with a celebrity.”

It is important to remain vigilant, for example by suggesting a face-to-face video call or asking for proof of the person’s location to help verify the truth of what they are saying. In May, a study by Milan Presse and CSA highlighted the difficulty of combating these fake videos: 79% of respondents felt they were not sufficiently informed to protect themselves from them. “These precautions help protect against manipulation and reduce the risk of falling into the deepfake trap,” the expert assures.
