In this year, when almost half of the world’s population was called to the polls in national elections, one vote was especially anticipated by observers of disinformation: the American presidential election.
Of course, the fact that the United States is our neighbor and the main Western power has a lot to do with it. But this was also the first American election of the generative artificial intelligence (AI) era, and it was expected to offer us a foretaste of how this technology could affect democracy. At least, that’s what we had been hearing from Silicon Valley.
With the presidential election behind us, we can now say with a fairly high degree of certainty that generative AI has not even come close to tipping the scales.
We did see a presidential candidate using AI-generated images as a communication tool, as well as a propaganda video, probably Russian, that was wrongly labeled a deepfake.
We also saw numerous photo, audio and video montages made with AI tools circulating, often for caricature or parody purposes.
This photo published by Donald Trump on X was generated by AI.
Photo: X / @realdonaldtrump
However, this does not mean that everything is rosy.
A new research report from the Institute for Strategic Dialogue (ISD) analyzed posts whose authors questioned whether a given piece of election-related content published online was the product of AI. In 52% of cases, Internet users got it wrong.
The report is based on a random sample of 300 posts on X, Reddit and YouTube. That is far from an exhaustive picture, but it gives a rough idea of the situation.
According to Isabelle Frances-Wright, co-author of the report, the inclination of many Internet users to see AI where there is none shows, among other things, their tendency to mistake their wishes for reality.
Often the content people were discussing was not generated by AI, but it was the specter of AI that had an impact. AI gives people the opportunity to deny reality as they already do, but in an even more intense way. If they see content that contradicts their political beliefs or ideology, they can simply say it’s fake,
explains the researcher, who is also director of research on technology and society at the ISD’s American branch.
Another interesting statistic: 22% of the posts studied in the report concerned the accuracy of information provided by chatbots or voice assistants.
Shortly after the first assassination attempt against Donald Trump, for example, the author of one post complained that Meta’s language model acted as if it had never happened. Internet users then claimed that Meta was trying to censor information with its AI.
What actually happened was that Meta’s AI was not connected to the internet. It therefore did not have access to up-to-date information on current events, such as this assassination attempt.
This text was initially published in the Décrypteurs newsletter of November 26. Subscribe to receive exclusive content like this every week.
A poorly equipped public
The report also reveals that users often rely on inappropriate strategies to determine whether content is AI-generated or not.
In cases where people got it wrong, 58% used “outdated or incorrect” detection strategies or AI detection tools with known weaknesses.
As Décrypteurs explained in a previous article, these tools inherently have a certain error rate. What’s more, false positives can fuel misinformation just as much as a genuinely AI-generated image.
This is what happened with an image of a Kamala Harris rally in Detroit that was wrongly identified by several Internet users – and by Donald Trump himself – as having been generated by AI.
A screenshot from AI detection software estimating there was a 92% chance the image was artificial has been viewed more than 1.7 million times on X and has continued to circulate despite numerous fact-checking articles on the subject.
According to Isabelle Frances-Wright, Internet users often follow false leads, relying on supposed visual flaws in people’s hands or fingers in images suspected of being AI-generated.
Reflections of light, the whites of the eyes, reflections on the eyes and the way letters are written are also popular targets.
For the researcher, learning to recognize AI-generated content on social networks comes down first and foremost to education, but the fight is far from won.
As content becomes more and more sophisticated, people will be increasingly confused and likely to make poor judgments, especially if they focus only on the content itself. From a media literacy point of view, I would really encourage people to take a step back and ask questions about sources,
says Isabelle Frances-Wright.
I hope that media education will solve this problem, because in reality, it is the only possible solution.