The personalization of social media has expanded the reach and power of misinformation.
Popularized by the rise of TikTok and its “For You” page — an endless stream of content anchored to users’ interests, inferred from browsing history, engagement and location — the model has been adopted by platforms like Instagram, YouTube and X, which have created their own versions. Instagram began showing recommended posts on its main feed in 2018 and in 2020 introduced Reels, a TikTok-like short-video feature. YouTube launched Shorts, a similar concept, the same year, and X added its own “For You” tab in early 2023.
These developments have transformed the way users consume content on social media, says Sejin Paik, product manager at TrueMedia.org. “No matter who you follow, you’re going to get content based on what their system thinks,” she says.
In this digital environment, false information coexists with factual content, giving rise to deepfakes: hyper-realistic images or videos artificially manipulated to show someone doing or saying something they never did or said. In the run-up to the 2024 US elections, deepfake videos of speeches that were never delivered, photos of Donald Trump’s Secret Service bodyguards smiling after the July attempt on his life, and screenshots of news articles spreading false information about the elections appeared alongside legitimate information, blurring the line between what is real and what is not.
As AI generation technologies advance, becoming easier to use and more accessible, assessing the authenticity of social media posts grows increasingly difficult. An AI detection tool created by TrueMedia aims to help by identifying signs of manipulation in photos and videos posted on social media.
Deepfakes and disinformation
Artificial intelligence expert Oren Etzioni founded TrueMedia in January 2024, motivated by his concerns about AI’s impact in this election year. TrueMedia is a nonprofit organization made up of researchers, engineers and social scientists. Its goal is to create technology that addresses societal issues, an approach Ms. Paik calls “sociotechnology.”
As technologies have become publicly available, artificially generated content has proliferated as a tool for political manipulation, and journalists fear its impact will only grow as it improves.
The “For You” page template gives this more sophisticated misinformation a broader reach, Ms. Paik says. Posts gain traction through algorithms that decide what surfaces, regardless of the accounts behind them. The information in users’ feeds generally matches their interests and beliefs, and the content displayed, real or not, is personalized to generate the likes and shares that expand its reach.
Deepfakes have enormous potential in this environment. They can depict anything from Pope Francis in designer attire to entire fake news bulletins, and their use is growing exponentially: more than 500,000 deepfakes were shared in 2023. As prevalent as such content already is, journalists say the AI revolution is only just beginning.
Detecting deepfakes
Journalists can use TrueMedia’s flagship deepfake detector to identify whether a video or image was created with AI.
The tool is simple: users submit a social media link to the detector, which runs the content through a suite of AI-based detection software created by partner technology companies to determine the probability that the content is artificially generated.
The tool cannot detect all false content, warns Ms. Paik. For example, it struggles with “cheapfakes”: misleading photos or videos created by humans using non-AI editing software. Disinformation spreaders have also begun devising workarounds, such as overlaying deepfakes on real media, to circumvent detection.
Ultimately, as AI grows more powerful, so do the tools that detect it. “We’re a long way from catching them 100% of the time, but this is one of the very smart ways to get closer,” says Ms. Paik. “If people are creating AI-generated deepfakes, we’re going to use AI to counter that.”
Linking detection to journalism
As fake content inevitably continues to flood social media, journalists must not rely solely on detection to combat deepfakes, urges Ms. Paik: they must explore the sources, reasoning and impact of misinformation.
For example, fake AI-generated posts about recent hurricanes in the United States, showing flooded and destroyed communities, have proliferated in social media users’ feeds. Some of those who reposted these images and videos, including politicians, knew they were false; even so, the posts provoked emotional reactions and were used to push inaccurate claims about the government’s disaster response.
More importantly, journalists need to consider why these inaccurate posts become popular, says Ms. Paik, and work to counter these narratives beyond simply verifying whether a video is authentic.
“Saying, ‘Oh, we detected something!’ is not enough,” she says. “Journalists have the power to inform and educate the public. We need to.”
Amritha R Warrier & AI4Media / Better Images of AI / Tic Tac Toe / Licensed CC-BY