This Tuesday, November 5, 244 million Americans will choose the next occupant of the White House, between Democrat Kamala Harris and Republican Donald Trump. And to win this election, so consequential for global geopolitics, almost anything goes. The fake news, deepfakes and other disinformation maneuvers spread by supporters of both camps (one far more than the other, in truth) or by unscrupulous actors hoping to sway the final result have become too numerous to count.
In this morass of disinformation, digital players are trying to provide solutions, but often with mixed, not to say disappointing, results. On the front line against fake news broadcast on its platforms, Meta set up a “War Room” in 2018, a crisis unit packed with screens, clocks and computers, to combat disinformation campaigns during elections. Unfortunately, it proved insufficient to stem the incessant, massive flow of fake news flooding the Web.
Buster.ai, an “antivirus against fake news”
In France, a few startups have tried to tackle this scourge that endangers democracy. Among them is Buster.ai. Founded in 2019 by Julien Mardas and Aurélien Cluzeau, the company relied on artificial intelligence to analyze large quantities of data and determine which claims deviate from official sources. In other words, Buster.ai’s solution acted as an “antivirus against fake news”. Unfortunately, the entrepreneurial adventure ended at the beginning of this year, despite a 2 million euro raise in January 2022.
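To make the approach concrete, here is a minimal sketch of what comparing a claim against an official source can look like, using an off-the-shelf natural language inference model. Buster.ai’s actual pipeline is not public: the model, threshold and labels below are illustrative assumptions, not the company’s implementation.

```python
# Minimal sketch, NOT Buster.ai's pipeline: check whether an official
# source supports or contradicts a circulating claim with an NLI model.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def check_claim(claim: str, reference: str, threshold: float = 0.8) -> str:
    """Label a claim as supported, contradicted, or unverified
    relative to a reference passage from an official source."""
    result = nli({"text": reference, "text_pair": claim}, top_k=None)
    if result and isinstance(result[0], list):  # flatten batch-style output
        result = result[0]
    scores = {r["label"]: r["score"] for r in result}
    if scores.get("ENTAILMENT", 0.0) >= threshold:
        return "supported"
    if scores.get("CONTRADICTION", 0.0) >= threshold:
        return "contradicted"
    return "unverified"

print(check_claim(
    claim="Insulin will now be free in the United States.",
    reference="The company announced no change to insulin pricing.",
))
```

In practice the hard part is retrieving the right reference passages at scale; the classification step shown here is only the last mile.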
Nevertheless, the project allowed Julien Mardas, co-founder and former CEO of Buster.ai, to draw lessons about using technology to fight disinformation, starting with artificial intelligence, which he considers essential if we are to stand a chance against fake news. “There is no choice but to use AI, quite simply because humans cannot index the 3.4 billion articles published every day. But we are in open war. And in a state of war, time and space are the two most important dimensions, along with command, discipline and doctrine. In this context, the more time we take to verify information, the more colossal the damage, the more instability grows, the more people put money aside, the more the economy collapses,” explains the French entrepreneur.
“Fake news is seven times more viral than real news”
Fake news took off in the wake of the attacks of September 11, 2001, before taking on an even more worrying scale with scandals such as “Cambridge Analytica” and Facebook, “WikiLeaks” and Julian Assange, or “Snowden” and the NSA. So many resounding cases that reinforced one observation: data, and therefore information, is the white gold of the digital age. And the current period, marked by geopolitical tensions in several regions of the world, hardly encourages optimism, as online disinformation continues to grow.
“Fake news is seven times more viral than real news,” notes Julien Mardas. “In certain conflicts, such as the one between Israel and Hamas, information from social networks is repeated without thinking and therefore without verification. And sometimes this fake news remains active for several weeks, even several months. Not being able to verify information quickly is a game changer,” he adds.
Ircam Amplify tracks voices created by AI
In a more unexpected register, another technological player is also trying to do its part in the fight against disinformation: Ircam Amplify, a subsidiary of Ircam (Institute for Research and Coordination in Acoustics/Music), which has been commercializing audio innovations for five years. The company built on its work detecting music generated by artificial intelligence and transposed it to another domain: the voice.
Thanks to Ircam’s cutting-edge expertise, its commercial subsidiary set about designing a solution to identify voices created by AI. And at a time when deepfakes are exploding, such technology can prove particularly useful during an election campaign. “Like an antivirus, our software calculates a confidence rate on the detected voices. Today, our reliability rate is 98.5%,” indicates Nathalie Birocheau, co-founder and general director of Ircam Amplify. The margin of error comes from false positives, but such a score is obviously encouraging.
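As an illustration only (Ircam Amplify’s detector is proprietary and far more sophisticated), the sketch below shows the general shape of such a tool: extract acoustic features from a clip and have a binary classifier return a confidence score that the voice is AI-generated. The file names, features and toy training set are all invented for the example.

```python
# Illustrative sketch only: not Ircam Amplify's technology.
# A crude "synthetic voice" scorer built from MFCC features.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def extract_features(path: str) -> np.ndarray:
    """Mean MFCCs as a rough acoustic fingerprint (real detectors
    rely on far richer representations)."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled clips (paths invented for the example).
X = np.array([extract_features(p) for p in ["real_1.wav", "fake_1.wav"]])
y = np.array([0, 1])  # 0 = human voice, 1 = AI-generated
clf = LogisticRegression().fit(X, y)

def synthetic_voice_confidence(path: str) -> float:
    """Return the model's confidence that the clip is AI-generated."""
    return float(clf.predict_proba([extract_features(path)])[0, 1])

print(f"confidence: {synthetic_voice_confidence('suspect_clip.wav'):.1%}")
```

The reported 98.5% reliability is precisely about keeping such confidence scores trustworthy at scale, which is where false positives become the critical metric.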
This technology is of interest not only to the media, for detecting fake news, but also to internal security actors (Ministry of the Armed Forces, gendarmerie, etc.) seeking to identify possible scams or attempts at manipulation. Detecting AI-generated voices, however, is no easy task. “Voice is a very particular medium, difficult to master. Cloning voices, transforming them and detecting them is rather complex. It’s even more difficult than music,” underlines Nathalie Birocheau, before adding: “What’s fortunate is that we are a commercial company with exclusive access to Ircam’s R&D to integrate into our products. The challenge is to get the best product to market quickly. And with our lab-to-market logic, we can release something in three or four months. When we identify a use case, we know how to press the button and get the relevant results from Ircam to put a product on the market.”
Moving quickly is crucial to limit the spread of fake news, especially since the democratization of AI tools is handing cybercriminals new weapons that are very easy to use. “We didn’t think deepfakes would advance so fast,” acknowledges Nathalie Birocheau. “It’s a little scary, but you have to take a step back. History will be written in the coming years; we are only in the first chapter,” she adds. To give itself the means to fight this new form of disinformation, Ircam Amplify is seeking to close a funding round in the coming months.
Large groups also increasingly targeted by disinformation
What can change everything in the political arena can do the same in the economic sphere. Fake press releases can have a disastrous impact on a company’s financial performance. Vinci can attest to this: in 2016, a false press release sent its share price tumbling, wiping 7 billion euros off the French group’s valuation in just a few minutes. Because the hoax was relayed in particular by Bloomberg, the American agency was later fined heavily by the French Financial Markets Authority (AMF). And this is not an isolated case: players like Intel, Pfizer, Fitbit and BlackRock have suffered similar mishaps.
Nor does it necessarily take major resources to shake the markets. Two years ago, a fake but verified account impersonated the American pharmaceutical company Eli Lilly and tweeted that insulin would now be free in the United States. Not only did the group’s market value collapse by $20 billion, but millions of people believed the information was true, even though insulin is a particularly sensitive subject across the Atlantic. “Even if it was false information, the company looks like a bandit,” notes Raphaël Labbé, co-founder and CEO of Wiztrust (formerly Wiztopic), who uses the term “hacktivism” to describe these methods aimed at hacking corporate communication.
With his company, the entrepreneur is trying to address this growing problem by offering a platform for certifying corporate information on the blockchain: a way to fight false press releases designed to damage the reputation and financial performance of large groups. “The risk will only increase. In the early 2010s, only one such hack succeeded each year. From 2015, it rose to two per year. Since 2020, it has been one per month! In ten years, the number of false stories that have successfully manipulated corporate information has multiplied by 12. With a little care, it is now within anyone’s reach,” observes Raphaël Labbé.
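The underlying idea can be sketched in a few lines (Wiztrust’s actual implementation is not public; the registry below is a hypothetical stand-in for an on-chain record): the issuer publishes the fingerprint of the official press release, and anyone can check that a circulating document matches it.

```python
# Minimal sketch of hash-based certification of press releases.
# The set stands in for an on-chain registry; this is not Wiztrust's code.
import hashlib

certified_hashes: set[str] = set()

def certify(document: bytes) -> str:
    """Hash the official press release and record its fingerprint."""
    digest = hashlib.sha256(document).hexdigest()
    certified_hashes.add(digest)  # in practice: written to a blockchain
    return digest

def verify(document: bytes) -> bool:
    """Check whether a document matches a certified fingerprint."""
    return hashlib.sha256(document).hexdigest() in certified_hashes

official = b"Q3 results: revenue up 4%."
certify(official)
print(verify(official))                     # True: matches the certified release
print(verify(b"Q3 results: CEO resigns."))  # False: uncertified, treat as suspect
```

Anchoring the fingerprint on a public blockchain simply makes the registry tamper-evident and auditable by journalists and investors without trusting a single intermediary.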
As the phenomenon grows, more and more groups are now trying to protect themselves from such attacks, which can cause very significant damage, if only to their image. “When there is a major incident, other companies react. We sell during crises, but I would prefer that people buy in prevention mode,” underlines the head of Wiztrust. And to be even more effective, he would not be against creating a consortium bringing together several players addressing different verticals. “We built something interoperable,” indicates Raphaël Labbé. Faced with colossal quantities of fake news, unity could indeed be strength!
“There is no business in disinformation”
However, positioning yourself on this market is not necessarily synonymous with profitability, at a time when the media industry has been in crisis for many years. Consequently, few startups have taken up the fight against fake news. “There are two main reasons: the technical challenge is extremely difficult to address, and there is no business in disinformation. The only way to make money is to weaponize the tools and turn them into mass surveillance tools. But technology must be used to liberate people, not enslave them. In life, you are either a hero or you are in business. You can’t do both,” estimates Julien Mardas.
Raphaël Labbé also recognizes that it is difficult to enter this sector, as critical as it is for democracy. “It’s a complicated issue… There’s a huge trust problem in winning clients. In B2B, there is a real issue of critical mass; we have competitors who have tried to copy us. As for B2C, there is the problem of the economic model. True, there is one company doing well, NewsGuard, but it sells advertising. The challenge for the press today is: who will pay to educate people? It’s a very tough fight. You probably need to attach yourself to an industry with a viable economic model; getting closer to a player like Dashlane could make sense, for example,” he believes.
While waiting to solve this economic equation, we will have to compete in ingenuity to absorb the dizzying flood of fake news swamping minds around the world. In this context, AI looks like the best ally of the rare players engaged in this fight to unmask as many fakes as possible. But little or nothing will replace the most effective method: everyone making the effort to inform themselves properly by varying their sources, verifying sensitive or outlandish claims, and not stopping at sensationalist headlines. An exercise that is not necessarily simple, but essential to preserve our democracy.