“There is a gap between the costs, which are very significant, and the potential revenues”

Essayist Cory Doctorow in New York City on September 22, 2022. CRAIG BARRITT / GETTY IMAGES VIA AFP

Born in Canada and residing in Los Angeles, California, Cory Doctorow is an essayist, activist, teacher and author (The Internet Con: How to Seize the Means of Computation, Verso, 2023; The Bezzle, Tor Books, 240 pages, not translated). He has chronicled the development of digital technology since the early days of the Internet, on his site Pluralistic and in various media, including, in February, the Financial Times, where he criticized the evolution of large platforms and social networks. In December 2023, he argued in a column that artificial intelligence (AI) is a "bubble," a claim that has fueled growing debate in the industry in recent months. Doctorow doubts that the revenues from the various use cases of large AI models, such as those behind the ChatGPT conversational agent, can cover their costs.

"The AI Interviews" from "Le Monde"

"Le Monde" is launching "The AI Interviews" ("Les entretiens de l'IA"), a series of interviews to shed light on the debate around artificial intelligence, its prospects and the questions it raises. We will regularly publish in this section our discussions with people of varied profiles: leaders of companies in the sector, experts in the fields most affected by this technology, observers, essayists and researchers.

Why do you think AI is a “bubble”?

Artificial intelligence is a bubble because it bears all the hallmarks of one. We see entrepreneurs adding the word "AI" to their products to boost their stock price, without really knowing what this technology will bring them. This is reminiscent of the time when blockchain [a technology used to authenticate actions, such as cryptocurrency transactions] was in vogue.

We also see a lot of investment going into the makers of AI models, which often lose money. The promise is that these companies will convince customers to pay enough to recoup the cost of this software, but they have a hard time explaining how they will do it. Finally, there is the impression that everyone wants to talk about AI in connection with any subject, even ones that seem unrelated, like climate change.


Why do you doubt the business model of large AI models?

There is a gap between the very significant costs and the potential revenues. Many of the AI applications with the greatest revenue prospects are susceptible to what the industry calls "hallucinations," or errors [such as a factually false answer in a generated text]. But no one in AI has a convincing theory of how to eliminate these errors.

For sensitive use cases (health, autonomous driving, etc.), AI companies usually propose adding a human in the loop to verify the decision or the content produced by the software. But the practical and financial appeal of these AIs is, according to their creators, that they are supposed to act much faster than humans. If we need people to review each of their actions, that limits profitability and any possible productivity gains.
