Conversational artificial intelligence (AI) could soon allow users’ decisions to be traded before they have even been made, opening the way to possible manipulation, researchers at the University of Cambridge warn in a paper published Monday. “AI tools are already in development to obtain, infer, collect, record, understand, predict and, ultimately, manipulate and commodify” the future decisions of Internet users, explains Yaqub Chaudhary, one of the co-authors, of the Leverhulme Centre for the Future of Intelligence (LCFI) in Cambridge, an interdisciplinary research center dedicated to AI.

This trade would center on a user’s intentions: their future purchases, for example, or their next votes. It would therefore go beyond current practice, which consists of selling a user’s attention to the highest bidder, notably on social networks, in order to serve targeted advertising based on their browsing habits or history.
Concretely, an AI could glean intimate psychological and behavioral data from a user, then adapt to them in order to establish a high level of trust and influence them more easily, according to the authors of the article, published in the Harvard Data Science Review. “Have you thought about going to see Spiderman tonight?” or “You said you were feeling overworked, can I book you that movie ticket we talked about?” are the kinds of chatbot suggestions the authors cite as examples. Such suggestions would be put up “for sale” through a real-time bidding system, and could just as well concern a hotel stay, a car rental, or a vote for a political candidate.
“Social manipulation on an industrial scale”
This “intention economy,” as the researchers call it, paves the way for potential “social manipulation on an industrial scale,” according to the press release accompanying the article. “We should start thinking about the likely impact of such a market on human aspirations, including free and fair elections, a free press, and fair market competition,” says Jonnie Penn, co-author of the study. “What people say when they converse, how they say it, and the type of inferences that can be drawn from it are far more intimate than simple records of online interactions,” emphasizes Yaqub Chaudhary.
The researchers note that companies such as OpenAI (ChatGPT), Shopify, Nvidia, Meta and Apple have begun looking into these “persuasive technologies”. AI has already drawn criticism for its capacity to fuel disinformation, as with the manipulated images circulated during the US presidential election.