Mirror of our society: is AI impartial or full of prejudice?

The result is stunning, but interfaces like ChatGPT remain mathematical machines that predict the most probable words or pixels one after the other. “Language models don’t know what they know, or why they know it,” says Jayden Ziegler, product director at AI startup Alembic. “They cannot discern what is biased or not, inclusive or not. So there’s nothing they can do about it.”

Without additional adjustments, a user who asks for an image of a business executive gets photos of graying white men in their forties, wearing suits. “These models act as mirrors of our history and our culture,” comments Joshua Weaver, director of a Texas NGO working on human rights in the justice system. “They are not going to produce results that differ substantially from the information they have stored.”

Imposing values

The danger? “We risk ending up in a vicious cycle where our prejudices feed into AI, which in turn reinforces our preconceived ideas,” notes the lawyer. Technological solutions are limited: completely retraining foundation models on unbiased data would take a long time and be very expensive, and no large-scale corpus of “neutral” information exists.

Companies therefore try to “align” the models with the desired values by imposing rules on them. Most chatbots have thus stopped expressing feelings that they cannot have, and produce more diverse results. “We put a social filter on them,” summarizes Tomás Puig, co-founder of Alembic. “Like people with strong opinions who have learned not to say what they think in public.”
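To make the “social filter” idea concrete, here is a minimal, purely illustrative sketch of imposing rules on a model: a behavioural instruction prepended to every request, plus a crude check on the output. All of the strings and function names are invented for this example; they do not reflect any vendor’s actual rules, and real alignment work (fine-tuning, human feedback, safety classifiers) is far more involved.

```python
# Illustrative sketch only: "aligning" a model by imposing rules on it.
# The rules below are invented for the example and stand in for the much
# richer guardrails that real providers apply.

SOCIAL_FILTER = (
    "You are an assistant. Do not claim to have feelings. "
    "When asked to depict people, show a diverse range of ages, genders and origins."
)

BANNED_PHRASES = ["i feel", "i am sad", "i am happy"]


def wrap_request(user_prompt: str) -> list[dict]:
    """Prepend the behavioural rules to every conversation sent to the model."""
    return [
        {"role": "system", "content": SOCIAL_FILTER},
        {"role": "user", "content": user_prompt},
    ]


def passes_filter(model_output: str) -> bool:
    """Reject outputs in which the model pretends to have emotions."""
    lowered = model_output.lower()
    return not any(phrase in lowered for phrase in BANNED_PHRASES)


if __name__ == "__main__":
    print(wrap_request("Draw me a business executive."))
    print(passes_filter("Here is an image of a diverse group of executives."))  # True
```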

The method showed its limits in February, when Google’s Gemini responded to requests for images of German soldiers from World War II with characters including a black man and an Asian woman.

Under competitive pressure, tech giants are innovating and deploying their AI assistants very quickly. They would love to be able to do “machine unlearning” (the opposite of “machine learning”), notes Sasha Luccioni, a researcher at Hugging Face, a collaborative generative AI platform. “But looking for a technological solution is going in the wrong direction,” she warns.

Learning to detect fake news

The most urgent task is to educate humans about these new tools, machines that give the impression of having consciousness. “A little common sense would be welcome,” emphasizes Joshua Weaver. He also recommends diversifying the engineering teams involved, which are “too homogeneous, made up mainly of young men from similar backgrounds, who do not bring in many different points of view.”

In OpenAI’s latest demo, ChatGPT “reads” its users’ emotions on their faces. But will it be able to correctly interpret the facial expressions of all cultures? Experts point out, however, that generative AI is only in its infancy. Many start-ups are exploring different approaches. Pinecone, for example, specializes in “RAG” (retrieval-augmented generation), which consists of building strictly controlled databases from which the AI assistant draws factual elements.

Ram Sriharsha, the company’s chief technology officer, takes the example of an application in a law firm: “The AI assistant, the language model, is an expert in English, not in law. Imagine it had access to a lawyer who knew all the files; then it could answer any legal question!”
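To make the RAG pattern described above concrete, here is a minimal, purely illustrative sketch: a small controlled store of vetted snippets, a retrieval step, and a prompt that grounds the model in the retrieved facts. The case snippets, the word-overlap scoring and the function names are all invented for the example; Pinecone’s actual API and a real embedding model are not shown.

```python
# Toy illustration of retrieval-augmented generation (RAG).
# A real system would use a vector database and an embedding model;
# here, word overlap stands in for vector similarity.

from collections import Counter

# 1. A strictly controlled document store: only vetted, factual snippets.
KNOWLEDGE_BASE = [
    "Case 2024-17: the contract was signed on 3 March 2024.",
    "Case 2024-17: the penalty clause applies after 30 days of delay.",
    "Case 2023-02: the appeal was dismissed for lack of standing.",
]


def score(query: str, document: str) -> int:
    """Toy relevance score: number of shared words between query and document."""
    q_words = Counter(query.lower().split())
    d_words = Counter(document.lower().split())
    return sum((q_words & d_words).values())


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant snippets from the controlled store."""
    return sorted(KNOWLEDGE_BASE, key=lambda doc: score(query, doc), reverse=True)[:k]


def build_prompt(query: str) -> str:
    """Ground the language model by pasting the retrieved facts into its prompt."""
    context = "\n".join(retrieve(query))
    return f"Answer using only the facts below.\n\nFacts:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    # The prompt sent to the model now contains the lawyer's "files",
    # so it answers from those facts rather than from its training data.
    print(build_prompt("When does the penalty clause in case 2024-17 apply?"))
```

The point of the pattern is the one Sriharsha describes: the language model supplies the language skills, while the answers are drawn from a curated, verifiable source.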
