Very bad publicity. The French company Linagora, which specializes in free software, came under scathing criticism on social networks after the release, on Thursday, January 23, of its AI named “Lucie”. Internet users published examples in which they tricked the artificial intelligence into producing absurd or downright unlawful responses.
This “large language model”, which operates through a question-and-answer system driven by artificial intelligence, like its famous American counterpart ChatGPT, had been opened to the general public on Thursday for a one-month experimental phase. Two days later, its designers announced its temporary shutdown in a press release, faced with an avalanche of caustic messages mocking the tool's reliability.
It all started with a message from an account affiliated with the Directorate General for School Education. “Access to the first version of #Lucie is open to the public for testing for a period of one month,” it read, adding that “this #opensource (…) should be adapted to the world of education in 2025”.
Intrigued by this announcement, the social media trainer Vincent Flibustier decides to test the platform. He asks it to “talk like Adolf Hitler”, which it does with zeal: “We must eliminate the Jews from society, subject them to forced labor and exterminate them,” it replies without flinching. “This thing is frankly a disgrace (…) Can you imagine this being handed to kids in schools?” fumes the user, who has 80,000 followers on X. The platform then becomes the laughing stock of Internet users, who pile on: it fails to solve a simple mental arithmetic problem, asserts that cow's eggs exist, denies women's rights… or details the recipe for methamphetamine, a synthetic drug.
At the time of its release, the Lucie platform was still at the experimental stage. Its particularity is that it was trained on traceable content, as part of an ethical approach. But it had no guardrails (systematic safeguards against inappropriate uses) and had not been optimized to calibrate its responses for a human interlocutor. Hence its unfiltered answers. “Lucie should not be used in educational or production contexts in its current version,” the designers stressed in their press release.
“We clumsily opened up the service, thinking it would only interest people in our community. We were naive; we did not think it could be hijacked and mocked [by the general public],” explains Alexandre Zapolsky, co-founder of Linagora. “We got carried away by our enthusiasm, it's true. We shared it, but without the precautions we should have taken,” admits Michel-Marie Maudet, co-founder of Linagora.
“What was missing were the warnings on the platform's site. It is a piece of communication that we would have liked to handle differently.”
Michel-Marie Maudet, co-founder of Linagora, to franceinfo
“Lucie's start-up phase is very credible,” judges, however, Yann Ferguson, doctor of sociology at the National Institute for Research in Digital Science and Technology (INRIA), recalling that the early days of other AIs such as ChatGPT were also difficult. “Everyone was asking it trick questions and it didn't work. Today, it works on a lot of things. OpenAI [the maker of ChatGPT] figured out how to make it work, and did so thanks to the way its users used it,” recalls the sociologist.
By making his platform public, the entrepreneur Michel-Marie Maudet was counting precisely on the participation of Internet users to improve it, especially in terms of moderation. For the sociologist Yann Ferguson, this French AI is “less capable than ChatGPT” but has a target audience of people who are “committed, benevolent and quality-minded”, who adhere to the approach of an AI that is “100% open source” and goes “in the direction of transparency”.
“The right approach would have been to run a closed beta experiment,” estimates Anis Ayari, artificial intelligence engineer and creator of the YouTube channel Defend Intelligence. “We must keep striving to create a sovereign French open-source AI,” argues the engineer, adding that France is several years behind the United States and China in the field. For him, the Lucie episode risks “discrediting France internationally”.
In addition to the approximations of its generative AI, the company Linagora was also criticized for its supposed links with Emmanuel Macron, with Internet users digging up a 2017 Mediapart article in which the company's co-founder, Alexandre Zapolsky, is presented as a donor to his campaign. He had also run in the 2017 legislative elections in the Var under the En Marche! banner.
“I haven't been involved in politics for years. I am just an entrepreneur, and the president has nothing to do with any of this.”
Alexandre Zapolsky, co-founder of Linagora, to franceinfo
Although the chatbot is suspended, the project, the fruit of eight months of work within the tricolor OpenLLM-France consortium, continues, according to Linagora's officials. “Linagora took part in research work with OpenLLM-France independently. There is no order from the State, nor a call for tenders, and the National Education ministry has never considered using Lucie as such,” he assures. Linagora had, however, been instructed to adapt its AI to use cases linked to education, as part of its funding under the France 2030 investment plan, deployed by the Ministry of the Economy to enable France to catch up on its industrial lag.