Published on November 29, 2024 at 11:44 p.m. / Modified on November 29, 2024 at 11:51 p.m.
4 min. read
ChatGPT burst into our lives two years ago. “Le Temps” explores the tool and its possibilities in a very practical way, while questioning artificial intelligence more generally in a special report
Be careful, AI also means:
- pervasive hallucinations,
- systems that have nothing human about them,
- the risk of saying too much,
- biases that are still present,
- and the risk of dependence.
They are faster, provide more detailed answers, can accomplish significantly more tasks and often impress. But they are still far, very far from being impeccable. Generative artificial intelligence services have progressed, but remain woefully imperfect. When using them, whether to work on texts, create images or analyze data, we must not forget these five pitfalls.
1. Pervasive hallucinations
Definition
Large language model
These models are what allow generative AIs to understand what we say or write and to generate text in natural language. They are trained on very large amounts of data.
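To make the idea concrete, here is a minimal sketch of how such a model completes a sentence by predicting likely next words. It uses the open-source Hugging Face transformers library and the small GPT-2 model; both are illustrative choices and are not tools discussed in this report.

from transformers import pipeline

# Load a small, freely available language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt by predicting one likely token after another.
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])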