Anthropic's artificial intelligence takes control of computers

Anthropic's artificial intelligence takes control of computers
Anthropic's artificial intelligence takes control of computers

Would you be willing to give control of your computer to an AI? This is the proposal from Anthropic, which announced the release of Claude 3.5 Sonnet, an artificial intelligence model that pushes the limits of the classic virtual assistant. Claude no longer just answers questions or performs tasks in a chatbox: he can interact directly with software installed on a computer, simulating mouse clicks, keystrokes and other actions normally performed by a human user.

An AI in control of your PC

« We are entering a new era where AI can use all the tools that a human uses to accomplish its tasks “, explains Jared Kaplan, scientific director of Anthropic. This innovation is a step towards the creation of “intelligent agents”, capable of acting autonomously in software and for many uses.

Claude can thus be used for all kinds of tasks, ranging from programming to organizing trips. For example, in one demonstration, the AI ​​was tasked with planning a tour of the Golden Gate Bridge at sunrise. After opening a browser, Claude searched for the necessary information and added the event to a calendar. However, the AI ​​omitted important details such as directions to get there. Oops !

While Claude 3.5 Sonnet's capabilities are impressive, they are not without risks. User security is a key concern, as allowing an AI to access all software and files on a computer could open the door to abuse or unforeseen errors. Anthropic recognizes this danger and said it is working on protective measures. “ We believe it is better to give computers access to more limited and relatively safer AI models“, affirmed the company, which emphasizes the importance of monitoring possible problems now.

“Prompt injection” attacks are among the threats identified. This type of cyberattack consists of inserting malicious instructions into a command flow intended for the AI, pushing it to perform actions not intended by the user. Even if Claude is not connected to the internet during his training phases, his ability to interpret screenshots in real time makes him vulnerable to this type of attack.

In addition to unintended risks, malicious use of Claude is also a concern. With the upcoming US election ahead, Anthropic has put systems in place to prevent AI from engaging in harmful activities like creating content for social media or interacting with government sites.

???? To not miss any news on the Journal du Geek, subscribe on Google News. And if you love us, we have a newsletter every morning.

-

-

PREV Chambray demolishes Dijon and gets back on track
NEXT BP abandons oil reduction target