The wave of generative AI that has been sweeping over the past two years is a game-changer in terms of cybersecurity. Cybercriminals have a plethora of new tools at their disposal to improve their existing techniques and develop new ones. Not to mention that the rapid adoption of AI in organizations creates new vulnerabilities. ICTjournal discussed the evolution of the threat and the means to respond to it with Trend Micro experts during a webinar held on December 4.
Productivity gains, deepfakes and prompt injection
First to speak, Dr Vincenzo Ciancaglini, Senior Threat Researcher at Trend Micro, wanted to demystify the use of AI by hackers, recalling that the latter did not wait for the current boom to adopt this technology, for example to optimize attempts to discover passwords. Regarding generative AI, the specialist explained that, for the moment, we have not observed sophisticated malware produced by AI.
On the other hand, hackers have appropriated these tools to increase productivity, for example for writing personalized phishing emails or debugging their code. The same applies to state actors, who exploit generative AI for social engineering or for the collection of information on their targets.
Vincenzo Ciancaglini also addressed the use of deepfakes, which has become more widespread. The technology is used to target both individuals (deepnude blackmail, false kidnapping using deepaudio) and companies, notably with false interlocutors by videoconference to extract money from a manager, or false candidates for a not to gain access to the organization’s network.
Added to this, the growing use of AI in organizations exposes them to new attacks. Vincenzo Ciancaglini notably described prompt injection techniques, which slip malicious instructions or links into prompts and other content powering generative AI.
Ways to defend yourself
Faced with the spread of deepfakes targeting individuals, Cyrille Larrieu, Senior Pre-Sales Engineer at Trend Micro, underlined the importance of human resources. He recommended being skeptical of your interlocutor, challenging them and being extra vigilant as soon as a question of money is raised. He also described technical tools available to individuals, such as deepfake detectors on mobile (Trend Micro Check) or on PC (Deepfake Inspector), which alert users as soon as there are signs of artificial content.
Regarding organizations employing generative AI solutions, Cyrille Larrieu described four risks: lack of visibility into these uses, exposure of sensitive data, manipulation of models and compliance. For the specialist, the Zero Trust approach and architectures make it possible to address these risks, provided they are augmented with additional controls. Thus, for example, Trend Micro’s Zero Trust platform (Vision One) today embeds access controls to public or private GenAI applications, filters on the prompt side and the response side, as well as injection prevention techniques, which organizations can configure.
The specialist also underlined the importance of dashboards and statistical tools to monitor the use of generative AI in the organization and communicate on attacks that may have been thwarted.
GenAI agents and supply chain
During the Q&A session, Trend Micro specialists also discussed the risks associated with the emergence of agentic features in generative AI solutions. “The risk is a meteoric adoption, as we experienced with ChatGPT, with multiple specialized agents to orchestrate, each being likely to make mistakes or be manipulated,” warned Vincenzo Ciancaglini.
The expert also noted the risks linked to the profusion of generative AI models – several tens of thousands on Huggingface – and therefore the use of more or less reliable or subtle agents. A danger therefore for the GenAI supply chain, similar to what we experienced with popular Javascript and Python libraries.
For more information, the entire webinar is available on video:
Related News :