Zero-day flaws represent one of the most daunting challenges in cybersecurity. These vulnerabilities, unknown to software vendors and therefore unpatched, give attackers a window to infiltrate systems before patches and defensive measures can be deployed. What if AI could help developers spot them before attackers do?
We already know that AI, through large language models (LLMs), is proving increasingly useful for coding, assisting developers not only in writing lines of code but also in finding potential bugs.
What if AI could also be trained to automatically detect zero-day flaws in existing code? This is the question Google's Big Sleep team, in collaboration with Google DeepMind, set out to answer. In a recent report, the team demonstrated the full potential of AI in this area by discovering a new vulnerability in SQLite, a widely used open source database.
An outgrowth of the earlier Naptime project, the Big Sleep team has developed an AI agent capable of assisting security researchers in detecting vulnerabilities. Last October, this agent identified an exploitable security flaw in SQLite: a stack buffer underflow. The discovery is all the more remarkable because it was made before the official release of the affected code, sparing users any impact.
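To make the vulnerability class concrete, here is a minimal, hypothetical C sketch of a stack buffer underflow. This is not the actual SQLite bug, only the generic pattern: an index that is allowed to run below zero, so reads and writes land before the start of a stack buffer.

```c
#include <string.h>

/* Minimal sketch (not the SQLite flaw): trimming trailing spaces
 * with an index that can run below zero accesses stack memory
 * located *before* the start of the array. */
static void rtrim(char *buf, int len)
{
    int i = len - 1;
    /* BUG: no `i >= 0` guard, so an all-space buffer drives i to
     * -1, -2, ... and buf[i] underflows the stack buffer. */
    while (buf[i] == ' ')
        buf[i--] = '\0';
}

int main(void)
{
    char word[8];
    memset(word, ' ', sizeof word);  /* all spaces: triggers the underflow */
    rtrim(word, (int)sizeof word);
    return 0;
}
```

Compiled with AddressSanitizer (`-fsanitize=address`), this kind of defect is reported as a stack-buffer-underflow the moment the out-of-bounds access occurs.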
AI methodology and approach
The success of the Big Sleep agent relies on the use of large language models (LLMs) to analyze source code and identify patterns that may contain vulnerabilities. Rather than searching for vulnerabilities at random, the agent focuses on analyzing variants of already known vulnerabilities, a method called “variant analysis.” Given information about recent patches or code changes, the agent can target areas likely to contain similar vulnerabilities that have not yet been fixed.
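A minimal sketch of the idea behind variant analysis, using hypothetical C code rather than Big Sleep's actual inputs: a function already corrected by a recent patch, and a structurally similar sibling that an agent primed with that patch might flag as an uncorrected variant.

```c
#include <stddef.h>
#include <string.h>

#define NAME_MAX_LEN 64

/* Already patched: a bounds check was added in a recent commit. */
void set_display_name(char *dst, const char *src, size_t n)
{
    if (n >= NAME_MAX_LEN)          /* the fix introduced by the patch */
        n = NAME_MAX_LEN - 1;
    memcpy(dst, src, n);
    dst[n] = '\0';
}

/* Unpatched variant: same copy pattern, same missing check. An agent
 * primed with the patch above would search the codebase for this
 * shape and flag it as a likely sibling of the fixed bug. */
void set_login_name(char *dst, const char *src, size_t n)
{
    memcpy(dst, src, n);            /* no bound on n: candidate variant */
    dst[n] = '\0';
}
```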
This approach is effective at detecting vulnerabilities that traditional techniques, such as fuzzing, do not always manage to identify. As a reminder, fuzzing consists of injecting random or malformed data into a program to trigger errors. The problem is that this approach lacks subtlety and therefore misses many vulnerabilities. AI, on the other hand, can analyze code with deep contextual understanding, spotting flaws that are difficult to detect by conventional means.
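For context, this is roughly what a fuzz target looks like. The entry point name and signature below follow the real libFuzzer convention; `parse_record` is a hypothetical stand-in for the library function being exercised.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical function under test: stands in for whatever parser
 * the real target would exercise (e.g. a database API entry point). */
static int parse_record(const uint8_t *data, size_t size)
{
    /* Toy logic: a real target would call into the library here. */
    return (size > 0 && data[0] == 0x7f) ? 1 : 0;
}

/* libFuzzer entry point: the engine calls this repeatedly with
 * mutated inputs, watching for crashes and sanitizer reports.
 * Coverage feedback steers the mutations, but the engine has no
 * semantic understanding of the code, which is why deep,
 * state-dependent bugs can survive extensive fuzzing. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_record(data, size);
    return 0;
}
```

Built with `clang -fsanitize=fuzzer,address`, the engine generates millions of inputs per run, yet bugs guarded by complex conditions may never be reached, which is exactly the gap the LLM-based approach aims to fill.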
Impact and outlook
In other words, AI is poised to be a game changer in the fight against software vulnerabilities and zero-day flaws. By identifying them before the code is even released, defenders gain a head start on attackers, reversing the usual dynamic. That assumes, of course, that the AI is used upstream of any deployment and by the “good guys”: the same capability would let cyberattackers scan existing open source code for zero-day flaws. It is therefore unlikely that Big Sleep will be made broadly available in the near future.
Of course, Big Sleep is still only an experimental project. But it paves the way for broader use of AI to strengthen software security, at a time when CISOs and CIOs are struggling with the exponential growth of software vulnerabilities, which makes day-to-day patch management ever more unmanageable and multiplies the entry points into information systems.