JFrog Discovers CVE Level 10 for Use of Large Language Models (LLM)

JFrog Discovers CVE Level 10 for Use of Large Language Models (LLM)
JFrog Discovers CVE Level 10 for Use of Large Language Models (LLM)

JFrog Discovers Level 10 CVE Threat Related to Use of Large Language Models (LLM)

The JFrog Security Research Team will announce new findings that constitute a CVE Level 10 threat (remote, possibly unauthenticated, trivial code execution). These results relate to the use of large language models (LLM), which are increasingly used in the AI ​​landscape, especially after the launch of ChatGPT. LLMs are now among the top 14% of emerging technologies.

These findings specifically revolve around a zero-day vulnerability that JFrog’s security research team recently discovered and disclosed in a popular ML library, the Vanna.AI library. It offers a text-to-SQL interface to users – which presents a remote code execution vulnerability via prompt injection techniques known as CVE-2024-5565. This attack can be used to bypass “pre-prompting” protection measures built into LLMs, which can – for example – force an LLM to return sensitive or damaging information that it is not supposed to share, which could have a far-reaching negative impact.

Due to the recent rise of AI and ML, many developers use LLM prompts and link them to various resources (databases, APIs, code generators, etc.), which – as the study shows from JFrog – can be extremely dangerous and put organizations at high risk of a remote code execution attack giving full access to the machine hosting the LLM. This can lead to intellectual property theft (stealing ML models) or further infection of the organization’s systems.

When asked about this vulnerability, Shachar Menasche, Sr. Director of Security Research at JFrog, said:

“This finding demonstrates that the risks of widespread use of Gen/LLM AIs without appropriate governance and security can have dramatic implications for organizations. The dangers of prompt injection are not yet well known, but they are easy to implement. Companies should not view prompt injection as a foolproof defense mechanism and should employ more robust mechanisms when interfacing LLMs with critical resources such as databases or software. dynamic code generation.”

-

-

PREV For Apple fans, here is a selection of products on sale on Amazon!
NEXT This Xiaomi router which sees its price cut in half will help you improve your Wi-Fi signal