Leading Chinese research institutes linked to the People’s Liberation Army used Meta’s publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts.
In a June article reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) main research arm, the Academy of Military Sciences (AMS), explain how they used an early version of Meta’s Llama as the basis for what they call “ChatBIT”.
The researchers used the Llama 13B large language model (LLM) that Meta released in February 2023, incorporating their own parameters to build a military-focused AI tool for gathering and processing intelligence and for providing accurate, reliable information for operational decision-making.
ChatBIT has been refined and "optimized for dialogue and question answering tasks in the military domain," according to the document. It outperformed some other AI models that were roughly 90% as capable as OpenAI's powerful ChatGPT-4. The researchers did not specify how they measured performance or say whether the model had been put into service.
“This is the first time there is substantial evidence that PLA military experts in China have systematically sought and attempted to harness the power of open-source LLMs, particularly those from Meta, for military purposes,” said Sunny Cheung, a research associate at the Jamestown Foundation who specializes in China’s emerging and dual-use technologies, including AI.
Meta openly releases many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users obtain a license from the company.
The terms of the agreement also prohibit use of the models for “military, war, nuclear industries or applications, espionage” and other activities subject to U.S. defense export controls, as well as for the development of weapons and content intended to “incite and encourage violence”.
However, because Meta’s models are public, the company has limited means to enforce these provisions.
In response to questions from Reuters, Meta cited its acceptable use policy and said it was taking steps to prevent abuse.
“Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy,” Molly Montgomery, Meta’s director of public policy, said in a telephone interview with Reuters.
The Chinese researchers include Geng Guotong and Li Weiwei of the AMS’s Military Science Information Research Center and the National Innovation Institute of Defense Technology, as well as researchers from Beijing Institute of Technology and Minzu University.
“In the future, with technological advancement, ChatBIT will not only be applied to intelligence analysis, but also to strategic planning, simulation training and command decision-making,” the document said.
China’s defense ministry did not respond to a request for comment, nor did the institutions or researchers involved.
Reuters could not confirm ChatBIT’s capabilities and computing power, although the researchers noted that its model incorporated only 100,000 military dialogue records, a relatively small number compared with other LLMs.
“It’s a drop in the ocean compared to most of these models (which) are trained with trillions of tokens… It makes me wonder what they actually achieve in terms of different capabilities,” said Joelle Pineau, vice president of AI research at Meta and professor of computer science at McGill University in Canada.
The research comes amid a heated debate within the U.S. national security and technology communities over whether companies such as Meta should make their models publicly available.
In October 2023, U.S. President Joe Biden signed an executive order to regulate AI development, noting that while the technology may bring substantial benefits, it also poses significant security risks, such as the possibility that safeguards could be removed from a model.
This week, Washington said it was finalizing rules aimed at curbing U.S. investments in artificial intelligence and other technology sectors in China that could threaten national security.
Pentagon spokesman John Supple said the Defense Department recognizes that open source models have both advantages and disadvantages, and that “we will continue to closely monitor and evaluate the capabilities of competitors”.
COOKIE JAR
Some observers believe that China’s rapid progress in developing homegrown AI, including the establishment of dozens of research labs, has already made it difficult to prevent the country from narrowing the technological gap with the United States.
In another academic paper reviewed by Reuters, two researchers from the Aviation Industry Corporation of China (AVIC) – which the United States has identified as a company with ties to the PLA – describe the use of Llama 2 for “the training of airborne electronic warfare interference strategies”.
China’s use of Western-developed AI has also extended to domestic security. A paper published in June describes the use of Llama for “police intelligence” to process large amounts of data and improve police decision-making.
In April, the state-run PLA Daily published a commentary on how AI could help “accelerate the research and development of weapons and equipment”, develop combat simulations and improve the effectiveness of military training.
“Can you stop them (China) from getting into the cookie jar? No, I don’t see how you could do that,” William Hannas, a senior analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), told Reuters. In a paper published in 2023, CSET identified 370 Chinese institutions whose researchers had published work on artificial general intelligence, contributing to China’s national strategy to lead the world in AI by 2030.
“There is too much collaboration between the best Chinese scientists and the best American scientists in the field of AI for them to be excluded from developments,” Hannas added.