US plans to restrict China’s access to the AI software behind applications such as ChatGPT
A move to limit the export of proprietary or closed-source AI models

The Biden administration is poised to open a new front in its efforts to protect U.S. AI from China and Russia, with preliminary plans to place guardrails around the most advanced AI models, the core software behind artificial intelligence systems such as ChatGPT, sources said.

ChatGPT is a chatbot developed by OpenAI and launched on November 30, 2022. Built on large language models (LLMs), it lets users refine and steer a conversation toward a desired length, format, style, level of detail, and language. The user’s successive prompts and the model’s responses are carried forward at each turn of the conversation as context.
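To illustrate how that context is carried, here is a minimal sketch assuming the OpenAI Python SDK; the model name and prompts are illustrative examples, not details from the article.

# Minimal sketch of a multi-turn chat, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# The full message history is resent on every call; this is how earlier
# prompts and responses become context for the next reply.
messages = [{"role": "user", "content": "Summarize export controls in two sentences."}]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# The follow-up prompt is interpreted in light of the earlier exchange.
messages.append({"role": "user", "content": "Now rephrase that for a general audience."})
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)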

The Commerce Department is considering a new regulatory initiative to restrict the export of proprietary, or closed-source, AI models, whose software and training data are kept secret, three people familiar with the matter said.

Any action would complement a series of measures put in place over the past two years to block the export of sophisticated AI chips to China, in order to slow Beijing’s development of this cutting-edge technology for military purposes. Even so, it will be difficult for regulators to keep pace with rapid developments in the industry.

The Chinese embassy described the approach as a typical act of economic coercion and unilateral intimidation that China strongly opposes, adding that it would take the necessary measures to protect its interests.

Currently, nothing stops American AI giants such as Microsoft-backed OpenAI, Alphabet’s Google DeepMind, and rival Anthropic, which have developed some of the most powerful closed-source AI models, from selling them to almost anyone in the world without government oversight.

Government and private sector researchers fear that U.S. adversaries could use these models, which leverage large amounts of text and images to summarize information and generate content, to conduct aggressive cyberattacks or even create powerful biological weapons.

One of the sources said any new export controls would likely target Russia, China, North Korea and Iran. In a report released in February, Microsoft said it tracked hacker groups affiliated with the Chinese and North Korean governments, as well as Russian military intelligence and the Iranian Revolutionary Guard, as they attempted to perfect their hacking campaigns using large language models.

Computing power

To implement export controls on AI models, the sources said, the United States could look to a threshold contained in the AI executive order issued last October, based on the amount of computing power needed to train a model. Once that threshold is reached, a developer must report its AI model development plans and provide test results to the Commerce Department.

This computing power threshold could serve as the basis for determining which AI models would be subject to export restrictions, according to two U.S. officials and another source briefed on the discussions. They declined to be named because the details have not been made public.

If this threshold is used, it would likely restrict only the export of models that have not yet been released, since none released so far has reached it, although Google’s Gemini Ultra is considered close, according to EpochAI, a research institute that tracks trends in artificial intelligence.
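For a sense of scale: the reporting threshold in the October 2023 executive order is widely reported as 10^26 operations, and training compute is commonly approximated with the rule of thumb of 6 × parameters × training tokens. The sketch below applies that heuristic; the model size and token count are hypothetical values chosen for illustration, not figures from the article.

# Back-of-the-envelope training-compute estimate using the common
# 6 * parameters * tokens heuristic. All inputs are illustrative.
EO_THRESHOLD_OPS = 1e26  # reporting threshold widely attributed to the October 2023 executive order

params = 1e12   # hypothetical 1-trillion-parameter model
tokens = 15e12  # hypothetical 15-trillion-token training run

training_ops = 6 * params * tokens
print(f"Estimated training compute: {training_ops:.1e} operations")
print("Above reporting threshold" if training_ops >= EO_THRESHOLD_OPS
      else "Below reporting threshold")

On these assumptions the run lands at about 9 × 10^25 operations, just under the threshold, which illustrates why only frontier-scale models would be caught.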

The agency is far from finalizing a proposed rule, the sources noted. But the fact that such a measure is being considered shows that the US government is seeking to fill gaps in its efforts to thwart Beijing’s AI ambitions, despite the serious challenges of imposing a heavy-handed regulatory regime for this rapidly evolving technology.

“As the Biden administration looks at competition with China and the dangers of sophisticated AI, AI models are obviously one of the tools, one of the potential choke points to think about here,” said Peter Harrell, a former National Security Council official. “It remains to be seen whether it can, in practice, be turned into a controllable export choke point,” he added.

Biological weapons and cyberattacks?

The U.S. intelligence community, think tanks and academics are increasingly concerned about the risks posed by malicious foreign actors who have access to advanced AI capabilities. Researchers from Gryphon Scientific and the Rand Corporation noted that advanced AI models can provide information that could help create biological weapons.

The Department of Homeland Security, in its 2024 homeland threat assessment, said cyber actors would likely use AI to develop new tools enabling larger-scale, faster, more efficient, and more evasive cyberattacks.

“The potential explosion of the use and exploitation [of AI] is radical, and we are actually having a lot of trouble keeping up with it,” Brian Holmes, an official in the Office of the Director of National Intelligence, said at a meeting on export controls in March, noting that China’s progress was particularly concerning.

The AI crisis

To address these concerns, the United States has taken steps to stem the flow to China of American AI chips and of the tools needed to make them.

It has also proposed a rule that would require U.S. cloud companies to notify the government when foreign customers use their services to train powerful AI models that could be used for cyberattacks.

But so far, these measures have not addressed AI models themselves. Alan Estevez, who oversees U.S. export policy at the Commerce Department, said in December that the agency was exploring options for regulating exports of large language models (LLMs) before seeking input from industry.

Tim Fist, an AI policy expert at the Washington, DC-based think tank CNAS, believes the threshold “is a good temporary measure until we develop better methods to measure the capabilities and risks of new models.”

Jamil Jaffer, a former White House and Justice Department official, said the Biden administration should not use a computing power threshold but should instead opt for controls based on a model’s capabilities and intended use. “Focusing on national security risk rather than technological thresholds is the best solution, because it is more sustainable and focuses on the threat,” he declared.

The threshold is not set in stone. One of the sources said the Commerce Department could end up setting a lower threshold, depending on other factors such as the type of data or the potential uses of the AI model, for example the ability to design proteins that could be used to make a biological weapon.

Regardless of the threshold chosen, exports of AI models will be difficult to control. Many models are open source, meaning they would remain outside the scope of the planned export controls.

Even imposing controls on the most advanced proprietary models will prove difficult, because regulators will likely struggle to define the right criteria for determining which models should be controlled, Mr. Fist said, noting that China is probably only about two years behind the United States in developing its own AI software.

The proposed export controls would impact access to the core software that powers some consumer applications like ChatGPT, but would not limit access to the downstream applications themselves.

Sources: The Biden administration, the United States Department of Commerce

And you?

What is your opinion on the subject?

Do you find this initiative from the Biden administration credible or relevant?

See also:

China has approved more than 40 AI models for public use in the first six months as authorities began the approval process, as the country strives to catch up with the United States

China overtakes US in AI research, fight for supremacy intensifies as both sides grapple with security concerns
