National security and large-scale public safety testing ‘must be a collaborative endeavor with governments,’ Microsoft said.
Artificial intelligence (AI) giants Google DeepMind, Microsoft, and xAI have signed agreements with the Department of Commerce to evaluate their models for potential security risks.
The Commerce Department’s Center for AI Standards and Innovation (CAISI) announced the partnerships on May 5.
The agency will “conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security,” according to the announcement.
“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said in a statement. “These expanded industry collaborations help us scale our work in the public interest at a critical moment.”
The agency said the agreements build on partnerships it made with OpenAI and Anthropic during the Biden administration in August 2024.
Those deals, however, have been “renegotiated” to better align with Trump administration policies, according to CAISI.
Concerns over potential security risks from frontier AI models surged after Anthropic announced Claude Mythos Preview on April 7. The model is not yet publicly available because the company fears bad actors could use it to find critical software exploits.
Previously, the Trump administration moved to ban Anthropic from doing business with the federal government after the company refused to grant the Pentagon unrestricted access to its Claude models, citing concerns the models would be used for mass domestic surveillance or fully autonomous weapons — claims the Pentagon denies.
Despite the ban, Anthropic co-founder Jack Clark said last month he had been in talks with the Trump administration over Claude Mythos Preview.
By Jacob Burg