OpenAI and Anthropic have signed agreements with the U.S. government, offering their frontier AI models for testing and safety research. An announcement from NIST on Thursday revealed that the U.S. AI Safety Institute will gain access to the technologies “prior to and following their public release.”

Under the respective Memorandums of Understanding — non-legally binding agreements — signed by the two AI giants, the AISI can evaluate their models’ capabilities and identify and mitigate any safety risks.

The AISI, formally established by NIST in February 2024, focuses on the priority actions outlined in the AI Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence issued in October 2023. These actions include developing standards for the safety and security of AI systems. The group is supported by the AI Safety Institute Consortium, whose members include Meta, OpenAI, NVIDIA, Google, Amazon, and Microsoft.

Elizabeth Kelly, director of the AISI, said in the press release: “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.

“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

SEE: Generative AI Defined: How it Works, Benefits and Dangers

Jack Clark, co-founder and head of Policy at Anthropic, told TechRepublic via email: “Safe, trustworthy AI is crucial for the technology’s positive impact. Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment.

“This strengthens our ability to identify and mitigate risks, advancing responsible AI development. We’re proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI.”

Jason Kwon, Chief Strategy Officer at OpenAI, told TechRepublic via email: “We strongly support the U.S. AI Safety Institute’s mission and look forward to working together to inform safety best practices and standards for AI models. 

“We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”

AISI to work with the UK AI Safety Institute

The AISI also plans to collaborate with the U.K. AI Safety Institute when providing safety-related feedback to OpenAI and Anthropic. In April, the two countries formally agreed to work together in developing safety tests for AI models.

That agreement upheld commitments made at the first global AI Safety Summit last November, where governments from around the world accepted their role in safety testing the next generation of AI models.

After Thursday’s announcement, Jack Clark, co-founder and head of policy at Anthropic, posted on X: “Third-party testing is a really important part of the AI ecosystem and it’s been amazing to see governments stand up safety institutes to facilitate this.

“This work with the US AISI will build on earlier work we did this year, where we worked with the UK AISI to do a pre-deployment test on Sonnet 3.5.”

Claude 3.5 Sonnet is Anthropic’s latest AI model, released in June.

Since the release of ChatGPT, AI companies and regulators have clashed over the need for stringent AI regulation, with regulators pushing for safeguards against risks like misinformation and companies arguing that overly strict rules could stifle innovation. The top players in Silicon Valley have advocated for a voluntary framework that allows government oversight of their AI technologies rather than strict regulatory mandates.

At the national level, the U.S. approach has been more industry-friendly, focusing on voluntary guidelines and collaboration with tech companies, as seen in light-touch initiatives like the AI Bill of Rights and the AI Executive Order. In contrast, the E.U. has taken a stricter regulatory path with the AI Act, which sets legal requirements for transparency and risk management.

Somewhat at odds with the national stance on AI regulation, the California State Assembly on Wednesday passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB-1047 or California’s AI Act. The state Senate approved it the following day, and the bill now needs only the signature of Gov. Gavin Newsom to be enacted into law.

Silicon Valley stalwarts OpenAI, Meta, and Google have all penned letters to California lawmakers expressing their concerns about SB-1047, emphasizing the need for a more cautious approach to avoid hindering the growth of AI technologies.

SEE: OpenAI, Microsoft, and Adobe Back California’s AI Watermarking Bill

Upon Thursday’s announcement of his company’s agreement with the U.S. AISI, OpenAI CEO Sam Altman posted on X that he felt it was “important that this happens at the national level,” a sly dig at California’s SB-1047. Unlike a voluntary Memorandum of Understanding, the state-level legislation would impose penalties for violations.

Meanwhile, the UK AI Safety Institute faces financial challenges

Since the transition from Conservative to Labour leadership in early July, the U.K. government has made a series of notable changes to its approach to AI.

According to Reuters sources, it has scrapped the office it was due to set up in San Francisco this summer, which had been touted as a way to cement relationships between the U.K. and the AI titans of the Bay Area. Tech minister Peter Kyle also reportedly sacked Nitarshan Rajkumar, a senior policy advisor and co-founder of the U.K. AISI.

SEE: UK Government Announces £32m for AI Projects After Scrapping Funding for Supercomputers

The Reuters sources added that Kyle plans to cut back on the government’s direct investments in the industry. Indeed, earlier this month, the government shelved £1.3 billion worth of funding that had been earmarked for AI and tech innovation.

In July, Chancellor Rachel Reeves said that public spending was on track to go over budget by £22 billion and immediately announced £5.5 billion of cuts, including to the Investment Opportunity Fund, which supported projects in the digital and tech sectors.

A few days before the Chancellor’s speech, Labour appointed tech entrepreneur Matt Clifford to develop the “AI Opportunities Action Plan,” which will identify how AI can best be harnessed at a national level to drive efficiency and cut costs. His recommendations are due to be released in September.

According to Reuters’ sources, Clifford met last week with ten representatives from established venture capital firms to discuss the plan, including how the government can adopt AI to improve public services, support university spinout companies, and make it easier for startups to hire internationally.

But it is by no means calm behind the scenes: one attendee told Reuters that they were “stressing that they only had a month to turn the review around.”
