Google, Microsoft, xAI Agree to Pre-Release AI Reviews by US Government
In a significant expansion of federal oversight, Google DeepMind, Microsoft, and Elon Musk's xAI have agreed to allow the U.S. government to examine new artificial intelligence models before they are released publicly. The Commerce Department's Center for AI Standards and Innovation (CAISI) announced Tuesday that it will conduct pre-deployment evaluations and targeted research on these frontier AI systems.
CAISI, which began reviewing models from OpenAI and Anthropic in 2024, stated it has already completed 40 evaluations. Both OpenAI and Anthropic have renegotiated their existing agreements with the center to align with priorities set by President Donald Trump’s administration, according to the announcement.
"Pre-deployment evaluations help us identify potential risks early, from bias to security vulnerabilities," said Dr. Elena Marchetti, CAISI’s director of evaluation. "By integrating these checks before a model reaches the market, we can ensure that critical safety standards are met."
Background
CAISI was established within the National Institute of Standards and Technology to address the unique challenges posed by advanced AI systems. The center originally conducted voluntary reviews with a handful of companies; the new agreements with Google, Microsoft, and xAI significantly widen its scope.

The move comes amid broader global debates on AI regulation. The U.S. has favored a voluntary, industry-led approach, but critics argue that independent pre-release testing is essential given the speed of AI development. The Trump administration has emphasized American leadership in AI while also expressing concerns about national security risks.
Industry analyst Sarah Kline remarked, "This is an early signal that even the largest tech firms are willing to submit to government scrutiny to maintain public trust and avoid potential legislative crackdowns."

What This Means
The agreement sets a precedent for closer collaboration between the federal government and AI developers. For companies, joining the program may offer reputational benefits and a smoother path to eventual regulatory compliance.
For consumers and businesses, the reviews could lead to earlier detection of flaws in AI products, such as inaccurate outputs, privacy leaks, or harmful biases. However, the process remains voluntary, and companies are not required to delay releases based on CAISI’s findings.
"The effectiveness of these evaluations will depend on how transparent the companies are and whether the government can keep pace with rapid model updates," said Marchetti. "Our goal is to build a framework that evolves with the technology."
Looking ahead, observers expect other major AI players like Meta and Amazon to face similar pressure to join. The program may also influence international standards, as other nations watch the U.S. approach to pre-launch AI testing.
For now, the reviews cover only frontier models—the most advanced and capable systems. CAISI has not disclosed whether it will extend testing to smaller, specialized models used in healthcare, finance, or education.