US government secures early AI model access from Microsoft, Google and xAI


The U.S. government is tightening its grip on advanced artificial intelligence, moving earlier in the development cycle to assess risks before public release. In a new set of agreements, Microsoft, Google, and xAI will grant federal officials early access to their latest AI models. The decision reflects growing concern in Washington that cutting-edge systems could enable cyberattacks or military misuse if left unchecked.

Early access for testing

The Commerce Department’s Center for AI Standards and Innovation will evaluate these models before deployment. Officials aim to study capabilities and identify vulnerabilities early. The move builds on a 2025 pledge by the Trump administration to partner with tech companies on national security reviews.

The agreements allow government researchers to test systems under controlled conditions. Companies will provide versions of their models with reduced safeguards. This setup enables deeper probing of potential misuse scenarios.

Microsoft will collaborate with federal scientists to test systems “in ways that probe unexpected behaviors,” the company said in a statement reported by Reuters. The company will also help develop shared datasets and testing workflows. It previously signed a similar agreement with the United Kingdom’s AI Security Institute.

Rising security concerns

The urgency follows the emergence of powerful AI systems, including Anthropic’s Mythos model. U.S. officials and businesses worry these tools could amplify hacking capabilities. Concerns range from automated cyberattacks to advanced military applications.

“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said in a statement to The Journal.

CAISI has already conducted more than 40 evaluations, including tests on unreleased models. The agency often works with stripped-down versions of systems to uncover hidden risks. Officials believe early access gives them a critical edge in anticipating threats.

The initiative also expands earlier collaborations with OpenAI and Anthropic, launched in 2024. At the time, the agency operated under a different name and focused on voluntary safety standards.

Pentagon expands AI ties

The push for oversight extends beyond civilian agencies. The Pentagon recently signed agreements with seven AI companies to deploy tools on classified networks. The Defense Department wants broader access to advanced capabilities across military operations.

Notably, Anthropic remains outside that effort. The company has clashed with defense officials over safeguards on military use. The dispute highlights a growing divide between companies prioritizing safety limits and agencies seeking operational flexibility.

At the same time, the White House is weighing additional oversight measures. Reports suggest officials may form a review group to evaluate AI systems before public release. The approach could mark a shift from earlier hands-off policies.

For now, major tech firms appear willing to cooperate. By sharing early versions of their models, they avoid stricter regulatory pressure. The government, meanwhile, gains a closer look at technologies that could reshape both security and warfare.

The agreements signal a clear shift: Washington no longer wants to react to AI risks. It wants to identify them before they emerge.
