The Trump administration is in early discussions about an executive order that would create a government review process for AI models before public release.
The proposed order would establish a working group of tech executives and government officials to develop oversight procedures, with White House staff briefing leaders from Anthropic, Google, and OpenAI on the plans last week, according to unnamed U.S. officials cited by the New York Times. A White House official told the Times that talk of an executive order is "speculation."
The discussions, if confirmed, would represent a reversal for an administration that revoked Biden's AI safety executive order within hours of taking office in January 2025 and spent most of last year positioning itself as the industry's deregulatory champion. Vice President JD Vance told an international AI gathering in Paris last year that the future of AI wouldn't be won through safety concerns but "by building," the New York Times noted.
Lobbying backlash

In October last year, David Sacks, then the White House's AI and crypto czar, publicly accused Anthropic of "running a sophisticated regulatory capture strategy based on fear-mongering" in a post on X. Sacks pointed to CEO Dario Amodei's endorsement of Kamala Harris and his characterization of Trump as a "feudal warlord," as well as the hiring of multiple Biden-era officials to the company's policy team.
Anthropic’s monthly lobbying spend grew by roughly 511% over Trump’s second term, reaching $1.1 million per month by late 2025, the Washington Examiner reported in early February. The company lobbied against a 10-year moratorium on state AI regulation in the Big Beautiful Bill, supported California's SB 53 transparency requirements, and donated $20 million to Public First Action, a political group calling for stricter AI oversight.
Now the administration appears to be building precisely the type of oversight structure that Anthropic advocated for, but with the government holding the keys. The New York Times reported that some officials want a system granting the government first access to new models without blocking their commercial release, which is functionally what the Pentagon demanded from Anthropic before their relationship collapsed.
Just this Monday, Dean Ball, a former Trump administration AI adviser, and Ben Buchanan, a former Biden White House AI adviser, co-authored a New York Times op-ed calling on Congress to mandate third-party audits of AI developers' safety claims. Buchanan is also an outside adviser to Anthropic, and Ball is the same official who told the Times that the administration is trying to avoid overregulation while keeping pace with the technology.
Carrot and stick

The proposed review process represents a softer approach than what the administration attempted earlier this year. In February, Defense Secretary Pete Hegseth gave Anthropic an ultimatum: remove guardrails on autonomous weapons and mass surveillance, or lose its $200 million Pentagon contract. Hegseth also threatened to invoke the Defense Production Act, a Korean War-era law that could theoretically compel the company to hand over its technology for military use.
Anthropic refused. Trump subsequently ordered all federal agencies to stop using Anthropic's technology, and the Pentagon designated the company a supply chain risk, a label previously reserved for foreign adversaries. Anthropic sued, and a federal judge called the designation "Orwellian."
But in April, the D.C. Circuit Court of Appeals denied Anthropic's motion to lift the designation entirely. The court ruled that removing it would force the military to continue dealing with "an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict." That ruling shifted legal leverage back toward the government, even as the White House pursued a more conciliatory political path.
The confrontational approach through Hegseth and Sacks gave way to a diplomatic one after Sacks left his role in March, the New York Times noted, with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent stepping in.
Last month, Wiles and Bessent held a meeting with Amodei that both sides described as "productive," with a White House statement later saying the meeting had "discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology."
The U.S. behind the EU on AI vetting

According to the New York Times's reporting, any potential oversight would involve the NSA, the White House Office of the National Cyber Director, and the Director of National Intelligence.
The model under consideration resembles the UK's approach, where the AI Security Institute evaluates frontier models against safety benchmarks before deployment. Per security publication CSO Online, both the UK's AISI and the EU's AI Act have moved further than the U.S. on pre-deployment evaluation, and the U.S. currently has no legal authority to require such reviews.
There’s also the question of the Center for AI Standards and Innovation (CAISI), a Biden-era body created to evaluate AI models voluntarily shared with the government. The New York Times has reported that the center has been sidelined under Trump, despite the administration's own AI policy paper stating it should play a role in assessing AI system performance.
Congress appears to be moving in parallel with the administration: the FY2026 National Defense Authorization Act requires the Pentagon to establish a cross-functional team for AI model assessment and oversight, with a full "DoD-wide assessment framework" due at an unspecified future date. That team must develop testing procedures, security requirements, and compliance standards for AI models procured by the military.
Was Mythos the catalyst?

The obvious question in light of all this is whether Mythos was the catalyst for these new White House policy discussions. The New York Times certainly seems to believe so in its reporting, though no sources are quoted as confirming that.
Mythos, which Anthropic revealed last month in what felt like a marketing campaign, is framed by the company as a potential cyber-superweapon, capable of finding thousands of critical software vulnerabilities in seconds and therefore posing "unprecedented cybersecurity risks." For those reasons, Anthropic has declined to release it publicly, though the NSA has already used Mythos to assess vulnerabilities in government software, according to the newspaper.
Withholding Mythos as a model too dangerous for the general public may have given the administration both a justification and a political incentive to act. The White House wants to avoid fallout if an AI-enabled cyberattack occurs, and is also evaluating whether frontier models could yield offensive cyber-capabilities useful to the Pentagon and intelligence agencies.
Independent assessments have questioned Anthropic's claims. Research from AISLE Security found that open-source models could detect many of the same vulnerabilities Anthropic showcased, and the UK's AISI, which also evaluated Mythos, concluded that while it was the most capable model for cybersecurity tasks, it didn't dramatically outperform others across all evaluations.