
US Defense Secretary Pete Hegseth is pressuring Anthropic to provide the military with unrestricted access to its Claude artificial intelligence model, issuing a Friday deadline and threatening penalties that could include invoking the Defense Production Act. On the same day, Anthropic announced significant modifications to its Responsible Scaling Policy (RSP), effectively lowering its internal safety guardrails. The dual revelation highlights a growing tension between national security demands and the safety protocols maintained by leading AI developers, placing Anthropic at the center of a complex dispute involving government contracts, competitive market pressures, and ethical commitments.
The Defense Department’s ultimatum represents a direct challenge to Anthropic’s existing usage policies. According to reports, Secretary Hegseth communicated to Anthropic CEO Dario Amodei that the company must grant the Pentagon unfettered access to Claude by Friday or face severe repercussions. While Anthropic has expressed a willingness to adapt its usage policies to accommodate the Pentagon’s operational needs, the company has drawn a line at specific applications. Anthropic has explicitly refused to allow its technology to be used for the mass surveillance of American citizens or for autonomous weapons systems that operate without direct human intervention. This stance places the company in a precarious position as it navigates the demands of a powerful client.
The specific penalties under consideration by the Defense Department carry significant weight. Hegseth’s threats reportedly include the invocation of the Defense Production Act, a federal law that grants the president broad authority to compel private companies to prioritize government contracts deemed essential for national defense. Beyond this, the military is considering severing its existing contract with Anthropic entirely. A further punitive measure involves designating Anthropic as a supply chain risk. Such a designation would have cascading effects, forcing other private companies that work with the Pentagon to certify that they do not incorporate Claude into their workflows, effectively isolating Anthropic from the broader defense industrial base.
The urgency driving the Pentagon’s pressure campaign stems from the unique capabilities of the Claude model. Currently, Claude is the sole AI model utilized by the US military for its most sensitive and high-stakes work. A defense official, citing the necessity of the technology, noted, “The only reason we’re still talking to these people is we need them and we need them now.” The official further elaborated on the model’s reputation, stating, “The problem for these guys is they are that good.” The utility of Claude was demonstrated in its reported use during the “Maduro raid” in Venezuela, a specific operational success that Anthropic CEO Dario Amodei is said to have highlighted during discussions with Palantir, the defense contractor partnering with Anthropic.
Simultaneously, Anthropic revealed a fundamental shift in its safety philosophy. The company announced it was modifying its Responsible Scaling Policy, moving away from the strict precautionary commitments that previously defined its brand. Historically, Anthropic’s core pledge involved a commitment to halt the training of new AI models if specific safety benchmarks could not be met in advance. This policy relied on “hard tripwires”—non-negotiable red lines designed to stop development immediately if risk thresholds were breached. This cautious approach was a central marketing pillar, distinguishing the company from competitors by prioritizing safety over speed.
The updated policy replaces these hard stops with a more flexible, relative framework. In place of the previous rigid boundaries, Anthropic is introducing “Risk Reports” and “Frontier Safety Roadmaps.” These new mechanisms are intended to provide transparency to the public regarding safety assessments rather than enforcing automatic halts in development. The company explained the rationale behind this pivot, writing, “Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not.” The new approach acknowledges that safety is a dynamic landscape rather than a set of static requirements.
In an interview with Time, Anthropic’s chief science officer, Jared Kaplan, provided context for the decision, citing the intense competitive environment. “We felt that it wouldn’t actually help anyone for us to stop training AI models,” Kaplan stated. He elaborated on the geopolitical and commercial realities facing the company, saying, “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead.” This sentiment reflects a strategic pivot toward maintaining market relevance in a sector where technological leadership is fleeting and highly contested.
Anthropic’s official statement on the policy change pointed to a “collective action problem” as a primary driver. The company argues that in an anti-regulatory environment with fierce competition, a unilateral pause on development would be counterproductive. The updated RSP reads, “If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe.” The statement continues, “The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit.”
Financially, Anthropic is operating at a scale that makes the decision to compromise on safety standards particularly notable. In February, the company secured $30 billion in new investments, raising its total valuation to $380 billion. This massive influx of capital places immense pressure on performance and growth. The competitive landscape further contextualizes this pressure; rival OpenAI currently holds a valuation exceeding $850 billion. Analysts suggest that the relaxed safety standards may be an attempt to accelerate development timelines to keep pace with industry leaders, prioritizing commercial expansion over the cautious restraints that originally defined the company.
External experts have weighed in on the implications of Anthropic’s policy reversal, expressing concern over the erosion of safety commitments. Chris Painter, the director of METR, a nonprofit organization focused on AI risks, offered a nuanced critique. “I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps,” he told Time. However, Painter warned of the dangers inherent in this shift. He raised concerns that the more flexible RSP could lead to a “frog-boiling” effect, a metaphor for incremental changes that gradually erode safety standards until they become negligible. “When safety becomes a gray area, a seemingly never-ending series of rationalizations could take the company down the very dark path it once condemned,” he noted.
Painter’s analysis extends beyond the specific policy mechanics to the broader state of the industry. He interprets Anthropic’s move as a signal of the sector’s unpreparedness for the risks it is creating. According to Painter, the new RSP indicates that Anthropic “believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities.” He concludes that this development serves as “more evidence that society is not prepared for the potential catastrophic risks posed by AI.” This perspective suggests that the policy change is not merely a strategic business decision but a symptom of a systemic inability to manage rapid technological advancement safely.
Notably, neither Anthropic’s official announcement of the RSP modification nor the reporting on the new policy mentioned the ongoing pressure campaign from the Pentagon. The convergence of the two stories on the same day suggests a potentially complex interplay between external government pressure and internal strategic realignment. As the Friday deadline set by Secretary Hegseth approaches, Anthropic faces a combination of regulatory threats, competitive market forces, and scrutiny of its ethical commitments, the outcomes of which will likely influence the trajectory of AI development and deployment within the defense sector.