OpenAI Says Military Will Not Use Tech for Surveillance or Weaponry

DATE POSTED: March 2, 2026

OpenAI has offered more details on its new partnership with the American military.

The company announced that arrangement Friday (Feb. 27), though CEO Sam Altman said in a Saturday (Feb. 28) post on X that the deal was “definitely rushed” and that its “optics don’t look good.”

OpenAI announced its partnership soon after the Pentagon said it would cut ties with rival artificial intelligence (AI) startup Anthropic.

OpenAI was able to get the government to agree not to use its technology for mass domestic surveillance or autonomous weapons, two points of contention, or “red lines,” between Anthropic and the U.S. Department of War.

As a report by TechCrunch noted, that raises a question: Why did OpenAI reach an agreement where Anthropic could not?

OpenAI published a blog post discussing the arrangement, listing three areas where it says its models can’t be used: the aforementioned weapons and surveillance scenarios, as well as “high-stakes automated decisions (e.g. systems such as ‘social credit’).”

The company said that, unlike other AI firms that have “reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments,” its agreement protects its red lines “through a more expansive, multi-layered approach.”

“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” the blog said. “This is all in addition to the strong existing protections in U.S. law.”

OpenAI added that it was not clear why Anthropic “could not reach this deal, and we hope that they and more labs will consider it.”

The White House last week told federal agencies to stop using Anthropic’s products, with President Donald Trump announcing that the government would cease working with Anthropic and phase out use of the company’s Claude models within six months.

The government also designated Anthropic a supply chain risk, which means that any company that does business with the military is forbidden from working with the startup.

Anthropic responded by saying that the Department of War had no authority to issue the designation, and said it would challenge the government in court.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company wrote on its blog.

Despite the company’s conflict with the government, Claude still played a role in U.S. combat operations in Iran, according to published reports this weekend.

The post OpenAI Says Military Will Not Use Tech for Surveillance or Weaponry appeared first on PYMNTS.com.