“487908877AP00005_TechCrunch” by TechCrunch is licensed under CC BY 2.0.
OpenAI’s agreement with the Department of Defense leads to employee resignation, industry backlash, and concerns about surveillance and the ethical limits of artificial intelligence in national security and warfare.
By Genesis Muckle
TAMPA, Fla. — On Feb. 28, 2026, the CEO of OpenAI, Sam Altman, announced that the company signed a deal with the U.S. Department of Defense.
The agreement was announced hours after President Donald Trump ordered federal agents to stop using AI tools made by Anthropic.
The Department of Defense had sought an agreement with Anthropic, OpenAI's technology rival, that would have let the department use its AI systems for "any lawful purpose."
Anthropic’s technology has been widely adopted by the Department of Defense, largely due to its partnership with Palantir Technologies, whose systems are approved for classified government operations. The Pentagon has also independently implemented Anthropic’s AI into a $200 million pilot program, using it to analyze intelligence data and imagery.
Despite this partnership, Anthropic rejected the deal with the Pentagon, with CEO Dario Amodei stating, “We cannot in good conscience accede to their request.” Amodei said he would not agree without the assurance that the AI tools would not be used for mass domestic surveillance.
The controversy deepened when Caitlin Kalinowski, a senior member of the OpenAI robotics team, announced her resignation from the company on social media.
Kalinowski posted on X, stating, “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
A spokesperson from OpenAI defended the company’s decision, stating, “We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear red lines: no domestic surveillance and no autonomous weapons.”
Altman also took to X to reassure the public about the limits on how the Department of Defense and intelligence agencies could use the company's tools.
Despite these assurances, many employees, including research scientist Aiden McLaughlin, are still uneasy about the deal. McLaughlin posted on X, saying, “I personally don’t think this deal is worth it.”
Critics have also pointed out risks of AI use in warfare, such as the potential erosion of accountability and the rapid escalation of conflicts by automated systems. Despite the restrictions in place, experts have warned that the lines between defensive and offensive use of AI can blur.
Public reaction has intensified as some users have begun abandoning the OpenAI platform in favor of Anthropic's AI tools, which are widely perceived as better aligned with ethical safety standards. At the same time, debate has grown on social media, reflecting broader concern about the use of AI in surveillance.
However, some employees have expressed frustration that Anthropic is being portrayed as heroic despite having worked with the Pentagon and defense contractor Palantir in previous years.
Altman has urged the government to drop Anthropic's supply chain risk designation, stating, "I believe we will hopefully have the best models that will encourage the government to be willing to work with us, even if our safety stack annoys them, or put some limits or something else."