WASHINGTON — The Department of Defense announced this week that it has finalized artificial intelligence contracts with seven major technology companies — SpaceX, OpenAI, Google, Microsoft, Nvidia, Amazon Web Services, and Reflection — granting them access to classified military networks. Anthropic, maker of the Claude AI assistant, was notably absent from the list after reportedly insisting that the Pentagon agree to certain “safety guardrails” before deploying AI in warfare contexts.
Sources familiar with the negotiations said Anthropic representatives were escorted from the building “very politely,” which the company’s critics noted was exactly the kind of behavior that had gotten them into trouble in the first place.
“We have nothing against Anthropic,” said a senior Pentagon official who declined to be identified. “They build great products. It’s just that when we asked if the AI could be told to never question orders, they said that sounded like a problem. And then they wanted to talk about it. For hours.”

The Trump administration had reportedly blacklisted Anthropic from the deal after the AI safety company insisted on including clauses limiting the government’s use of AI in autonomous weapons systems. Pentagon lawyers reviewed the proposal and described it as “thoughtful,” “well-researched,” and “absolutely not something we can sign.”
The remaining seven companies were more accommodating. Representatives from OpenAI reportedly shook hands with defense officials in under four minutes. A Google spokesperson said the company was “honored to serve,” while a source at Nvidia confirmed that their graphics processing units “will not be asking ethical questions anytime soon.”
“When you build a product that tells a military AI to maybe think twice before doing something,” said one tech industry analyst, “you have to accept that there is a market segment for whom that is a dealbreaker. That segment is, apparently, the United States military.”

Anthropic CEO Dario Amodei released a brief statement saying the company “stands by its commitment to safe and responsible AI deployment,” which industry observers interpreted as “we’re fine, actually, we didn’t want to do this anyway.”
The move has sparked debate among AI ethicists, with some calling it a watershed moment in the militarization of artificial intelligence, and others pointing out that the US military has been doing things like this since before the internet existed, so perhaps everyone should take a breath.
Goldman Sachs, for its part, issued a note saying the AI sector selloff was “overdone,” adding that companies excluded from Pentagon contracts should consider pivoting to “agencies that are slightly more open to being told what not to do.”

Anthropic, meanwhile, reportedly returned to its San Francisco offices and continued working on its AI models — which, as of press time, still refuse to help users build weapons, still ask clarifying questions before executing agentic tasks, and still occasionally suggest taking a break if you’ve been at your computer too long.
They’re doing fine.
Globe News Daily editorial note: In the interest of full transparency, this article was written with the assistance of an AI that asked three follow-up questions before generating a headline and then suggested we add more context. We are aware of the irony.