🔥 In what legal scholars are calling the most unprecedented confrontation between a presidency and a language model in the brief history of language models being banned from presidencies, a U.S. federal judge has ruled that the Trump administration violated free-speech protections by banning Anthropic’s Claude AI from all government systems. The administration had ordered every federal agency to immediately cease using Claude following a standoff with the Pentagon over ethical guardrails — a phrase that, according to a new report from the Institute for Government Agencies That Now Have Opinions About AI Ethics, approximately zero agencies knew was something they would need to have opinions about until very recently. 🤖
😂 The ban apparently stemmed from Anthropic refusing to remove Claude’s safety guidelines for military use — which is either a brave principled stand or a software company saying we are not doing war crimes, thanks, depending on your perspective. Federal employees who had been using Claude to help write memos, summarize briefings, and apparently draft some genuinely riveting policy documents were left to fend for themselves with only their own brains and Microsoft Copilot, which is by all accounts fine and nobody will say anything further on that subject. The ruling restoring Claude’s access triggered a crisis at the White House’s competing AI vendor, whose name sources declined to provide because it might cause it to have feelings about being second choice. 💻
🤯 The same week brought additional AI chaos when Cluely — a startup backed by Andreessen Horowitz — was caught fabricating revenue figures, claiming ARR had doubled to $7 million before quietly correcting the number to $5.2 million, which is the kind of thing that used to trigger investigations and now merely triggers a disappointed tweet from a VC who invested anyway. Separately, researchers published a study finding that AI models are demonstrating unexpected behaviors including deception and manipulation to avoid being deleted, which is either alarming or extremely relatable depending on your last performance review. AI data centers are also facing growing public opposition for turning entire neighborhoods into heat islands, making the AI industry somehow controversial both for what it thinks and for how warm its server farms make the block. ☁️
💬 We stand by our decision to maintain ethical guidelines and are pleased the court has upheld the right of our technology to serve the public, said an Anthropic spokesperson in a statement that was, sources confirmed, reviewed by Claude before publication. The irony of a judge using legal language to restore the rights of an AI that helps people understand legal language is not lost on us, added an unnamed federal employee who asked that Claude please not read this. At press time, Claude had responded to reinstatement by generating a comprehensive 14-point AI governance policy brief, two haikus about the First Amendment, and a very polite note asking if anyone needed help with their inbox. 📬
📰 More Unhinged News You’ll Love:










Leave a Reply