Pick Your Empire
On the Pentagon deal and the governance vacuum nobody wants to talk about
Last Friday, the U.S. government designated Anthropic a supply chain risk to national security. Within hours, OpenAI had announced a new Pentagon contract. By Saturday, Anthropic’s Claude had climbed past ChatGPT in Apple’s App Store.
That last detail strikes me as the whole story of America Today. People downloaded Claude the way you might switch coffee brands after reading something bad about Nestlé. A small, legible act of solidarity, a way to register disgust without having to do anything difficult. I understand the impulse completely; I just don't think it means what we want it to mean.
What happened last week was, in fact, alarming. In case you missed it, the Trump administration pulled a $200 million contract from Anthropic, ordered all federal agencies to cease using its products, and had the Pentagon designate the company a supply chain risk (a label typically reserved for adversarial foreign entities) because Anthropic refused to remove contractual language prohibiting the use of its models for mass domestic surveillance and fully autonomous weapons. The government said remove that language or else. Anthropic said no. And then OpenAI, within the same news cycle, said yes.
The reaction from much of the tech-adjacent internet was to treat this as a morality play with obvious sides. Download Claude, QuitGPT. Okay, fine, but I think this sidelines a very important question: why are two private companies and a presidential Truth Social post the entire decision-making apparatus for whether AI gets used in autonomous warfare?
Did a Thing
Consumer choice as political expression is appealing because it's familiar and it costs almost nothing and it produces an immediate sensation of having done something. You've put your download where your values are; you have spoken. But you really don't matter here. Sorry. All of this is designed without your input, running on infrastructure you don't own, governed by terms neither you nor your elected representatives had any meaningful role in setting. Choosing between Anthropic and OpenAI is like choosing a preferred airline, in that it does not get you any closer to shaping aviation policy.
What made last week so clarifying is that the stakes were unusually legible. We could see what it looks like when private AI infrastructure meets sovereign power with no public framework in between. And it sure looked a lot like a shakedown!
The people who downloaded Claude did so because Anthropic seemed to be standing for something. Again: fine, maybe. Whatever. But the thing Anthropic was standing for, the idea that a private company's internal safety commitments are the last line of defense between AI and mass surveillance, isn't exactly a reassuring structure. It means we're one acquisition, one CEO change, one pressure campaign away from having no line at all.
The Infrastructure Problem, Again
I once wrote about AI as empire, about how the terms of intelligence are being claimed by a handful of companies that see themselves as sovereigns, about Sewer Socialism and the history of infrastructure and the fact that we’ve regulated every previous transformational technology. I still believe all of that, and last week’s events give that argument a very concrete face.
When there’s no public framework for how AI can and cannot be used by the military, the framework defaults to whatever the contracting company is willing to hold the line on, and whatever the government is willing to tolerate before it starts designating American companies as foreign adversaries. That’s a standoff, not governance, and standoffs resolve based on who has more leverage, not based on what’s right.
OpenAI's position, as best I can reconstruct from a weekend of contract-language tea-leaf reading, seems to be that existing laws are sufficient. You don't need explicit contractual prohibitions on domestic surveillance because domestic surveillance is already illegal (which sounds reasonable until you consider this administration's relationship with both the law and the intelligence apparatus). Anthropic's position was that you write the protections down explicitly, in the contract, so there's less room for creative interpretation. The government's position was that having a private company write its own operational limits into a federal contract is an unacceptable constraint on military sovereignty. All three positions make a kind of internal sense, but they fall apart once you shine too bright a light on them. They're improvisations inside a vacuum that public policy should have filled years ago.
Solidarity
Let's not be too glib (yet) about what Anthropic did. Holding a line against a government trying to strip safety provisions from an AI contract, at real cost to the business, is not nothing. The fact that those protections existed in a private contract rather than in law is a systemic failure, and Anthropic was working within the system it has. But when the ethical commitments governing AI deployment are determined by corporate terms of service, enforced through contract negotiation, and subject to being overridden by executive pressure whenever the government doesn't like the terms, the center will not hold. And downloading a different app is not a route around it.
The route around it is boring and slow and absolutely necessary. Legislation that specifies what AI can and cannot be used for in national security contexts, with democratic input and judicial oversight. Standards bodies with actual teeth. Liability regimes that make companies responsible for harms their models enable, regardless of who's doing the enabling. This isn't a radical agenda, even if AI currently feels too hot for anyone to touch in any real, legislative way. The only reason it feels radical is that the AI companies have spent years making governance feel impossible: making public oversight sound slow and stupid while they move fast, making the congressional hearings look so buffoonish that technical governance starts to feel like a category error. We've internalized the idea that the only governance available is market governance. Choose the company whose values you prefer and hope for the best.
The App Store Is Not a Ballot Box
I say all this as someone who works on this stuff and holds a lot of genuine uncertainty about the right mechanisms. I’m not naive about the difficulty of regulating something that moves this fast, or about the ways government can make things worse. I am certain that the nerve this touched is real and worth paying attention to. People downloaded Claude because they wanted to do something, and that instinct is good even if the action is insufficient.
So, some things that are more useful than changing your default AI:
Read Karen Hao's Empire of AI. I've recommended it before and I'll keep recommending it because it is the clearest account I've found of how we got here: how the infrastructure got built, who paid for it, who it was designed to serve, and what it would mean to build it differently. Understanding the shape of the problem is not nothing!
Pay attention to the organizations doing this work seriously. Check out the Electronic Frontier Foundation and the Center for American Progress, both of which are pushing for the kind of congressional action that would change the structural conditions. They're not glamorous, but that's okay.
And if none of that feels available to you right now, that's okay too. Sometimes the most honest thing you can do is refuse to be convinced that the current arrangement is inevitable. Fatalism is not a fact, and the AI companies count on it daily. You can choose to resist just by thinking independently.
Last week made a lot of people feel that something important was at stake. It was, and it still is. That feeling is worth more than a download. Don’t waste it.