You can tell a lot about an empire by what it hoards: gold, spices, water rights, GPU clusters. Training data scraped from a billion unknowing sources.
I just finished Karen Hao's Empire of AI (highly recommend!) and it lays it out plainly. AI companies aren't just companies. They are empires in the oldest sense. Expansionist and extractive, hungry not only for land but for labor. We're not just watching tools emerge. We're watching the terms of intelligence be claimed by a handful of companies that see themselves less as innovators and more as sovereigns.
The models are built, the servers are humming, and the public institutions that might have set the terms were ushered in late and asked to please mind the cables. This week, we dig into the uneasy realization that the material stakes of AI are already entrenched. But here's what I think we're missing: governments are the rightful owners of AI technology, and the rightful authors of the labor policy that cascades from it.
If AI is going to structure how we access health care, education, transportation, finance, language, and memory, then it can't remain a private product. It must become public infrastructure. The question isn't whether this is possible, but why we've been convinced it's not.
What Happens Without Public Ownership
Look at what's already happening. Artists are the canaries in the coal mine; they see the fault lines before anyone else. Economically, socially, and politically, they are always the first to lose ground. When AI started filling in the blanks on creative work, it didn't pause to ask who was being written over.
This cycle isn't new. Artists' work is always the first to be undervalued and the first to be mechanized in any industrial revolution. But with AI, the pace has changed. Portfolios have been scraped and modeled, their styles, voices, and arrangements collapsed into commodities. And once creative labor is flattened, what's left of the economy that gave it shape? When people who were already on the margins are pushed off entirely, the system isn't stretching; it's rupturing.
This is what private ownership of intelligence infrastructure looks like in practice. No consent, no compensation, no democratic input into how the technology develops or who it serves. The artist dies first because there's no public standard to protect them, no accountability beyond what shareholders demand.
The Corporate Distraction Campaign
Into this rupture comes a carefully orchestrated response from the very companies causing the damage. First, the promise of Universal Basic Income (UBI). In theory, it's lovely. A monthly stipend, no strings attached. Enough to create some space. But in practice, UBI would be nothing more than corporate stagecraft.
There is no altruism at the heart of any AI company. These empires were assembled through venture capital networks, exclusive infrastructure, permissive data policy, and the quiet erosion of public oversight. The same hands now raising concerns about safety are the ones that opened the floodgates in the first place. So when their executives begin to murmur about UBI, it's not out of generosity. It's a concession made in the language of inevitability. A little money, perhaps, in exchange for everything else.
The second part of the campaign is more subtle: making democratically accountable institutions sound laughable. The people building AI also want to be the ones who govern it, and they've worked hard to make sure the only real alternative sounds impossible. To be fair, the optics haven't helped. Watching congressional hearings on AI feels like watching someone try to reboot the internet by hitting the TV. But those performances are part of the script. If government seems incompetent, then tech's monopoly begins to look like stewardship. And if we accept that story, we stop asking better questions.
Meanwhile, the same companies that warn about AI's future harms are draining water from public reservoirs to cool their servers. They're outsourcing moderation labor to vulnerable workers in the Global South. They're laundering their influence through think tanks and safety boards while shipping unfinished models into school districts and health clinics, transit authorities and courts. These aren't contradictions; they're part of the grand design. And they persist because we're told, again and again, that there is no other way.
It's not that government couldn't act as AI raced into hyper-development; it was actively sidelined. Tech companies didn't wait for approval to unleash a black box on the world; they never do. And they feed us convincing promises that the technology is in the right hands, and that government is too slow and too ill-equipped to catch up.
Sewer AI
But to say government is incapable is defeatist at best. I understand that there's some well-earned embarrassment going around the public sphere these days, but we're ignoring the history of infrastructure itself.
Consider the case of Sewer Socialism. In Milwaukee, Wisconsin, early 20th-century socialist mayors (we had those!) built livable cities not through slogans but by laying pipes. They invested in clean water, public parks, reliable transit, and basic human dignity. The work wasn't flashy. It was slow, careful, and designed to serve, not to dazzle. That's the model we need for AI. Not a new kind of emperor, just a decent public engineer.
Every previous technological infrastructure was regulated in some way. Trains. Telephones. Electricity. The internet. None of them arrived harmless, and therefore none of them were left entirely to the market. And those policy changes didn't come from goodwill. They came from the struggle of labor movements, journalists, regulators, and people who refused to be told the future had already been decided.
Government can act. It can and does set environmental limits, enforce data rights, create standards for compensation, labor, and consent. It can build systems meant to serve rather than extract. What it needs isn't technical permission, but political will. What it lacks isn't capacity, but credibility.
What feels new is the fatalism: the system is too big, too fast, too clever for anyone to intervene now. Maybe all we can do is watch it unfold and hope someone responsible is behind the curtain. These empires didn't just build tools; they built a whole worldview, one that says scale is safety, speed is virtue, and public governance is a kind of charming failure. A worldview that recasts resistance as Luddism and collectivity as a drag on progress.
What Public AI Could Look Like
That doesn't mean we nationalize the data centers tomorrow. It means we start thinking like citizens, not users. It means public models, transparent sourcing, licensed labor, paid annotation. It means the people shaping intelligence systems are accountable not to shareholders, but to a public standard. And that public standard has to be built on more than safety optics. It needs plumbing.
We've done this before. We've built libraries and roads and weather satellites. We fund open science (or we used to). We know how to build things that last and serve everyone. We just have to remember that it's possible. The real problem is not that AI is too advanced for the public sector to catch up. It's that its power is being distributed like empire. Quietly, unevenly, and in the dark.
Don’t mistake polish for order. The empire of AI is chaotic, extractive, and unaccountable by design. Public ownership won’t clean it up overnight. But it’s the only thing that can drag it into the light.