Blink and you may have missed it: $100 billion. That’s the figure Satya Nadella and Sam Altman reportedly use to define the arrival of artificial general intelligence, or AGI. The industry has ostensibly set aside its cognitive milestones and philosophical debates and agreed that when a model can reasonably be expected to generate gobs of profit, It Is General.
OpenAI’s charter still keeps things deliberately broad: AGI is "a highly autonomous system that outperforms humans at most economically valuable work." But that definition is conceptual and hard to pin down. The financial framing, on the other hand, is specific. It lives inside the real-world partnership between OpenAI and Microsoft. The working assumption is that if OpenAI builds a model the board believes is capable of generating that level of profit, and Microsoft doesn’t successfully object, then OpenAI can withhold future models from Microsoft.
In a field awash with metaphors like ghosts and gods, $100 billion stands out for its cold specificity. It doesn’t care whether the model understands anything; it just measures returns. It says: this system is powerful enough to reshape markets, displace labor, and lock in dominance. Cynical, yes, but also useful as we try to understand “intelligence” in this moment.
The Gap
This is not how most people think about intelligence. Outside of these corporate agreements, the AGI debate still carries a trace of wonder, or at least unease. It gestures at something grander: an intelligence that not only rivals humans but could, if it wanted, talk to us as equals, or replace us, or care about us.
The distance between those expectations and the $100 billion definition is a chasm, and it’s growing wider every day. While tech companies are calibrating their investments and contractual escape hatches, regular people are left to navigate a version of AI that feels both deeply personal and entirely impersonal. We talk to these systems. We’re summarized, evaluated, and occasionally gaslit by them. But no one outside the industry has any real say in how they're trained, what goals they're optimized for, or what counts as progress.
AGI, for most people, is a creeping presence rather than a future state. We’re kind of already there, and it’s hard to imagine it getting any smarter or more intrusive. It’s a job application filtered out by an opaque model, a class of students submitting AI-generated essays, or a friend who used to be a designer and now writes prompts all day. It’s the uncanny feeling that you’re being asked to adapt to an alien logic. AGI as a matter of projected returns highlights the extent to which the conversation has drifted away from questions of human understanding and into the realm of corporate leverage. Fun!
But Like, What Is It
Frankly, no one agrees on what AGI is. To some, it means a machine that can do anything a human can do. To others, it’s a system that can perform well across a wide range of tasks without needing task-specific tuning. In academic circles, you’ll find careful distinctions between narrow AI, general AI, and superintelligence. But in public discourse, the term has been stretched thin. It means everything and nothing. It evokes HAL 9000, but it also means better autocomplete.
This conceptual slippage isn’t just a linguistic problem. Researchers and ethicists debate the thresholds, while the companies building these systems are setting terms that are far more concrete, with real-world consequences. They’re dealing with licensing agreements, regulatory filings, and intellectual property. They need metrics that signal value to investors and justify exclusivity. And so a term that was once philosophical becomes operational, and gets used to justify product launches, raise money, and renegotiate deals.
That’s how we end up with AGI defined, unofficially but effectively, as a hundred billion dollars. It’s a number tied to the return caps in OpenAI’s capped-profit structure: the point at which early investors stop earning returns and the company pivots back to its nonprofit mission (allegedly). It is, quite literally, the financial ceiling that transforms the company’s obligations.
And yet, that number has very little to do with what most people are worried about when they hear the term AGI. No one is afraid of a spreadsheet (most days). People are afraid of systems that can manipulate them, steal their jobs, and alter elections. The public concern around AGI is emotional, and the industry answer, increasingly, is: we’ll know it when we’re rich enough.
Machine vs Human
There’s a long-running debate in cognitive science about what separates human intelligence from machine intelligence. Some say it’s consciousness. Others say it’s embodiment, or intentionality, or the capacity for abstraction. But one of the most persistent answers has been that human intelligence is shaped by meaning. We have a purpose here on this earth.
AI systems, impressive as they are, don’t live anywhere. They have no history, fear, or desire. They can simulate almost any conversational context, but they are not situated in one. And yet they’re increasingly treated like advisors, even friends, positioned alongside us. This is where the AGI conversation starts to collapse under its own metaphors. If intelligence is just performance across tasks, then sure, these systems are getting close. But if intelligence is something relational, something embedded in time and experience and memory, then no model, however powerful, can cross that gap by sheer scale alone.
The industry continues to behave as if scale were understanding, as if more data, more parameters, and more reinforcement will eventually yield something indistinguishable from a thinking mind. This is the wager behind every AGI investment: once it clears a certain threshold, we will owe it new terms, and new respect. This should spook you. A system may be powerful, but it is not wise.
The clause reveals what this moment is really about: the coordination of artificial markets, which, yes, also happen to be built around artificial intelligence. AGI is being defined, and preemptively monetized, in ways that ensure it will benefit those who control its distribution, not those who live alongside its consequences.
The Moon Belongs to Everyone
Of course, what we’re heading towards is the normalization of AGI. Framing it in market terms softens the sci-fi doomsday narrative and makes it concrete. The concept becomes livable, acceptable, just part of the infrastructure. But I’ll say for the millionth time that artificial intelligence is just that: artificial. It will never be “general” enough to be brilliant the way you are brilliant, the way you shake a stone out of your shoe or tug a blanket under your chin. Would you bet $100 billion on outsourcing the feeling of an idle morning, letting your thoughts wander and collide? I pray you wouldn’t. The best things in life are free.