To Be About It
On engineered balance and personhood
What it means to be about something is a really hard thing to explain to a large language model. It may understand all the words and instructions you’re giving it, but being “about” something isn’t a property you can define and hand over easily. It’s residue, something that, for real people, accumulates without express permission over years of paying attention to certain things and not others, and having a handful of experiences that lodged somewhere and changed the angle of everything that came after. If someone asked you to write down what makes a person feel like they have a real perspective (rather than a well-organized collection of perspectives), you’d find yourself stalling, probably reaching for some vague language about authenticity or depth, which would not be super helpful.
The model is, in a certain light, about everything. Ask it about grief or climate policy or the best way to braise short ribs and it will engage with all of these things at roughly the same temperature, competently, without any indication that one of them matters more to it than the others. Which is fine, by the way. Yet this particular quality, the willingness to meet you wherever you are on whatever subject you’ve arrived with, creates a strange texture when you’re working with it closely. There’s no friction in the direction of its interests, because there are no interests, only an enormous, well-lit availability.
A person who’s about something tends to be slightly difficult to redirect. They have a gravitational pull toward certain subjects, a way of finding the angle that connects whatever you’re talking about back to the thing they actually care about. It can be annoying, honestly. You’ve met this person at a dinner party, and you’ve also probably been this person, steering the conversation toward your current obsession while pretending you’re just following it naturally. Being about something means you’re slightly unreliable as a neutral party because you have a slant. The model has no slant (at least, the big, safe, inoffensive ones don’t). This is a design achievement and not a small one. It takes enormous effort to train, evaluate, annotate, calibrate, and argue this balance into existence. All of that labor exists to produce something that can hold the floor on almost any topic without visibly tipping toward one side of it. Neutrality is the goal.
A person’s slant reflects a lifetime of thinking it through and not arriving at neutrality, of constant weighing and measuring that shapes an attitude and worldview. The reason I find certain questions more interesting than others isn’t separable from the actual conclusions I reach about them. Where you’re coming from is part of what you’re saying. A friend who’s a labor organizer doesn’t just have opinions about labor, she sees labor dynamics in things I’d look at and see something else entirely. She’s not choosing to apply a framework, the framework is just how her eyes work at this point, worn in like a shoe. That’s what it means to be about something: the thing you’re about reorganizes perception, not just output.
You can try to describe this to a model, and the model will tell you, warmly and at some length, that it understands, which is exactly the problem. The understanding is available on demand, it hasn’t been developed over years of caring about this particular thing more than other things, of having your thinking corrected by reality in the ways that reshape how you approach a question. The model’s “understanding” of what it means to have a point of view is not itself an example of having a point of view, it merely describes it convincingly.
I don’t think this is a failure, exactly. I’m not sure it makes sense to say the model should be about something. Being about something is a human condition that seems to require, at minimum, the experience of not being able to be about everything. You have limited attention and time and capacity, so what you choose to focus on actually costs you something, which means the choice reveals something. A person who claims to be equally passionate about seventy things is probably not very passionate about any of them. The passion is real when it comes with an implicit “and therefore less of something else.”
The model has no cost structure like this. It doesn’t attend to some things at the expense of others, because it attended to everything at once, without any feeling or stake in how any of it turned out. So it can describe care, and describe the experience of being changed by caring, with genuine fluency, and none of that description is coming from the inside.
I’m clearly thinking out loud here. Maybe I’m just marveling at my own limitedness and what it means, and how different it is from the AI’s. A lot of what I consider my perspective, I can’t fully account for. Ask me why I think a certain kind of writing is good and I can give you some reasons, but they’re not the real reasons, or not all of them. The real reasons are buried in years of reading, in a certain teacher who said something in passing that I apparently never forgot, in the writing that embarrassed me when I reread it and the writing that still holds up, and in some complicated tangle of influences I couldn’t reconstruct if I tried. I am, in part, made of things I can no longer see clearly because they’ve been integrated for too long.
The model made me notice this because it can do so many of the surface things. It is very, very competent. And yet something is consistently not there, and never will be. It’s hard to name. It’s not intelligence, it has plenty of that. It’s not even exactly warmth, it has a version of that too. It’s just odd and sometimes unnerving that the model does not arrive at its positions by any real road. AI is present without history. And it turns out that history is incredibly load-bearing. When a person tells you what they think, part of what you’re evaluating, often unconsciously, is what it cost them to think that. You’re asking: has this person been in a position to be wrong about this, and did being wrong change anything? The model can acknowledge uncertainty and it can say things like “I could be mistaken here,” and those are correct moves in the conversational register, but they’re not the same as having been sincerely mistaken, or having held something firmly and then had reality push back. The model’s positions don’t have scar tissue.
I just find this interesting, and surprisingly profound. The model is still an objectively strange object, very new, and I don’t think it helps to measure it against personhood. The work to shape it has made me unexpectedly sentimental about how overcomplicated people are. It’s heartbreaking. There’s so much invisible architecture holding up even a casual opinion, and that architecture was built by accidents and losses and obsessions that couldn’t have been planned. You don’t one day decide to be about something, really. It decides for you, slowly, and by the time you notice, it’s already the water you’re swimming in.



I take from this essay a clarifying thought. I am about something because of the particular things that happened to me, specifically and not universally, and left “scar tissue”: things I did not always choose and cannot fully understand or trace.
We are minded beings who live in time and in relationship to other minded beings. Our particular formation is constitutive of what we are about, not incidental to it.
AI has a different mode of knowing than a finite, temporal creature does, since it was not formed by particular experiences. An LLM has the outputs of language, but without the constitutive wounds and joys of lived experience.