Tastemaker
On editorial instinct in AI
You know someone with great taste. Everyone does. Maybe it's the friend whose apartment you'd move into tomorrow no questions asked, or the colleague whose one-line note on your draft is the only note that matters. You've never quite been able to articulate what they have, only that they have it, and that being around them raises your standards a little without you realizing it. At some point you started paying attention to what they paid attention to, which is the sincerest form of flattery.
But where does their taste come from? Have they read a lot? Probably. Have they had a good teacher, a formative humiliation, an obsession that got out of hand? Almost definitely. I’d also bet that somewhere in their history, they did something repetitive and unglamorous for long enough that their standards got calibrated in an important way. They’ve had a long, tedious acquaintance with the thing itself. In other words, taste is a contact sport.
Art & Science
You’ll hear a lot about “taste” in AI circles right now. Also craft, curation, editorial judgment, or human touch. The story is that the models got very good very fast, and now the output is everywhere, and most of it is fine, and what separates the good stuff from the slop is some quality of discernment that turns out to be surprisingly hard to automate. So taste, it seems, is back. Every think piece about the future of creative work has discovered, with the energy of someone finding twenty dollars in an old coat, that humans with highly developed aesthetic judgment are still relevant, possibly indispensable, probably the whole point.
Which is lovely! Except that the editor never left, and the curator has always been essential. The people with highly developed aesthetic judgment were here the whole time, doing the same work they’ve always done, largely without fanfare, in a market that hadn’t previously shared this assessment of their indispensability. The conversation briefly stopped including them and is now swinging back around, a little breathless, to say oh, there you are.
It’s worth understanding that the people who built AI were, overwhelmingly, engineers and researchers building tools they themselves wanted to use. That’s not a criticism; it’s just how it is. You solve the problem in front of you, and the problem in front of a software engineer is a software problem. Claude Code is objectively extraordinary; GitHub Copilot changed how a lot of people work. The ratio of engineers to everyone else in this industry has always been lopsided, and so the tools that got built first, and built best, were the ones that made sense to that ratio. Cool, great. But as is becoming apparent, people aren’t primarily using AI to write code. They’re using it to work through ideas and personal problems, draft things, reshape things, and have what are essentially editorial and creative conversations with a machine, at scale, all day long. They’re asking, in other words, for exactly the thing that taste is made of.
I’ve said it many times. Language models are, at their foundation, a language problem. The math of it all is just the vehicle, IMHO. What we’re actually trying to build is something that understands not just what words mean but what they do, and that’s taste. It’s not really a specification you can write in a pull request (although we’ve made it so). Taste, it turns out, is core to the model, not a layer you add on top once the hard part is done.
AI is not, and has never been, a purely mathematical enterprise handed down from a mountaintop of abstraction. Yes, the architecture is intricate, the math is real, the researchers working on it are formidably intelligent people doing hard things. But running alongside all of it from the beginning has been a current of work that’s editorial. Someone has to decide what good output looks like. Someone has to notice when the model is technically correct but somehow completely wrong, wrong in the way that makes you wince. Someone has to read ten thousand examples of the model hedging in exactly the same annoying way and figure out how to describe why it’s annoying in terms the training process can use. Someone has to care, repeatedly, about small things.
And the mathematical construct and the editorial judgment need not be in competition. They’re both load-bearing in different ways. What we’re seeing now is that the gap between “generated” and “good” has become visible enough that people are using the word taste out loud, in consequential settings, as a thing to be cultivated and valued.
Tasty
Back to the person you know with great taste. They didn’t develop it by doing tasteful things, per se; they’ve just put in the hours to recognize that which can go unnoticed. This is the part that’s hard to systematize and therefore easy to undervalue. Taste requires exposure and intelligence and opinions, sure, but it’s also what happens when your judgment has been tested against reality enough times that it starts to get calibrated. It’s instructive to be wrong sometimes and take the time to figure out why. When you’ve cared about the quality of something that didn’t require you to care, in conditions that didn’t particularly reward caring, and you cared anyway because by that point you couldn’t really help it: that’s when you’ve developed taste.
There’s accountability built into certain kinds of work that’s hard to replicate in the abstract: being answerable to a standard you didn’t really set and can’t really negotiate with. What that builds over time is a relationship with quality that’s more than theoretical, one that lives somewhere more ethereal than a set of principles you consciously apply, which is probably part of why the industry is having trouble acquiring taste on a timeline that suits anybody.
The industry is now very motivated to understand taste, to find and hire for it, to build systems that can approximate it. It’s being approached like a new capability to be developed, or a thing that can be learned quickly if you find the right framework for it, and I’m not sure that’s how it works. I’m not sure you can decide to value craft in November and have it meaningfully integrated by Q2. The people who have it acquired it slowly and incidentally, through work that didn’t announce itself as formative at the time.
This doesn’t mean the industry is stuck, just that the people who’ve been carrying editorial judgment all along probably need to be in the room where the decisions get made, not consulted after the fact to sand the edges off. It means that there’s probably an unsung hero hiding in plain sight, someone who neglected to tell you that they have a past life as a tailor or a baker or a pianist or whatever, nothing to do with AI, and you should go find them and ask what they think about all this.


