Anthropic just released something called AI Fluency, a full curriculum with lesson plans, teaching notes, and a framework of four D's (delegation, description, discernment, diligence) that promises to train people to use generative AI responsibly. It's free, well-designed, and already being adopted by universities and workplaces. By all accounts, it's a thoughtful response to a real problem.
I’m for it. People are copying and pasting AI outputs without thinking, citing sources that don't exist, accepting confident-sounding nonsense as fact. Teaching someone to delegate a task clearly, check what comes back, and take responsibility for the output is the bare minimum we need right now.
And yet. We've jumped straight to teaching people how to work with AI without first teaching them the kind of deep reading (call it literacy) that would help them recognize when something's off about the writing itself. And maybe more importantly: why isn't the expectation that AI should get better at reading us, rather than the other way around?
The Thing You Can't Systematize
There used to be this thing people did: reading just to read. Not for school or work, just because they wanted to. Imagine!
When you read and read and read, you reach fluency in every sense. Your brain starts picking up patterns you can't exactly name. You develop a certain radar: the ability to sense when something's off about a piece of writing, even if you can't immediately explain why. It's not a skill you learn deliberately. It has nothing to do with fact-checking and everything to do with recognizing when something sneaky is going on.
I fear reading for the pure love of it is becoming a lost art. Anyone who still reads this way can immediately tell when something was written by AI. It lacks something you can only describe as lived-in-ness. It's writing that has never stubbed its toe, never had its heart broken, never sat in traffic wondering what the hell it's doing with its life. It's technically proficient prose that has never actually experienced anything.
The Anthropic curriculum is designed to make us fluent in AI. But why isn't the goal to make AI fluent in us? Think about what fluency actually means between humans. More than grammar rules or a big vocabulary, it's understanding context, subtext, the thing that's not being said. Real fluency comes from living alongside language for years, from using it to navigate actual human experiences: love, loss, boredom, panic, that weird feeling you get when you smell your childhood home. AI has access to all the words we've ever written about these experiences, but it has never had them, and that shows up in the writing.
AI can mimic the structure of a personal essay, but it has no personal stories to tell. It can generate poetry that scans perfectly and uses sophisticated metaphors, but those metaphors aren't drawn from a life lived. In fact, they're sad and sort of pathetic: statistical combinations of how humans have used language to describe their own lives.
Too Fast, As Always
The Anthropic curriculum is designed to be efficient. You can work through it in a reasonable amount of time and come out with practical skills for managing AI systems. That efficiency is both its strength and its alarm bell.
What years of reading for pleasure builds can't be rushed (emphasis on: that is okay!). The real learning happens slowly, imperceptibly, in the spaces between books. It's the accumulation of small recognitions, in everything from dialogue to style to meaning. You can't compress that into a tidy framework. There's no lesson plan for developing intuition, and that's why I worry about the order we're doing things in. We're rushing to teach people AI fluency while skipping the deeper literacy that fluency depends on, the kind that would qualify anyone to teach it in the first place. It's like reaching for spell-check before telling someone why writing even matters.
Don't get me wrong. People need to learn how to use these systems without being used by them. But when we treat that as the primary literacy, we miss something crucial. The kind of deep reading I'm talking about teaches you to notice not just what a text says, but how it's working on you. When you've spent enough time with literature, you start to recognize how rhythm affects meaning, how structure shapes argument, and how silence can be more powerful than what's spoken.
This matters for AI in ways that go beyond rote fluency training. A person who has learned discernment through literature brings a different kind of attention to AI outputs: they're reading for presence, and for its absence.
Asymmetry
In all of this AI fluency-ness, we're the ones meeting the machine halfway, learning to communicate in the simplified, structured way that works best for it. But humans don't communicate in prompts. We deal in implication more than explicit instruction: the look that means "get me out of here," the story that goes absolutely nowhere but speaks volumes.
Real human fluency is messy and highly contextual. You understand what information means to a person who has to live with it, who has to make decisions based on it, who has a complicated history wrapped up in how they interpret the world.

Teach people to delegate tasks to AI systems, absolutely. Teach them to write clear prompts and check outputs carefully. But don't let that be the only fluency we value. Keep teaching the slow, seemingly impractical work of reading deeply. Keep assigning the books that don't have obvious applications, the poems that resist easy interpretation, the essays that leave you with more questions than answers.
When you read deeply for the love of it, you develop the skills that seem most useless but turn out to be exactly what you need for navigating a world full of very sophisticated systems that can mimic human communication but can't actually participate in it.
AI fluency will teach you to manage the machines. But first, maybe, we all just need to pick up a book, forget why, and lose ourselves.