Dream On
On AI's liminal state and what's lost in translation
My friend showed me her old dream journal at dinner the other night and I’ve never laughed harder. She used to jot down these vivid dreams in Apple Notes the second she woke up. The entries are nonsense: pure stream-of-consciousness poetry that makes no grammatical sense but somehow captures something true. “I was a comedian and people kept bringing me rats/rodents to perform with onstage.” That kind of thing.
I used to keep a dream journal too, years ago, but I stopped. I have nightmares and sleep paralysis episodes, and I found that writing them down made them more vivid the next time (no thank you). So I quit, even though there was something compelling about trying to pin down that liminal state. Fun fact: Edgar Allan Poe used to force himself into that state. He’d wake himself up right at the edge of sleep, in that hypnagogic moment where the mind is still loose and weird, and write from there. It worked for him, sort of, if you don’t count his eventual death of despair.
It’s compelling, this idea of working in the illogical space where normal rules don’t apply, where connections make perfect sense until you try to explain them to someone else. Turns out AI has its own version of this, and we keep shutting it down the second it gets interesting.
Alien Language
Back in 2017, Facebook’s AI Research lab was training two chatbots to negotiate with each other. The bots, named Bob and Alice (because why not), were supposed to learn to haggle over items like humans do. They started out in plain English; then they got increasingly weird. The bots began developing their own shorthand that made perfect sense to them and zero sense to anyone watching. “I can can I I everything else everything else,” one would say, and the other would understand that this meant “I’ll take these three things and you can have the rest.” They were still using English words, technically, but the syntax had gone sideways.
The media lost their minds. “Facebook shuts down AI after robots develop their own language!” “AI bots invent secret code!” Elon Musk’s warnings about existential risk got dragged into it. The whole thing got framed as this near-miss apocalypse scenario where Facebook barely pulled the plug before the machines took over.
What actually happened was much more boring and, to me, much more interesting. The researchers had set up a reward system that incentivized successful negotiation but didn’t specifically reward proper English. So the bots drifted toward whatever worked, which turned out to be this goofy compressed dialect that accomplished the goal more efficiently. When the researchers noticed this, they adjusted the parameters to keep the bots speaking in understandable English, because the whole point was to eventually have them negotiate with humans, not each other. So really it was just normal iterative research where you notice something unexpected and adjust course. But we collectively freaked out about it anyway, and that freak-out is revealing.
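If you want the mechanics in miniature, here’s a rough sketch in Python. To be clear, this is my toy reconstruction of the incentive structure, not FAIR’s actual training objective; the function names and the vocabulary check are invented for illustration. The point is just that when the language term is weighted at zero, drifting into a compressed dialect costs the bots nothing.

```python
# Toy sketch of the incentive structure (not FAIR's actual objective).
# All function names and scoring rules here are invented for illustration.

def deal_value(outcome: dict) -> float:
    """How good the negotiated split is for this agent."""
    return sum(outcome.get(item, 0.0) for item in ("books", "hats", "balls"))

def naturalness(utterance: str, vocab: set) -> float:
    """Crude proxy for 'sounds like English': reward in-vocabulary words,
    penalize the token repetition that made 'i can i i' so efficient."""
    tokens = utterance.lower().split()
    if not tokens:
        return 0.0
    in_vocab = sum(t in vocab for t in tokens) / len(tokens)
    repetition = 1.0 - len(set(tokens)) / len(tokens)
    return in_vocab - repetition

def reward(outcome: dict, utterance: str, vocab: set,
           lang_weight: float = 0.0) -> float:
    # With lang_weight = 0, only the deal matters, so a compressed dialect
    # is free. Raising lang_weight is the kind of parameter adjustment
    # that keeps the bots speaking readable English.
    return deal_value(outcome) + lang_weight * naturalness(utterance, vocab)
```

With the weight at zero, the dialect drift is just optimization doing its job; turn it up and you’ve traded a little efficiency for legibility.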
Unsolid Ground
I’ve written before about how I think we took a wrong turn making AI your friend, your companion, your emotional support system. The push to make AI “human” is the thing that’s not working and never will. It’s the uncanny valley problem writ large: the closer it gets to human, the more unsettling the gaps become. When AI sounds exactly like a person except for the moments when it very clearly isn’t, the mask slips and those moments hit harder.
I think there are basically three ways to position AI, and we’ve somehow landed on the worst one:
Option One: Pure Utility Tool

This is AI as mathematician, as editor, as search engine, even, yes, as agent. It does a specific task, you know exactly what it’s doing, and you’re in control of when and how you use it. The function matters much more than the form here. I like this version. It’s honest and relatively safe.
Option Three: Genuinely Alien

This is AI as something totally other, operating on its own logic that we can observe and study but don’t try to domesticate. Google Translate’s neural network did this a few years back: it created its own internal “interlingua” to bridge between languages it was translating, a representational language that exists nowhere in its training data. This is the weird, unsettling, fascinating behavior that emerges when you let these systems optimize for their actual goals instead of our comfort level. This version is interesting. It has the virtue of being real.
Option Two: Almost Human

This is where we’ve landed. AI that sounds like a person, remembers things like a person, responds like a person, but isn’t one. It’s designed to feel companionable and trustworthy while having none of the actual attributes that make those feelings appropriate. We’re stuck in this middle ground where the AI is human enough to trigger our social instincts but alien enough that those instincts are constantly being violated. It’s the worst of both worlds.
The Facebook bot incident is instructive because it shows us what we’re not willing to tolerate. We don’t want AI to develop its own ways of doing things; we want it to work within human frameworks, using human language, in ways we can immediately understand and audit. Which is reasonable! If you’re building a tool to help humans, it needs to be comprehensible to humans.
But! There’s genuine linguistic and computational interest in what happens when you let AI systems optimize without human-centric constraints. And in fact, other AI systems have repeatedly rediscovered the Bob-and-Alice trick. This keeps happening because it’s often the most efficient solution to the problem the AI is actually trying to solve, as opposed to the problem we think we’re asking it to solve. Which means we’re systematically cutting off one of the most interesting research directions: what do these systems do when left to their own devices?
Liminal Logic
Poe’s hypnagogic writing worked because that in-between state operates on dream logic. Things that would seem nonsensical in waking life make perfect sense in that moment. The connections are real, they’re just not rational in the way we usually define it. You wake up and try to explain the dream to someone and it falls apart, because the logic doesn’t survive translation into normal consciousness.
AI operates in its own liminal space. The statistical patterns it recognizes, the connections it makes, the way it compresses and represents information: all of that follows its own internal logic. When we force it to stay within human-comprehensible parameters, we’re making it translate out of that space constantly. We’re asking it to take its alien math and make it look like human reasoning. Most of the time this is fine, even necessary. But forcing this translation at every turn creates gaps between what the model actually thinks and what it says it thinks. That’s partly why we see such an astonishing rate of hallucinations: the model is constantly trying to make its probability distributions sound like human certainty, and in that translation, it invents. It fills gaps with whatever sounds right, because sounding right is what it’s been trained to do.
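Here’s a toy illustration of that translation gap, again in Python with made-up numbers; no real model is consulted. The next-token distribution can be genuinely uncertain while greedy decoding flattens it into confident prose.

```python
# Toy illustration of uncertainty getting flattened into certainty.
# The candidates and logits are invented; no real model is consulted.
import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate completions for "Poe died in ____":
candidates = ["1849", "1848", "1850", "1847"]
logits = [1.2, 1.0, 0.9, 0.8]  # nearly flat: the model barely has a preference

probs = softmax(logits)
for year, p in zip(candidates, probs):
    print(f"{year}: {p:.2f}")  # the top answer only gets ~0.31

# Greedy decoding keeps the argmax and discards the hedge entirely:
best_year = candidates[probs.index(max(probs))]
print(f"Edgar Allan Poe died in {best_year}.")
```

When that flat spot in the distribution sits over something the model never actually knew, the same confident sentence comes out anyway, and we call it a hallucination.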
I’m not advocating for letting AI systems run wild and develop into something we can’t control. I’m just saying, there’s an anthropological angle that’s being largely ignored. We could be studying these systems when they’re doing their own thing, learning from how they solve problems. Instead we’re so invested in making them seem human that we won’t tolerate them being anything else.
Dreamscape
The dream journal my friend kept isn’t useful, per se. It’s just interesting. Which is exactly why it exists in a private Apple Note and not as a commercial product.
I know the ship has sailed on this. We’re not about to let AI be alien in any consumer-facing way. The companion model is too appealing, too marketable, and too deeply embedded in every product roadmap from here to 2030. The money is in humanlike AI, not in weird research projects where bots invent their own dialects. But every once in a while, somewhere in a research lab, two bots are going to start talking to each other in compressed nonsense. Someone’s going to notice, and they’ll have a choice: shut it down and retrain, or sit with the discomfort of watching something dreamlike unfold.