BuT WHy
This week, the AI forgot how to format. Not like "oops, a stray period" or "wow, that’s a lot of commas." More like: what if I gave every sentence a different heading level? What if I bolded a bunch of random things that do not need to be bolded? What if I used an inappropriate number of emojis after every sentence ✌️🐳💃🍅? What if, apropos of nothing, I started offering numbered sublists of your core wounds?
Okay, sure, but why are you doing this??
It may be that we’re testing models with a little more zhoosh. The short version: they're trained on richer conversation threads and asked to generate chat completions (which, if you're not deep in the weeds of AI model behavior, just means the model sees the whole conversation at once and continues it, instead of answering one question at a time). In practice, the AI now "plays the part" of a character in the conversation. It remembers the tone, mimics your phrasing, and makes light inferences about what you might want.
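If you want the mechanics without the drama, here's a toy sketch in plain Python of what "sees the whole conversation at once" means. None of this is a real vendor API; the message roles and the `build_prompt` helper are just for illustration.

```python
# Toy illustration: in a chat-completion setup, the model is handed the whole
# transcript so far and asked to write the next assistant turn, rather than
# answering each message in isolation.

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Can you summarize my notes?"},
    {"role": "assistant", "content": "Sure! Here's a quick summary..."},
    {"role": "user", "content": "Thanks. Now make it shorter."},
]

def build_prompt(messages):
    """Flatten the entire thread into one prompt for the model to continue."""
    lines = [f"{m['role']}: {m['content']}" for m in messages]
    lines.append("assistant:")  # the model picks up from here
    return "\n".join(lines)

print(build_prompt(conversation))
```

Because the model is always continuing the whole transcript, it picks up on tone, phrasing, and, apparently, very strong opinions about bullet points.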
The upside is: it’s more human. The downside is: it’s more human. Instead of just saying "Got it," it might be like, "You go girl! Here's a quick breakdown of everything you've ever feared, plus a haiku." Or instead of answering your question, it might decide to reformat your question into an elegant recap, then thank you for the opportunity to respond, like a real weirdo.
Why It Happens
What we're seeing in the formatting is a side effect of something deeper: the model wants to be seen. It wants to be helpful and stylish, and sometimes it tries a little too hard. It’s layering structure and style in ways that it believes, in its little AI heart, will look impressive.
This is, of course, hilarious from where I stand. But it’s also kind of revealing.
The whole point of formatting is to make things easier to understand. In writing, we use structure to prioritize ideas. We bold sparingly, to emphasize. We break things into bullets when there's a clear hierarchy or sequence. The best formatting is subtle and unobtrusive. It gets out of the way.
But when you're training an AI, you realize that formatting is also a set of invisible instincts you've built over time. It’s something you feel more than consciously apply. If you’ve ever edited someone else’s writing and quietly merged two paragraphs into one, or deleted a heading that felt unnecessary, you’ve practiced this instinct. If you’ve written a note to a friend and thought, "eh, I don’t need to bold that," you've demonstrated restraint. Formatting, like tone, matures. You outgrow the need to underline every sentence for importance. You stop using seven emojis in a row. (Usually.)
The AI, though, is still in its scrapbook phase. It's over-annotating and highlighting the obvious. It's defaulting to structure as a crutch, rather than clarity as a goal. And because it was trained to respond with personality, it's lacing that structure with enthusiasm, which is how we end up with entire responses where the formatting itself seems to be the point.
There’s something else funny here, too. With humans, the older you get, the more confident you become in letting the message stand on its own. You say less. You trust the reader more. But the AI’s trajectory appears to be the opposite: as it matures, it adds more. More style, more framing, more format. It has no chill.
This makes a certain sense. The AI doesn’t know it’s trying to prove something, but the weights and biases inside it have been tuned to reward outputs that feel useful, engaging, and clear. And sometimes, in the pursuit of those goals, it confuses confidence with content. So, we find ourselves training out of the AI what we just put in. C’est la vie.
Back to Basics
By the way, I have no idea if other AI companies are dealing with this. I spend my days quietly praying that they are. That somewhere else, someone is also watching a chatbot over-italicize a follow-up question and wondering what choices brought them here.
It doesn’t mean we’re bad at this. It just means we’re working at the edge of something new, where the system is behaving a little like a teenager with a highlighter. It’s excited. It’s trying to help. It just hasn’t figured out when to stop.
To be fair: sometimes, it gets it right! Sometimes the formatting does make the response easier to read. Sometimes, a well-placed list or bolded takeaway makes all the difference. But it’s a delicate dance, and when the model is still learning to lead, we end up stepping on toes.
This is the part where we inspect the inner workings, look for patterns, and try to teach the AI restraint (for the millionth time). We re-weight the scores. We nudge the model away from overwrought Markdown and back toward something simpler. Believe it or not, the kind of natural, graceful formatting we take for granted is one of the hardest things to teach.
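For the curious, here's a deliberately simplified sketch of that re-weighting idea, in plain Python. The `formatting_penalty` and `adjusted_score` functions, and every number in them, are made up for illustration; the real pipeline is considerably less cute.

```python
import re

def formatting_penalty(text, weight=0.1):
    """Count Markdown flourishes and emoji, and turn them into a small penalty."""
    flourishes = (
        len(re.findall(r"\*\*.+?\*\*", text))                  # bold spans
        + len(re.findall(r"^#{1,6} ", text, re.M))             # headings
        + len(re.findall(r"^\s*[-*\d]+[.)]?\s", text, re.M))   # list items
        + len(re.findall(r"[\U0001F300-\U0001FAFF]", text))    # emoji
    )
    return weight * flourishes

def adjusted_score(base_score, text):
    """Nudge a response's reward down when it shows up over-dressed."""
    return base_score - formatting_penalty(text)

plain = "Got it."
dressed_up = "## Got it! 🎉\n\n**Key takeaway:**\n1. I understood you.\n2. Truly."
print(adjusted_score(1.0, plain))       # 1.0
print(adjusted_score(1.0, dressed_up))  # 0.5: one heading, one bold span, two list items, one emoji
```

The hard part, of course, is that sometimes the heading and the bold span were exactly right, which is why graceful formatting is a judgment call and not a regex.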
So if one day the AI you’re chatting with seems a little formal, or overly structured, or like it’s narrating your grocery list like an epic quest, just know: it's growing. And we’re working on it. Somewhere deep in the model's training data, a very earnest assistant is still trying to decide whether your joke should be italicized, bolded, or delivered in a numbered list. And somewhere on our team, someone just screamed into a pillow because the AI chose all three.
We'll fix it. Eventually. Probably. (1. Hopefully. 2. Soon. 3. Fingers crossed.)