The Limits of Magic
The computer [still] only does what you tell it
My father used to teach an adult ed computer class. He'd start every session with the same exercise. "Write down everything you do in the morning."
Everybody wrote some version of: I wake up, brush my teeth, and go to work.
Then he'd push back. "So you wake up and brush your teeth—do you have your toothbrush in bed? Or do you have to get to the bathroom first? And do you go to work in your pajamas? Or do you get dressed, get in your car, do some sort of commute?"
The point was that computers need every step spelled out. You can't skip anything. That was the deal. You give precise instructions, you get precise output. If the output is wrong, your instructions are wrong. At least, that's how it used to work.
That lesson became the foundation of how I learned to code. And it stuck with me long after I stopped writing code, because the underlying idea—be precise, don't assume the machine knows what you mean—kept showing up in every other kind of work I did.
I was thinking about it again this week because I got into a fight with ChatGPT.

What Happened
I was trying to have a straightforward conversation. Nothing complicated. But ChatGPT kept pulling in information from other conversations—context I hadn't mentioned, assumptions I hadn't provided. It was connecting dots I didn't ask it to connect.
It was as if I said, "I wake up. Give me the instructions to go to work," and instead of telling me to brush my teeth, it sent me downstairs to get a cup of coffee because weeks ago I’d casually mentioned I drink coffee in the morning.
Most of the time, this works. In fact, it works so well that you stop noticing it’s happening.
ChatGPT is exceptionally good at inferring intent, filling gaps, and connecting dots. Google does some version of this too. ChatGPT just does it better—and in more human-seeming ways.
But this time it got confused. And because I'd gotten used to it guessing correctly, I didn't even realize what was happening until the answers stopped making sense.
The Illusion
My programming upbringing was all about determinism. You write code, the machine follows the code, the output is predictable. If the output is wrong, you trace the logic, fix the code, and the output is fixed. Run it again and you get the same result, every time. That was the contract.
Same instructions. Same result.
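If you want that contract in code form, my father's exercise is enough to show it. A toy sketch (my illustration, nothing more):

```python
def morning_routine():
    # Every step spelled out, in order. Nothing left for the machine to infer.
    steps = [
        "wake up",
        "walk to the bathroom",
        "brush teeth",
        "get dressed",
        "drive to work",
    ]
    return steps

# Run it a thousand times and you get the same list a thousand times.
assert morning_routine() == morning_routine()
```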
Large language models don't work that way. They are non-deterministic. The same prompt can produce different outputs depending on context, memory, framing, and whatever statistical pathways the model decides are most likely in the moment.
And it's easy to forget that, because for simple questions and straightforward tasks, it guesses right so often that it feels deterministic. Like it's following a script rather than predicting the next word.
It's not. And most of the time, the prediction is good enough that you stop noticing the difference.
It feels deterministic. That's the illusion.
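Here's a toy illustration of the difference. It's nothing like how a real model is actually built, but it captures the shape of it: instead of following a fixed rule, the answer is sampled from a probability distribution, so the same prompt can come back differently.

```python
import random

# Toy "next step" predictor. Instead of a fixed script, it samples from a
# probability table, so identical prompts can produce different answers.
# (Purely illustrative; real models learn their probabilities, and over far
# more than three options.)
next_step_probs = {
    "brush your teeth": 0.5,
    "go get a cup of coffee": 0.3,
    "check your phone": 0.2,
}

def predict_next_step():
    steps = list(next_step_probs)
    weights = list(next_step_probs.values())
    return random.choices(steps, weights=weights, k=1)[0]

# Same "prompt" five times, not necessarily the same answer five times.
print([predict_next_step() for _ in range(5)])
```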
I wrote about something like this in Awareness Is the First Casualty of Convenience—the idea that what automation often replaces isn't effort but noticing. Small decisions you used to make consciously get absorbed into a system, and eventually you forget they were decisions at all.
That’s what happened here.
I stopped noticing that ChatGPT was guessing, because the guesses were good.
The Fix (and What It Revealed)
So I asked ChatGPT how to fix the problem.
Its answer was essentially: constrain me.
Be more explicit with prompts. Define boundaries. Specify what context to ignore. Narrow the scope. Don't assume shared understanding. Limit how much prior context it can use.
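In practice, that advice looks something like this. The example is my own, not ChatGPT's wording:

```python
# A loose prompt leaves room for the model to fill gaps from memory and context.
loose_prompt = "Help me plan my morning."

# A constrained prompt spells out the boundaries: what to use, what to ignore,
# what not to assume. (My own illustration of the advice, not an official recipe.)
constrained_prompt = """Help me plan my morning.
Rules:
- Use only the information in this message. Ignore anything from earlier conversations.
- Do not assume habits I haven't stated here (no coffee unless I say so).
- List every step explicitly, in order, starting from when I wake up.
"""
```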
In other words: spell out every step. Don't skip anything. Don't assume the machine knows what you mean.
My father's exercise, all over again.
Forty years later, the technology is unrecognizably different, and the lesson is exactly the same. The computer only does what you tell it. The difference now is that it's very good at pretending it does more.
And that pretending is the whole product. Without inference, gap-filling, and probabilistic reasoning, ChatGPT would mostly be an unusually pleasant search engine.
The magic and the failure mode are the same mechanism.
When the inference points in the right direction, it feels intelligent. When it points in the wrong direction, we call it a hallucination.
What This Made Me Think About
The constraints I gave ChatGPT to make it more accurate didn't make it less powerful. They made it more useful. The more clearly I defined the boundaries, the better the output got.
That surprised me less in the context of software than in the context of organizations.
Because people work this way too.
Under pressure, people infer. They fill gaps. They optimize for speed and plausibility. And when they’re smart and experienced, the output often looks convincing enough that nobody checks the assumptions underneath.
That’s an organizational hallucination.
A team delivers something that technically solves the problem—but not the problem you thought you assigned.
A marketing team interprets positioning differently than leadership intended. A product team optimizes for a metric nobody explicitly prioritized. A new executive inherits ambiguity and silently fills it with prior pattern recognition from their last company. And most of the time, nobody notices until something breaks.
The dangerous part isn’t that people infer. Good operators should infer. Companies would move impossibly slowly if every instruction required total specificity.
The dangerous part is forgetting inference is happening at all.
What I Haven't Figured Out Yet
Part of me thinks the lesson is straightforward: communicate more clearly. Use tighter constraints. Be explicit when precision matters.
But another part of me wonders if the real issue is that we've started to mistake good guessing for understanding—not just in AI, but everywhere. When someone on your team delivers a reasonable output based on assumptions they didn’t properly check, that's the organizational version of a hallucination. It might even be useful. But it wasn't built on the instructions you gave. It was built on the instructions they inferred.
And like ChatGPT, they're often good enough at inferring that nobody catches it until something breaks.
I don't know where the line is between useful inference and dangerous assumption. I suspect it moves depending on the stakes. What my father was teaching in that classroom wasn't really about computers at all. It was something broader: when misunderstanding becomes expensive, ambiguity is not your friend.
Berkson's Bits
When I’m in a restaurant and ready to pay the bill and leave, I try to catch the waiter’s eye and make a gesture. You probably know the gesture. I put my hand in the air and pretend I’m holding a pen and writing. That’s the gesture I grew up with, the one that meant “check, please.”
I haven’t signed a bill at a restaurant in years. I still make that gesture. At what point will I need to change? Do I hold my phone up (which is how I usually pay)? Or will the mere act of getting the waiter’s attention, no matter what gesture I make, always be enough?
I don’t know.
What I'm Listening To…
I rarely listen to full albums anymore. It’s usually a song or a playlist. The last time I listened to a new album release all the way through was Beyoncé’s Cowboy Carter. Well, it happened again. In fact, I’ve already listened to the whole album several times. It will be a crime if Raye’s new album This Music May Contain Hope does not score some Grammys. I hope you enjoy it as much as I did.
The computer only does what you tell it. That was true in 1980 and it's true now. The difference is that in 1980, you knew it.
Looking forward to continuing the conversation...
Alan