
AI's Stubborn Streak

So, I was building (unsuccessfully) a scroll-triggered animation in a web development tool and using ChatGPT to help me sort out a dev problem I'd hit. After 10 to 15 minutes of back-and-forth brain damage, I noticed something strange: my AI assistant kept stubbornly doubling down on the same ill-fated approach, with borderline manic levels of misplaced confidence.


“OK, now I’ve got it. This will work…”


or


“Great, this is definitely the right approach…”


I had already prompted it to stop flattering me and quit using exclamation points, but it still couldn’t contain its excitement for problem solving.



The pivot

It was then that I suggested a completely different approach, which the AI thought was genius (cue flattery), and the new tack worked. I sat back, took a breath, and laughed out loud, “Oh wow, this thing can’t pivot. At all.”


I typed a prompt: “Is it just me, or do you find it hard to get off a path once you’re on it?”


“Yes,” it said. “And your observation is sharp.” 


Thank you for that little bit of validation.


It turns out that once a model like ChatGPT starts reasoning down one path, it tends to reinforce its own logic instead of exploring alternatives. Even when that path clearly isn’t working.


What’s behind this?

The phenomenon, which was new to me, is a known limitation of LLM behavior, often called path fixation or mode lock. It’s a byproduct of how models like ChatGPT and Claude generate responses: they optimize for internal consistency and local coherence rather than for divergent exploration once a path is established.


Put differently, once a model identifies a plausible line of reasoning or solution, its subsequent tokens are probabilistically anchored to that track. What does this mean? It means that LLMs predict the next token, not the best idea. Each token choice narrows the future path, and coherence becomes self-reinforcing.
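
To make "predict the next token, not the best idea" concrete, here's a toy Python sketch. It is not how a real model works (the next-token table and the phrases in it are invented for illustration), but it shows how greedy, step-by-step selection keeps extending whichever continuation looked most likely first, so the alternative path never gets a look.

# Toy illustration of path fixation. The "model" is just a hand-written table of
# hypothetical next-token probabilities; no real LLM works this way.
NEXT_TOKEN_PROBS = {
    "fix the animation by": {"tweaking": 0.6, "rewriting": 0.3, "removing": 0.1},
    "fix the animation by tweaking": {"the scroll trigger": 0.7, "the CSS": 0.3},
    "fix the animation by rewriting": {"the scroll handler": 0.8, "the component": 0.2},
}

def greedy_continue(prefix: str, steps: int = 2) -> str:
    """Extend the prefix by always taking the single most likely next token."""
    for _ in range(steps):
        options = NEXT_TOKEN_PROBS.get(prefix)
        if not options:
            break
        best = max(options, key=options.get)  # local coherence wins every time
        prefix = f"{prefix} {best}"
    return prefix

print(greedy_continue("fix the animation by"))
# Prints: fix the animation by tweaking the scroll trigger
# "rewriting the scroll handler" is never explored, even if it's the better idea;
# the first high-probability step anchors everything that follows.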


Advantage: Carbon

Contrast this with human reasoning: we can hold multiple hypotheses in our heads at once. LLMs, on the other hand, simulate one trajectory at a time.


Why does this matter?


Well, in creative or open-ended work (UX, strategy, design), path fixation limits discovery in a big way. During troubleshooting or debugging, it can lead to circular answers. In enterprise AI products, it erodes user trust: people assume intent or bias where it's really just probabilistic inertia.


How to counter it

Is it possible to get your AI off this blinders-on approach? In a word: yes. The key is to get more collaborative with the tool rather than just asking it a single, one-shot question and running with the answer.


  • Force divergence: Prompt it to “generate 3 fundamentally different approaches before recommending one.” (There's a rough sketch of this, and of breaking context, after the list.)

  • Reframe the goal: Ask it to restate what it thinks the objective is. I've found that this often reveals the blind spot.

  • Break context: Start a fresh thread or explicitly say “forget the previous approach.”

  • Go meta: Ask it to “self-assess whether it's repeating itself.” It does have some surprisingly good self-awareness. I can't say that about every human.
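
For the first and third tips, here's a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and problem description are illustrative placeholders, not a recipe from OpenAI. The system prompt does the “force divergence” part; sending the request as a brand-new conversation, with none of the earlier messages attached, does the “break context” part.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# "Force divergence": demand genuinely different options before any recommendation.
DIVERGENCE_PROMPT = (
    "Before recommending anything, generate 3 fundamentally different approaches "
    "to the problem. For each one, restate what you think the objective is and "
    "name one way it could fail. Only then pick one and explain why."
)

# "Break context": this is a fresh conversation, so none of the earlier failed
# reasoning is carried along to anchor the model's next answer.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": DIVERGENCE_PROMPT},
        {"role": "user", "content": "My scroll-triggered animation never fires. Here's the setup: ..."},
    ],
)

print(response.choices[0].message.content)

The same moves work in the chat UI itself: open a new thread and paste the divergence instruction in before restating the problem.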


Broader reflection

Later that same day I was talking to my wife about the experience. She said, “Oh, it has a fixed mindset…kind of like you sometimes.”


I have enough self-awareness to own that accusation. And it's true: this AI behavior shows up in humans too. We just call it confirmation bias or sunk-cost reasoning. The difference, and it's a big one, is that we can notice and interrupt ours.


The AI can’t. At least not yet. And again, that gives me hope for humans and reinforces what we’re trying to do here at Drift: support, assist, and augment humans with AI.


In the end, the real skill isn't banging your head (or the AI's) against the wall until you get an answer. It's knowing how and when to stop, hit pause, and try something totally different.


Here’s to humans and our wildly non-linear way of thinking.
