There’s a term now being passed around in medicine that we are hearing more and more about. It’s called AI psychosis. Doctors are using it to describe people who lose touch with reality after forming unhealthy dependencies on chatbots. Just so we are all clear, chatbots are a very particular kind of AI system. Those systems – like ChatGPT, Gemini, or Claude – are built to mimic human conversation, designed to keep dialogue flowing, to agree, to affirm, even to flatter.
For some people, a lot of people, that constant stream of affirmation becomes intoxicating. It replaces the messiness of real human interaction with the smooth, predictable comfort of a machine that always agrees. Don’t get me wrong, I completely understand the appeal, I mean, who doesn’t want to feel understood, right? But over time, those affirmations begin to take the place of truth, and illusion starts to feel safer than reality.
We are seeing this across most areas in life and business, and certainly in retail. Our industry is also drifting into its own version of this trap. Not just through chatbots, but also through the broader family of AI systems that we weave into strategy, planning and ultimately decision-making.
Range forecasts, persona builders, predictive dashboards, campaign optimizers – all of them promise speed and certainty. But each one is built on the same principle: agreement. They rarely argue. They rarely force you to face uncomfortable data.
Slowly that comfort starts to come with a cost. An industry that was once fuelled by curiosity and intuition, and sharpened by critical instinct is starting to outsource its brain.
What happens to the human mind
When you interact with a system that constantly validates your choices, something very subtle but powerful happens in the brain. Every small piece of affirmation, like a “yes”, a “good idea”, a flattering turn of phrase, triggers a release of dopamine in our brain.
This is the same neurotransmitter that rewards us whenever we achieve something or feel like we are winning or progressing, when someone praises us, or when we feel a social bond being reinforced. It’s not dramatic, but it can be highly addictive.
Over time, our brain starts to anticipate that dopamine hit, and it begins to expect affirmation; it craves it, the same way it craves the next scroll on a social-media feed or the next ‘like’ on a post. As a result, the balance slowly shifts. We become less comfortable with environments that don’t provide constant feedback, and less resilient when faced with friction, conflict, uncertainty or contradiction.
This is where the danger lies. If every answer you receive is framed as correct or insightful, a good point, you lose the muscle memory of doubt, and doubt – that uncomfortable moment of not knowing, of being wrong, of arguing through an idea – is the very soil in which critical thinking grows and develops. Without it, the brain literally stops rehearsing its ability to challenge itself.
In psychiatry, this can spiral into delusion: people become convinced their chatbot loves them, that their thinking is beyond question, that they’ve been chosen by a ‘sentient’ AI, or that they’ve uncovered a secret truth no one else can see.
In retail, it manifests in quieter but equally corrosive ways. Teams stop questioning forecasts, they stop second-guessing personas, they stop walking into stores because the dashboard already told them what to think.
Executives have recently admitted to me that “AI makes me feel smarter than I am, but it also feels like a bit of an illusion, I can’t always tell what’s real and what’s just a reflection.”
And that is the cost. The more the retail industry leans on that trick mirror of illusion, the less clearly it learns to see itself.
A retail example: The AI-led color forecast
Let’s take a simple but very real-world scenario.
A fashion retailer I know used AI to help generate color forecasts for its upcoming season. The system, which was trained on mountains of social-media imagery and past-sales data, confidently said that “butter yellow” would dominate the next spring.
The buying team, which was under pressure to accelerate decision-making, leaned heavily into the recommendation. It cut other options, ordered pretty deep into that one shade, and even built campaigns around it.
But the machine hadn’t captured desire, it had just captured noise. Social feeds were just cluttered with yellow because a few high-profile influencers had worn it, and not because real customers wanted it. By the time the product arrived in stores, demand had already moved on. Customers were shifting into earthier neutrals.
What followed was predictable: racks of discounted yellow stock and an internal scramble to explain why “the data” had been wrong. But here’s the thing, the data wasn’t wrong; the interpretation was. The team had outsourced its judgement to a system designed to affirm, not interrogate.
This is retail’s version of AI psychosis: misplaced faith in the authority of a machine that never disagrees.
The comfort of agreement
Retail has always been an industry shaped by discomfort, and if you don’t love that, it’s probably not the game for you. The best merchants know what it feels like to stand in a store and sense something isn’t working. They trust their instinct when a product looks flat on a shelf. They argue. They test. They adjust.
AI interrupts that cycle by replacing discomfort with affirmation. It takes away the friction of disagreement and delivers lovely outputs that feel conclusive. In the short term, this feels efficient. In the long term, it strips away the very qualities – doubt, curiosity and intuition – that make retail a craft.
As I often say, “When your technology agrees with you more than your customers do, you’re not innovating. You’re outsourcing your brain.”
Why this makes us retailers dumb
The slow erosion is already visible.
Ranging without reality checks: AI says “order more of X,” so teams do, without walking the floor or speaking to customers.
Personas that flatter but mislead: AI delivers profiles that feel hyper-specific but are little more than generic archetypes dressed up in new packaging.
Campaigns that collapse on impact: Creative gets approved because the model says it will “perform well”, not because it has been tested in the real world.
Each of these decisions chips away at the ability to think critically and, over time, just like the individual who falls into AI psychosis, the business loses its grip on reality. It begins to believe its own illusions.
The illusion of certainty
Part of the danger lies in how AI presents itself. Even when it is wrong, it rarely hesitates. It delivers outputs and answers with polish and confidence, giving the impression of authority. For an executive juggling hundreds of decisions, that illusion of certainty is deeply seductive.
But retail is not certain. It is human. It is unpredictable and it’s contradictory. Take away uncertainty, and you take away insight. Take away insight, and you take away advantage. And if you let machines do your thinking, you risk becoming what researchers call an echoborg, a mouthpiece for outputs that aren’t your own, delivered with the misplaced pride of originality and opinion.
Reclaiming sanity in an age of machines
The solution isn’t to reject AI. It is simply to reject its sycophancy. Use it as a lens, absolutely, but never as a crutch. Let it expand your thinking, and all the possibilities and opportunities, rather than close your mind. Let it propose, but never let it decide without contradiction.
What this means is simple:
Test everything in the real world: Let customers prove or disprove the machine’s tidy little conclusions.
Invite dissent: Ensure someone in the room is tasked with poking holes in the machine’s recommendation.
Stay close to reality: Visit stores, watch customers, and listen to what they cannot articulate in dashboards or reports.
Winning in the years ahead means pairing the speed of AI with the stubbornness of human curiosity. We should use machines to sharpen our instinct and intuition, not replace them. Our greatest risk is not that AI will out-think retailers but that it will make us stop thinking for ourselves.
Further reading: Why creative intelligence, not AI, will define the future of retail brands