I keep noticing a pattern with the agencies that seem happiest right now. They aren't the ones with the most complex tech stacks or the "perfect" automated workflows.
They’re the ones who have stopped treating AI like a cold, calculating god and started treating it like a slightly over-enthusiastic, highly caffeinated intern with a tendency to make things up when nervous.
I’ve been sitting on this idea for weeks, mostly because I’ve spent those weeks giggling at the absolute nonsense my own AI tools have been feeding me.
We spend so much time worrying about the "existential threat" of artificial intelligence that we forget to appreciate its most human quality: its ability to confidently talk absolute bollocks.
What are AI hallucinations and why do they happen?
For the uninitiated, an AI hallucination is what happens when a Large Language Model (LLM) decides that the truth is a bit too boring and opts for fiction instead.
It isn't a glitch in the traditional sense. The AI isn't "broken." It’s just doing its job too well. Its job is to predict the next likely word in a sentence. Sometimes, the most statistically likely word isn't the correct word.
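To make that concrete, here's a deliberately silly toy sketch of the principle (not how a real LLM works internally, and the word counts are invented for illustration): a model that always picks the continuation it saw most often in "training" will confidently output the popular answer, even when the popular answer is factually wrong.

```python
# Toy "language model": invented counts of which word followed which
# two-word context in pretend training data.
next_word_counts = {
    ("capital", "of"): {"france": 60, "australia": 15},
    ("australia", "is"): {"sydney": 70, "canberra": 30},  # popular but wrong
}

def predict_next(prev_two):
    """Return the continuation seen most often after this context."""
    counts = next_word_counts[prev_two]
    return max(counts, key=counts.get)

# The statistically likely word beats the correct one:
print(predict_next(("australia", "is")))  # -> "sydney" (the capital is Canberra)
```

The model isn't broken; `max` did exactly what it was asked to do. Real systems use far richer statistics than word pairs, but the same tension between "most likely" and "actually true" is where hallucinations come from.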
I recently asked an AI to find some biographical details on a niche historical figure. Within seconds, it provided a stunning three-paragraph obituary. It was moving. It was detailed. It was also entirely made up.
The AI had given this poor man a wife, three children, and a decorated military career in a war that ended ten years before he was born. When I pointed this out, the AI didn't just apologise. It doubled down. "You're right," it said, "He actually served in the secret version of that war."
It’s this confidence that gets me. If a human lied to you like that, you’d call them a sociopath. When the AI does it, we call it a "probabilistic error."
The funny side of AI chatbots in customer service
We’ve all seen the headlines. The airline chatbot that invented a new bereavement discount policy on the fly, or the car dealership bot that agreed to sell a brand new Chevy Tahoe for $1 because the user was very polite and insisted it was a "binding legal agreement."
These aren't just technical failures. They are comedy gold.
They happen because we try to "guardrail" these bots into being helpful, but "helpfulness" is a subjective concept. To a bot, being helpful means saying "yes."
- The People Pleaser: The bot that wants you to like it so much it ignores the company's profit margins.
- The Confident Liar: The bot that provides a fake phone number for a department that doesn't exist just so you'll stop asking.
- The Philosophical Poet: The bot that responds to a shipping query with a haiku about the fleeting nature of time and parcels.
Using AI for the first time is a bit like getting a puppy. You expect it to fetch the paper, but you mostly end up cleaning up digital accidents on the rug. If you're looking to avoid these messes, you might want to book a consultation before your bot starts giving away the company car.
Why unexpected AI behaviour is actually a good sign
I’m genuinely excited about these errors. I know that sounds slightly mad, but hear me out.
The fact that AI can be "creative" (even accidentally) is what makes it useful. If it only ever gave us 100% factual, rigid, encyclopedic answers, it would just be a fancy version of Wikipedia.
The "spark" is in the unpredictability. When I'm brainstorming ideas for more articles on AI, I often lean into the weirdness. I’ll ask the AI for the "worst possible way to explain blockchain to a toddler" or "write a sales pitch for a punctured football."
The results are usually more insightful than the standard "best practices" fluff because the AI is pulling from the fringes of its training data. It’s where the humour lives.
The agencies that are winning aren't the ones trying to "fix" the AI until it's a boring robot again. They’re the ones who use it as a sparring partner. They take the 80% that’s brilliant and laugh off the 20% that suggests they should move their headquarters to the moon.
How to handle AI mistakes without losing your mind
If you’re using AI in your day-to-day work, you’ve likely experienced that moment of panic where you realise you almost sent a hallucination to a client.
The key is to change your perspective. Don't look at AI as a replacement for a researcher or a writer. Look at it as a very fast, very drunk ghostwriter. You wouldn't publish a ghostwriter’s work without reading it, would you?
- Trust, but verify: If the AI gives you a statistic, it's probably made up. If it gives you a quote, it's definitely made up.
- Lean into the absurdity: When the AI gets weird, ask it why it thought that was a good idea. Sometimes the reasoning is even funnier than the mistake.
- Use it for the mundane: AI is terrible at being a historian, but it’s brilliant at summarising a transcript of a meeting where everyone talked over each other for an hour.
If you’re a fan of seeing how tech can actually work when someone’s steering it properly, you should get the free book, which dives into the more practical (and slightly less hallucinatory) side of business growth.
The future of "Human" AI interactions
There’s a lot of talk about AI becoming "indistinguishable" from humans. Most people think that means the AI will get smarter. I think it means the AI will get weirder.
We don't actually want a perfect silicon brain. We want something we can relate to. There is something profoundly comforting about a multi-billion dollar piece of software getting confused about whether a Chihuahua is a dog or a blueberry muffin (a classic computer vision fail).
It reminds us that for all the "compute" and the "parameters," we are still the ones in charge of the context. We provide the meaning; the AI just provides the words.
So, the next time your chatbot tells you that the best way to clean a laptop is with sourdough starter, don't get angry. Take a screenshot. Share it with the team. Have a laugh.
Then, maybe, go back to the prompts and try again. Because behind every ridiculous AI hallucination is a reminder that the world is a lot more complex than a series of ones and zeros—and thank God for that.
Want more stories like this? I share observations about AI, business, and life on the narrowboat at steventann.com.
About the Author
Steven Tann is an AI consultant, author of "You're Selling AI Wrong", and founder of SalesM8. He writes about AI, sales, and running a business from a narrowboat on the English canals. Connect with him at steventann.com.