I keep noticing a pattern with the teams who are just starting to use AI. They treat it like a serious, dependable graduate with a first-class degree. They trust it. They give it the keys to the kingdom.
Then, about three days later, I get a message that says some version of, "Steven, the bot told a customer we offer free lifetime holidays in the Maldives. We don't do that."
Between us, these moments are my favourite part of the current tech boom. We are living through a period where the smartest tools on the planet still have the occasional tendency to confidently lie to your face. It's like working with a highly sophisticated pathological liar who happens to be great at spreadsheets.
If you’ve been feeling a bit intimidated by the tech, let me share some of the more ridiculous things I’ve seen lately. It might help you realise that we’re all just figuring this out as we go.
The Confidence of a Rogue Chatbot
The real magic of an AI hallucination isn't that it's wrong. It's how remarkably confident it is while being wrong.
I was working with a friend who wanted to use AI to help research potential partners for his consultancy. He asked the bot to find "reputable experts in sustainable urban drainage."
The bot didn't just find him a list. It found him a stellar candidate: a man named Arthur Penhaligon who apparently pioneered a specific type of porous concrete in the late 90s. The bot even provided a link to a white paper.
The problem? Arthur Penhaligon is the fictional protagonist of a fantasy book series for teenagers. He doesn't exist. He definitely doesn't know anything about concrete.
The bot didn't blink. It didn't say, "I think this might be a character." It just presented Arthur as the leading mind in civil engineering.
This happens because these models are built to predict the next word, not to check the truth. If you ask it for an expert, its job is to give you something that looks like an expert. Sometimes, that means making one up.
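To see how "plausible" and "true" come apart, here's a deliberately tiny next-word predictor. Real models are vastly more sophisticated, but the objective is the same shape: pick a statistically likely next word, with no notion of whether the result is a fact. The corpus and names here are made up for the illustration.

```python
import random
from collections import defaultdict

# A toy corpus: two true-ish sentences and one about fiction.
corpus = (
    "the expert pioneered porous concrete . "
    "the expert wrote a white paper . "
    "the protagonist pioneered a fantasy series ."
).split()

# Count which words follow which word.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=6):
    """Build text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The output always reads fluently, but it can splice the sentences
# together — e.g. "the protagonist pioneered porous concrete".
print(generate("the"))
```

Every word it produces is statistically reasonable given the one before it. None of that stops it crediting a fictional character with a civil engineering breakthrough.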
When AI Tries to be Helpful (And Fails)
Something weird happened on a client call last week. We were looking at a tool designed to summarise internal meetings. It’s a great bit of kit, usually.
Midway through our review of the transcript, we found a bullet point that said: "John agreed to bring the ceremonial goat to the Thursday workshop."
There were no goats at the meeting. There were no ceremonies. John is an accountant from Surrey.
When we looked at the audio, it turned out John had actually said, "I’ll bring the preliminary quotes."
The AI had decided that "ceremonial goat" was a much more interesting sentence. And honestly? It was right. The meeting was fairly dry until the livestock was mentioned.
The lesson here is that AI has a strange sense of "fill in the blanks." If the audio is slightly fuzzy, it won't just leave a gap. It will hallucinate a reality that is far more vibrant than your actual business life.
The Curse of the "As an AI Language Model"
We’ve all seen the LinkedIn posts that clearly weren't written by humans. You know the ones. They start with "In today's fast-paced digital landscape" and end with a list of fifteen hashtags that make no sense.
But my favourite misuse is when people try to get AI to be their "vibe check."
I know a guy who tried to use a chatbot to help him win an argument with his partner. He pasted their entire WhatsApp argument into the prompt and asked, "Who is being more reasonable here?"
The AI, being a polite piece of software, gave him a very balanced reply. It told him that both parties had valid points but suggested he might want to be more empathetic.
He didn't like that. So he spent two hours trying to "jailbreak" the bot to get it to tell him he was right. Eventually, the bot just started repeating, "As an AI language model, I cannot take sides in a dispute about who forgot to take the bins out."
If you find yourself arguing with a machine about your household chores, it might be time to take a walk.
How to Stop Your Bots from Making Things Up
It’s easy to laugh at the fictional engineers and the goats, but there is a serious side to this if you're using these tools in your business.
You can't just set them and forget them. You need to build in what I call "human guardrails." Here is a simpler way to think about managing the weirdness:
- Temperature checks: Many AI tools have a "temperature" setting. High temperature means more creativity (and more hallucinations). Low temperature means it sticks closer to the facts. If you're doing data work, keep it cold.
- The "Cite Your Sources" rule: If I’m using AI for research, I always tell it: "Do not tell me anything you cannot provide a functioning URL for." It won't stop every hallucination (bots can invent URLs too), but it gives you something concrete to check rather than its own imagination.
- Double-entry bookkeeping: Never let an AI-generated fact reach a client without a set of human eyes on it. Especially if that fact involves a quote, a date, or a name.
- Context is everything: The more background you give the bot, the less it has to guess. If you give it the "what," "how," and "why," it’s much less likely to invent a "who."
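If you're wiring these guardrails into your own tooling, the first two can be sketched in a few lines. This is a hedged sketch, not a real client: `build_request` stands in for however your chat API accepts a temperature and a system prompt, and the URL check is a deliberately crude trigger for the human-review step.

```python
import re

# The cite-your-sources rule, written out as a system instruction.
CITE_RULE = (
    "Do not state anything you cannot support with a functioning URL. "
    "Put each URL on its own line after the claim it supports."
)

def build_request(question: str) -> dict:
    """Wrap a question with the guardrails from the checklist above.

    This is a stand-in for whatever chat API you actually use; the
    temperature value and prompt wrapping are the point, not the client.
    """
    return {
        "temperature": 0.2,   # keep it cold for factual work
        "system": CITE_RULE,  # the cite-your-sources rule
        "user": question,
    }

URL_PATTERN = re.compile(r"https?://\S+")

def flag_unsourced(answer: str) -> bool:
    """Human guardrail: True means route this answer to a person."""
    return not URL_PATTERN.search(answer)
```

The flag doesn't verify the URLs work, which is exactly why the "double-entry bookkeeping" step above still needs a human at the end of it.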
For those looking to get more out of these tools without the accidental goats, I've shared more articles on AI that go into the practicalities.
The "Fake Cousin" Incident
Finally, let me share the story of a small business owner who tried to use AI to write her "About Us" page.
She gave it a few notes about her background. The AI decided her story was a bit boring, so it added a paragraph about her "beloved cousin, Mateo," who had inspired her to start the business after a life-changing trip to the Italian Alps.
She doesn't have a cousin named Mateo. She’s never been to the Alps.
She almost published it because the writing was so "professional." She told me, "I felt like I was gaslighting myself. I started wondering if I’d just forgotten Mateo."
This is the bit most people miss. We are so used to tech being "correct" (like a calculator) that when it behaves like an "artist" (like ChatGPT), our brains struggle to adjust. We assume the machine knows something we don't.
It doesn't. Sometimes it’s just having a bit of a moment.
Keeping it Real in an Automated World
The goal isn't to avoid AI. It's to use it while maintaining a healthy amount of scepticism.
I use these tools every day, but I treat them like a very talented, slightly erratic intern. I'm happy for them to do the heavy lifting, but I’m definitely checking the final report before it goes out.
If you’re feeling overwhelmed by it all, or you’re worried your chatbot is about to invent a fictional relative for you, book a consultation and we can chat about how to put some proper systems in place.
And if you’re reading this on a Thursday, you should probably get the free book which covers the more grounded, less hallucinatory side of business growth.
In the meantime, just remember: if your AI tells you that you’ve won an Oscar or that your company is now based on Mars, it might be worth double-checking.
Unless you actually are on Mars. In which case, I have a lot more questions.
Want more stories like this? I share observations about AI, business, and life on the narrowboat at steventann.com.
About the Author
Steven Tann is an AI consultant, author of "You're Selling AI Wrong", and founder of SalesM8. He writes about AI, sales, and running a business from a narrowboat on the English canals. Connect with him at steventann.com.