I remember the exact moment I realised I'd been doing this wrong.
I was sitting in a boardroom, watching a demo of a "revolutionary" customer support bot. The CEO was beaming. The developers were sweating. And the AI, bless its digital heart, was trying to convince a prospective customer that the company’s flagship software was actually a brand of artisanal sourdough bread.
It wasn’t a glitch. It was a masterpiece of creative fiction.
We’ve all been told that AI is going to replace us, out-think us, and probably take our jobs before lunch. But anybody who has actually spent time in the trenches of AI implementation knows the truth: AI is currently like a very eager, slightly drunk intern who has read the entire internet but has never actually been outside.
Common AI Hallucinations in Business Sales
The word "hallucination" is a bit too polite, isn't it? In human terms, we call it lying. In business terms, we call it a "creative interpretation of the brand guidelines."
I recently watched a sales bot handle a question about business execution speed. The client asked how fast the team could ship a new feature. The AI, sensing the need for urgency, promised that the feature would be ready "yesterday, provided the customer travelled at the speed of light."
It was technically correct, which is the most dangerous kind of AI correct. But it’s not exactly the kind of client-facing communication that builds trust.
When we talk about effective AI sales, we are usually trying to solve for efficiency. But if your bot is busy promising your clients time travel, your efficiency gains are going to be eaten up by your legal department’s therapy bills.
The reality is that AI doesn't know what it doesn’t know. It fills the gaps with whatever sounds most plausible based on a statistical probability of words. This is why you end up with:
- The Over-Promiser: Bots that offer 99% discounts because they want to please the user.
- The Philosopher: Bots that respond to a billing query with a 500-word essay on the nature of value.
- The Ghost: Bots that simply stop responding when they encounter a typo, as if they’ve gone on a digital strike.
The Struggle with Time Zone Alignment and Global AI
If you’ve ever managed a global team, you know that time zone alignment is the final boss of business operations. We thought AI would solve this. We thought it would be the bridge between London, New York, and Sydney.
Instead, I’ve seen AI tools invent entirely new time zones just to make the math work.
I had a client whose automated scheduling tool convinced three different stakeholders to meet at 4:30 AM on a Sunday because it had "optimised for maximum silence." It wasn't wrong—the office was definitely quiet—but the website feedback from the disgruntled participants was less than glowing.
The logic was sound; the human application was non-existent. This is the gap where most businesses fall over. They buy the tool, they plug it in, and they assume the "intelligence" part of Artificial Intelligence includes common sense. It doesn't.
Common sense is a premium feature that hasn't been coded yet.
Why Your Beta Testing Strategy Needs More Humans
I’ve been watching how people pitch AI lately. It’s like watching someone explain the internet to their nan. There’s a lot of nodding, a lot of "it's magic," and a complete lack of understanding about what happens when it breaks.
This is why a robust beta testing strategy is non-negotiable. You cannot unleash an LLM on your customers without first putting it through a gauntlet of human stupidity. You need to find the person in your office who asks the most annoying questions and tell them to break the bot.
If your bot can survive a frustrated customer at 5:00 PM on a Friday, it’s ready. Until then, you’re just gambling with your reputation.
We often see businesses prioritising business execution speed over accuracy. They want the bot live today. But a bot that hallucinates your pricing is worse than no bot at all. If you need a hand figuring out how to balance speed with sanity, you can always book a consultation to discuss the finer points of not letting your software lie to people.
Unexpected AI Behaviour in Professional Services
There is a certain brand of "quiet confidence" that AI has when it is completely and utterly wrong. It’s a trait it has clearly picked up from us.
I saw a recruiter use AI to screen CVs. One candidate was rejected because the AI decided that having a "gap year in France" meant they were likely to "surrender under pressure." It’s funny until you realise that’s someone’s career being decided by a stereotype buried in a training set from 2014.
We also see this in creative services. Ask an AI to design a logo for a "fast-growing startup" and there is a 90% chance it will include a rocket ship. Ask it to write a "disruptive" LinkedIn post and it will use the words "landscape" and "leverage" until your eyes bleed.
The absurdity comes from the fact that we are teaching machines to be us, but they are only learning the bits of us that we put in brochures. They are learning our jargon, our clichés, and our tendency to use ten words when two would do.
Specifically, if you're looking for tools that actually work without the fluff, you might want to see what we're doing with SalesM8 to keep the process human-centric.
How to Avoid the "AI Facepalm" Moment
If you want to use AI without becoming a cautionary tale at a networking event, you need to follow a few simple, often ignored, rules:
- Kill the Autonomy: Never let a bot make a final decision on pricing or legal commitments without a human "kill switch" or review process.
- Verify the Data: If an AI gives you a fact, assume it’s a lie until you see a source. Even then, double-check the source.
- Monitor the Feedback: Your website feedback is the canary in the coal mine. If customers start complaining that your support agent sounds "a bit too obsessed with sourdough," take it offline.
- Embrace the Weird: Sometimes the AI will do something weird. If it isn't harmful, laugh at it. Then fix the prompt.
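The "kill switch" in the first rule above can be sketched in a few lines. This is a minimal, illustrative example only: the function names and the list of risky phrases are my own assumptions, not part of any real framework, and a production system would use something far more robust than keyword matching. The idea is simply that a drafted reply mentioning pricing or legal commitments never goes out without a human signing off.

```python
import re

# Phrases the bot must never commit to on its own (illustrative, not exhaustive).
RISKY_PATTERNS = [
    r"\b\d{2,3}%\s*(?:off|discount)\b",   # e.g. "99% discount" (the Over-Promiser)
    r"\brefund\b",
    r"\bguarantee[ds]?\b",
    r"\bcontract\b",
]

def needs_human_review(reply: str) -> bool:
    """Return True if the drafted reply makes a pricing or legal commitment."""
    lowered = reply.lower()
    return any(re.search(pattern, lowered) for pattern in RISKY_PATTERNS)

def dispatch(reply: str) -> str:
    """Send safe replies straight out; park risky ones for a human to approve."""
    if needs_human_review(reply):
        return "QUEUED_FOR_HUMAN"   # the kill switch: a person signs off first
    return "SENT"
```

Even a crude gate like this would have caught the 99% discount from earlier. The point is not the pattern list; it's that the final decision on money and contracts stays with a person.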
We are currently in the "teething" phase of business automation. It's messy, it's slightly ridiculous, and there's a lot of dribble. But beneath the hallucinations and the weird time zone logic, there is genuine value. You just have to be the adult in the room.
If you’re worried about your own implementation, or you just want to make sure your bots aren't secretly plotting to turn your office into a bakery, you can get the free book for some slightly more serious thoughts on the matter.
Final Thoughts: The Human Element
At the end of the day, AI is a mirror. If the results are absurd, it’s usually because our own business processes are a bit absurd to begin with. We give it messy data, vague instructions, and high expectations, then act surprised when it comes back with something nonsensical.
The trick isn't to find the "perfect" AI. It’s to build the perfect guardrails.
Business is, and always will be, a series of human interactions supported by tools. If you lose the human element, you don't have a business; you just have an expensive calculator that likes to tell stories.
Stay curious, stay sceptical, and for the love of all that is holy, don't let your AI handle the Christmas party invitations. You'll end up in a different time zone, eating sourdough bread, wondering where it all went wrong.
Want more stories like this? I share observations about AI, business, and life on the narrowboat at steventann.com.
About the Author
Steven Tann is an AI consultant, author of "You're Selling AI Wrong", and founder of SalesM8. He writes about AI, sales, and running a business from a narrowboat on the English canals. Connect with him at steventann.com.