Google AI Errors Cause Problems for Small Businesses
A Missouri restaurant owner is pleading with customers to ignore Google's AI Overview feature after the system repeatedly provided false information about daily specials and menu items. Eva Gannon, owner of Stefanina's in Wentzville, told local media that incorrect AI-generated promotions have led to angry confrontations with customers.

Fabricated Deals Cause Real Problems
The AI system allegedly invented entire menu items and promotions, including falsely advertising large pizzas at small-pizza prices. "As a small business, we can't fulfill the special offers from Google AI," Gannon explained. The restaurant has taken to Facebook to warn customers that Google's information is unreliable and to direct them to official sources.
This phenomenon, known in AI development as "hallucination," occurs when generative AI systems produce plausible but entirely false information. While the results are sometimes humorous (such as Google's infamous suggestion to add glue to pizza sauce), these errors can have serious consequences for businesses.
Growing Legal and Commercial Concerns
The Missouri restaurant isn't alone in its struggles with AI misinformation. In June 2025, a Minnesota solar company sued Google for defamation after its AI Overview falsely claimed the business faced lawsuits over fraudulent sales practices. Legal experts suggest such cases may become more common as AI tools proliferate.
Despite these issues, Google continues to promote its AI-first approach aggressively, recently announcing features that let users book restaurant reservations through the system. Technology analysts, however, caution against over-reliance on these tools for factual information or business decisions.
Key Points:
- Accuracy issues: Google's AI Overview frequently provides incorrect business information
- Business impact: Small businesses face customer anger over fabricated promotions
- Legal ramifications: Some companies are pursuing lawsuits over damaging misinformation
- Technical challenge: "Hallucinations" remain an unsolved problem in large language models
- User caution: Experts advise verifying critical information through official channels