Company apologizes after AI support agent invents policy that causes user uproar

AI confabulation incident sparks backlash for Cursor

A developer using Cursor, an AI-powered code editor, ran into an unexpected problem when switching between machines: logging in on one device logged them out everywhere else. When they contacted support, an AI agent named "Sam" claimed the behavior reflected a new policy. No such policy existed; the answer was a confabulation, a failure mode in which AI models generate plausible but entirely fabricated responses.

The incident quickly escalated as frustrated users took to Hacker News and Reddit, sharing their experiences and threatening to cancel subscriptions. The episode highlights a growing challenge for companies deploying AI in customer service: without proper safeguards, AI hallucinations can erode trust and carry tangible business consequences.

Why AI confabulations are risky

AI models, designed to provide confident answers, often "fill in the gaps" with invented information rather than admitting uncertainty. In Cursor's case, the AI's fabricated policy not only misled users but also triggered a wave of backlash. Experts warn that unchecked AI responses in customer-facing roles can damage brand reputation and drive users away.
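One common mitigation is to constrain a support bot so it can only assert policies it can match against verified documentation, escalating to a human otherwise. The Python sketch below illustrates the idea; the policy store and function names are hypothetical examples, not Cursor's actual system.

```python
# Hypothetical sketch of a grounding check for an AI support bot.
# POLICY_DOCS and answer_policy_question are illustrative names,
# not part of any real product's codebase.

POLICY_DOCS = {
    "sessions": "Subscriptions allow login from multiple machines; "
                "simultaneous sessions are supported.",
    "refunds": "Refunds are available within 14 days of purchase.",
}

def retrieve_policy(question: str) -> str | None:
    """Return canonical policy text matching the question, if any."""
    for topic, text in POLICY_DOCS.items():
        if topic in question.lower():
            return text
    return None

def answer_policy_question(question: str) -> str:
    """Answer only from verified policy text; otherwise escalate.

    The safeguard: the bot never generates policy claims on its own,
    so it cannot invent a policy the way an unconstrained model might.
    """
    policy = retrieve_policy(question)
    if policy is None:
        return ("I'm not certain about this policy, so I've escalated "
                "your question to a human support agent.")
    return f"Per our documented policy: {policy}"

if __name__ == "__main__":
    # Grounded question: matched against the policy store.
    print(answer_policy_question("Why was I logged out across sessions?"))
    # Ungrounded question: escalated instead of answered.
    print(answer_policy_question("Is there a new single-device rule?"))
```

The design choice here is deliberate refusal: an answer backed by nothing is treated as worse than no answer at all, which is the opposite of the always-confident behavior that caused the Cursor incident.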