Hallucination
Also: confabulation, AI hallucination
Hallucination occurs when an AI model confidently states something that isn't true. It might invent a citation, get a date wrong, describe a product feature that doesn't exist, or fill in gaps in its knowledge with plausible-sounding guesses. It happens because language models are trained to produce fluent text, not to verify facts before answering. Knowing when to trust Claude's output and when to ground it in real sources is one of the most important skills for operators.
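One practical grounding pattern is to put the source material directly in the prompt and instruct the model to answer only from it, admitting when the source doesn't contain the answer. Below is a minimal sketch using the Anthropic Python SDK; the model name, source text, and question are placeholders, not a prescribed setup.

```python
# Sketch: grounding an answer in provided source text instead of the model's memory.
# Assumes the `anthropic` Python SDK is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

# Placeholder source material and question for illustration.
source_text = """Acme Invoicing v2.4 release notes:
- Added CSV export for paid invoices.
- Removed the legacy PDF batch printer."""

question = "Does Acme Invoicing support PDF batch printing?"

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=300,
    system=(
        "Answer only from the source text provided. "
        "If the source does not contain the answer, say you don't know."
    ),
    messages=[
        {
            "role": "user",
            "content": f"<source>\n{source_text}\n</source>\n\n{question}",
        }
    ],
)

print(response.content[0].text)
```

The point of the system instruction is to narrow the model's job from "produce a plausible answer" to "report what the source says," which is where grounding gets most of its benefit.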
Articles
Why AI confidently gets things wrong, and how to design around it
Hallucination isn't a bug that gets patched. It's a structural feature of how language models work. Understanding why it happens is the first step to building applications that aren't derailed by it.
How to work with Claude when accuracy matters
Hallucination isn't a reason to avoid Claude for high-stakes work. It's a constraint to design around. Teams that get this right build AI into their most important workflows; teams that don't get it right limit AI to the low-stakes ones.
The hallucination patterns that catch operators off guard
Everyone knows AI can make things up. What surprises people is which specific situations trigger it, and how confident Claude sounds when it happens.
When not to use Claude
Claude is genuinely powerful. It is also genuinely wrong for certain kinds of work. Knowing the difference is as important as knowing what it does well.