
What is an AI Hallucination?


An AI hallucination occurs when a large language model (LLM) generates false information with complete confidence. The AI might cite non-existent studies, invent statistics, or claim things happened that never did, all while sounding authoritative and certain.

Hallucinations are not bugs or errors in the model. They're fundamental to how LLMs work. These models predict plausible-sounding text based on patterns, not actual knowledge. Sometimes "plausible-sounding" means completely fabricated.

Why AI hallucinations happen

LLMs don't have access to databases of facts. They generate text by predicting what should come next based on statistical patterns from training data. When asked about something, they generate what "sounds like" a correct answer, whether or not it's true.
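To see why, here's a toy sketch in Python (the tiny probability table below is invented for illustration). The model picks whatever continuation is statistically common; no step ever checks whether the result is true.

```python
import random

# Toy next-token table: continuation frequencies learned from text
# patterns. Nothing here encodes whether a continuation is true.
next_token_probs = {
    ("research", "shows"): {"that": 0.7, "a": 0.3},
    ("shows", "that"): {"exercise": 0.5, "caffeine": 0.3, "sleep": 0.2},
}

def sample_next(context):
    """Pick the next token weighted by pattern frequency, not truth."""
    probs = next_token_probs[context]
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# "research shows ..." continues fluently whether or not any such
# research actually exists.
print("research shows", sample_next(("research", "shows")))
```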

If the training data contains many scientific papers with citations, the model learns the pattern "research shows X (Author, Year)." It can then generate fake citations that follow this pattern perfectly while being completely invented.
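Here's an equally toy illustration of that citation pattern (the names and years below are invented for the example): filling in a learned template yields something that looks scholarly but points at no real study.

```python
import random

# Invented author names and years, for illustration only. The filled
# template looks like a legitimate citation yet references nothing real.
authors = ["Chen", "Okafor", "Lindqvist"]
years = [2016, 2019, 2021]

citation = f"({random.choice(authors)} et al., {random.choice(years)})"
print(f"Research shows reading improves focus {citation}")
```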

Common types include fake citations (studies that don't exist but sound legitimate), made-up statistics (specific numbers with no basis in reality), confident errors (wrong explanations delivered with certainty), and false connections (real facts combined incorrectly).

How to catch and prevent hallucinations

Hallucinations sound just as confident as accurate information. There's no flag indicating "this might be made up." You have to verify everything. Publishing hallucinated content damages your credibility and can mislead readers.

Verify every fact, statistic, and claim. Be especially skeptical of specific numbers, citations, historical dates, and technical specifications. If the AI cites a study, search for it; if you can't find it, assume it's hallucinated.
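Part of that check can be automated. Below is a minimal sketch that assumes the public Crossref REST API (api.crossref.org); an empty result doesn't prove a citation is fake, but it's a strong signal to dig deeper before trusting it.

```python
import requests

def find_study(citation_text: str) -> list:
    """Search Crossref for published works matching a citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

# Hypothetical citation pulled from an AI draft.
matches = find_study("Smith 2019 effects of caffeine on working memory")
if not matches:
    print("No match found - treat this citation as suspect.")
```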

Detailed prompts with accurate context reduce hallucinations but don't eliminate them. Asking the AI to cite sources doesn't help either: it will happily generate plausible-looking citations whether or not they exist. The only reliable approach is to treat all AI output as a first draft that requires fact-checking. Never publish AI-generated content without human-in-the-loop verification of its factual claims.
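To make that human-in-the-loop pass easier, a simple script can surface the sentences most worth checking first. The regex heuristic below is a rough assumption, not a standard: it flags anything containing years, percentages, or citation-like markers for manual review.

```python
import re

# Rough heuristic: sentences with specific years, percentages, or
# citation markers are the ones most likely to need verification.
RISKY = re.compile(r"\b\d{4}\b|\d+(\.\d+)?%|et al\.|\(\w+,?\s*\d{4}\)")

def flag_for_review(draft: str) -> list:
    """Return sentences a human should fact-check before publishing."""
    sentences = re.split(r"(?<=[.!?])\s+", draft)
    return [s for s in sentences if RISKY.search(s)]

draft = ("A 2021 survey found that 87% of writers now use AI tools. "
         "Writing a first draft is still hard work.")
for claim in flag_for_review(draft):
    print("VERIFY:", claim)
```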

Put this knowledge into practice

PostGenius helps you write SEO-optimized blog posts with AI — applying concepts like this automatically.

PostGenius goes live this month

Drop your email below, and we'll send you a heads-up when it's ready — no spam, just the news. You'll also get your first month free.