Hallucination Cascade
The Anti-Pattern
An agent hallucinates a fact or tool output, then reasons over the hallucination as if it were true, compounding the error through every subsequent step.
Why It Happens
A single hallucinated fact enters the reasoning chain and becomes "context" for all future steps. Each step builds on the false foundation, moving further from reality. The agent is confidently wrong, and the error is invisible without external grounding. The cascade is especially dangerous in multi-step workflows where each step's output becomes the next step's input: one hallucination at step 2 corrupts steps 3 through N.
How to Fix It
Ground every reasoning step in verified tool outputs using the ReAct pattern. Add fact-checking gates between steps that verify key claims against source documents. Use RAG to cross-reference generated claims against a known-good corpus. Never let an LLM output become trusted input for the next step without independent verification. The architectural principle: treat LLM outputs as hypotheses, not facts. Every claim should be verifiable, and critical claims should be verified.
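A fact-checking gate between steps can be sketched as follows. This is a minimal illustration, not a standard API: the `claim_is_grounded` and `gate` helpers and the token-overlap threshold are assumptions standing in for a real retrieval-plus-entailment check.

```python
# Minimal fact-checking gate: every claim emitted by one step must be
# grounded in the source documents before the next step may consume it.
# The token-overlap heuristic is illustrative; a production system would
# use retrieval plus an entailment model.

def tokenize(text: str) -> set[str]:
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(".,;:()\"'").lower() for w in text.split()}

def claim_is_grounded(claim: str, sources: list[str], threshold: float = 0.6) -> bool:
    """True if enough of the claim's tokens appear in at least one source."""
    claim_tokens = tokenize(claim)
    if not claim_tokens:
        return True
    for src in sources:
        overlap = len(claim_tokens & tokenize(src)) / len(claim_tokens)
        if overlap >= threshold:
            return True
    return False

def gate(step_output: list[str], sources: list[str]) -> list[str]:
    """Pass only grounded claims to the next step; block the rest loudly."""
    ungrounded = [c for c in step_output if not claim_is_grounded(c, sources)]
    if ungrounded:
        raise ValueError(f"ungrounded claims blocked: {ungrounded}")
    return step_output
```

The key design choice is that the gate fails loudly: an ungrounded claim stops the chain instead of silently becoming the next step's context.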
Diagram
 Step 1       Step 2       Step 3       Output
┌───────┐    ┌───────┐    ┌───────┐    ┌──────────┐
│ LLM   │───▶│ LLM   │───▶│ LLM   │───▶│ Report   │
│ fact  │    │ hallu-│    │ builds│    │ confid-  │
│       │    │ cinate│    │ on lie│    │ ently    │
└───────┘    └───────┘    └───────┘    │ wrong    │
                                       └──────────┘
Correct ──▶ Hallucination ──▶ Compounded ──▶ Cascaded failure
Symptoms
- Agent makes confident claims that are demonstrably wrong
- Errors grow more severe through multi-step reasoning chains
- Agent outputs contradict the source data it was given
- Cited sources don't exist or don't say what the agent claims
False Positives
- Creative generation tasks where "hallucination" is a feature, not a bug
- Brainstorming where strict accuracy is less important than idea generation
- Single-step tasks where there's no chain to cascade through
Warning Signs & Consequences
Warning Signs
- Agent citing sources that don't exist or can't be verified
- Logical reasoning chains built on premises that have no factual basis
- Confidence increasing even as accuracy decreases through the chain
- Output that "sounds right" but contradicts easily verifiable facts
Consequences
- Compounding errors that are extremely hard to trace back to the source
- User trust destroyed when confident answers turn out to be fabricated
- Legal and compliance risks from presenting hallucinations as facts
- Debugging requires checking every step in the chain to find where truth diverged
Remediation Steps
1. Implement the ReAct pattern: every factual claim must be grounded in a tool call
2. Add fact-checking gates between reasoning steps for critical claims
3. Use RAG to cross-reference generated claims against verified sources
4. Implement confidence calibration: flag low-confidence claims explicitly
5. Never use an LLM output as trusted input without independent verification
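Step 4, confidence calibration, can be sketched as carrying a confidence score with every claim and forcing anything low-confidence or unverified to be surfaced explicitly. The `Claim` dataclass and the 0.8 threshold are illustrative assumptions, not a standard interface.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    confidence: float  # 0.0-1.0, e.g. estimated via self-consistency sampling
    verified: bool = False  # set True only after an independent check

def render(claims: list[Claim], flag_below: float = 0.8) -> str:
    """Render claims, explicitly flagging anything low-confidence or unverified."""
    lines = []
    for c in claims:
        if c.verified and c.confidence >= flag_below:
            lines.append(c.text)
        else:
            lines.append(f"[UNVERIFIED, confidence={c.confidence:.2f}] {c.text}")
    return "\n".join(lines)
```

The point is architectural: a claim defaults to unverified, so trust must be earned per claim rather than assumed for the whole output.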
Real-World Example
Legal Research Gone Wrong
A legal research agent hallucinates a case citation, "Smith v. Johnson (2019)", which doesn't exist. It then builds an entire legal argument around this non-existent precedent, analyzing its "implications," comparing it to real cases, and presenting the user with a confident 10-page brief. The brief would be immediately thrown out of court, and the lawyer who submitted it would face sanctions.
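A guard for exactly this failure mode is to verify every citation against an authoritative index before it can appear in the brief. A minimal sketch, where the in-memory `KNOWN_CASES` set is a stand-in assumption for a real case-law database lookup:

```python
# Reject any citation that cannot be found in an authoritative index.
# KNOWN_CASES stands in for a query against a real case-law database.
KNOWN_CASES = {
    "Marbury v. Madison (1803)",
    "Brown v. Board of Education (1954)",
}

def unverified_citations(citations: list[str]) -> list[str]:
    """Return the citations that could NOT be verified against the index."""
    return [c for c in citations if c not in KNOWN_CASES]
```

Run before drafting, this check would have stopped the cascade at step 1: "Smith v. Johnson (2019)" fails the lookup, so no argument is ever built on it.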