
Hallucination Cascade

🚫 Anti-Pattern: this describes a common mistake to avoid, not a pattern to follow.

The Anti-Pattern

The agent hallucinates a fact or tool output, then reasons over the hallucination as if it were true, compounding the error through every subsequent step.

Why It Happens

A single hallucinated fact enters the reasoning chain and becomes 'context' for all future steps. Each step builds on the false foundation, moving further from reality. The agent is confidently wrong, and the error is invisible without external grounding. The cascade is especially dangerous in multi-step workflows where each step's output becomes the next step's input: one hallucination at step 2 corrupts steps 3 through N.
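The mechanism can be sketched in a few lines: with no verification gate, each step's output is passed verbatim as the next step's context. `fake_llm` below is a hypothetical stand-in for a real model call, not any particular API.

```python
# Sketch of a cascade: one fabricated fact at step 2 becomes trusted
# context for every later step. All names here are illustrative.

def fake_llm(step: int, context: str) -> str:
    # Step 2 fabricates a fact; later steps faithfully build on it.
    if step == 2:
        return context + " [FABRICATED: Smith v. Johnson (2019)]"
    return context + f" [step {step} reasoning]"

def run_pipeline(steps: int) -> str:
    context = "verified fact from source document"
    for step in range(1, steps + 1):
        # No verification gate: each output is trusted as-is.
        context = fake_llm(step, context)
    return context

report = run_pipeline(4)
# The fabricated citation survives to the final report unchallenged.
assert "FABRICATED" in report
```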

How to Fix It

Ground every reasoning step in verified tool outputs using the ReAct pattern. Add fact-checking gates between steps that verify key claims against source documents. Use RAG to cross-reference generated claims against a known-good corpus. Never let an LLM output become trusted input for the next step without independent verification. The architectural principle: treat LLM outputs as hypotheses, not facts. Every claim should be verifiable, and critical claims should be verified.
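A minimal sketch of such a fact-checking gate, assuming a simple set of known-good statements. A production system would use retrieval (RAG) and semantic matching rather than exact-match lookup; the names here are illustrative.

```python
# Fact-checking gate: treat every claim as a hypothesis and pass
# only what the known-good corpus supports. `corpus` is a stand-in
# for a real retrieval backend.

class UnverifiedClaimError(Exception):
    """Raised when a claim cannot be grounded in the corpus."""

def verify_claims(claims: list[str], corpus: set[str]) -> list[str]:
    rejected = [claim for claim in claims if claim not in corpus]
    if rejected:
        # Block the pipeline instead of silently propagating the claim.
        raise UnverifiedClaimError(f"unsupported claims: {rejected}")
    return claims

corpus = {"Revenue grew 12% in Q3", "The contract expires in 2026"}
verify_claims(["Revenue grew 12% in Q3"], corpus)  # passes the gate
```

The key design choice is failing loudly: a rejected claim stops the chain at the step where it appeared, rather than surfacing N steps later in the final report.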

Diagram

  Step 1        Step 2           Step 3           Output
  ┌─────┐      ┌──────┐        ┌──────┐        ┌────────┐
  │ LLM │─────▶│ LLM  │───────▶│ LLM  │───────▶│ Report │
  │     │      │      │        │      │        │        │
  │ ✓   │      │ ✗    │        │ ✗✗   │        │ ✗✗✗    │
  │fact │      │hallu-│        │builds│        │confid- │
  │     │      │cinate│        │on lie│        │ently   │
  └─────┘      └──────┘        └──────┘        │wrong   │
                                               └────────┘
  Correct ──▶ Hallucination ──▶ Compounded ──▶ Cascaded failure

Symptoms

  • Agent makes confident claims that are demonstrably wrong
  • Errors grow more severe through multi-step reasoning chains
  • Agent outputs contradict the source data it was given
  • Cited sources don’t exist or don’t say what the agent claims

False Positives

  • Creative generation tasks where 'hallucination' is a feature, not a bug
  • Brainstorming where strict accuracy is less important than idea generation
  • Single-step tasks where there’s no chain to cascade through

Warning Signs & Consequences

Warning Signs

  • Agent citing sources that don’t exist or can’t be verified
  • Logical reasoning chains built on premises that have no factual basis
  • Confidence increasing even as accuracy decreases through the chain
  • Output that 'sounds right' but contradicts easily verifiable facts
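The first warning sign lends itself to an automated check: extract citations from the output and look each one up in an index of known sources. The regex and the `KNOWN_CASES` set below are illustrative simplifications of a real citation database.

```python
# Flag citations that cannot be found in a known-source index.
import re

KNOWN_CASES = {"Brown v. Board of Education (1954)"}

def unverifiable_citations(text: str) -> list[str]:
    # Naive pattern for "Name v. Name (Year)" style case citations.
    cited = re.findall(r"[A-Z]\w+ v\. [A-Z]\w+ \(\d{4}\)", text)
    return [c for c in cited if c not in KNOWN_CASES]

brief = "Per Smith v. Johnson (2019), the precedent is clear."
assert unverifiable_citations(brief) == ["Smith v. Johnson (2019)"]
```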

Consequences

  • Compounding errors that are extremely hard to trace back to the source
  • User trust destroyed when confident answers turn out to be fabricated
  • Legal and compliance risks from presenting hallucinations as facts
  • Debugging requires checking every step in the chain to find where truth diverged

Remediation Steps

  1. Implement the ReAct pattern: every factual claim must be grounded in a tool call
  2. Add fact-checking gates between reasoning steps for critical claims
  3. Use RAG to cross-reference generated claims against verified sources
  4. Implement confidence calibration: flag low-confidence claims explicitly
  5. Never use an LLM output as trusted input without independent verification

Real-World Example

Legal Research Gone Wrong

A legal research agent hallucinates a case citation, 'Smith v. Johnson (2019)', which doesn't exist. It then builds an entire legal argument around this non-existent precedent, analyzing its 'implications,' comparing it to real cases, and presenting the user with a confident 10-page brief. The brief would be immediately thrown out of court, and the lawyer who submitted it would face sanctions.
