Blind Delegation
The Anti-Pattern
A supervisor agent delegates to sub-agents without verifying their output, trusting results uncritically.
Why It Happens
The supervisor or orchestrator assumes workers always produce correct output. No validation, no sanity checks, no quality gates at delegation boundaries. One bad worker output cascades through the entire system. The supervisor degrades from a quality controller to a dumb router that adds cost without adding reliability.
How to Fix It
Implement output validation at every delegation boundary. Use evaluator-optimizer or consensus voting for critical subtasks. The supervisor's job isn't just to route; it's to verify. Every delegation should have a validation step that checks the worker's output before passing it downstream. The cheaper alternative: structured output schemas that make invalid results syntactically impossible.
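As a sketch of that cheaper alternative, here is a minimal boundary gate using plain Python dataclasses. All names (`WorkerResult`, `validate_result`, the field set) are illustrative, not from any particular framework:

```python
from dataclasses import dataclass

@dataclass
class WorkerResult:
    """Schema every worker must return; malformed output fails fast."""
    answer: str
    confidence: float
    sources: list

def validate_result(raw: dict) -> WorkerResult:
    """Validation gate at the delegation boundary.

    Raises ValueError instead of letting bad output flow downstream.
    """
    try:
        result = WorkerResult(**raw)
    except TypeError as exc:
        raise ValueError(f"malformed worker output: {exc}") from exc
    if not result.answer.strip():
        raise ValueError("empty answer")
    if not 0.0 <= result.confidence <= 1.0:
        raise ValueError(f"confidence out of range: {result.confidence}")
    if not result.sources:
        raise ValueError("no sources cited")
    return result
```

A supervisor would call `validate_result` on every worker reply and retry or escalate on `ValueError`, rather than forwarding the raw payload.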
Diagram
Blind (cascading failure):                 Verified (caught at boundary):

┌────────┐      ┌────────┐                 ┌────────┐      ┌────────┐
│ Super  │─────▶│Worker A│                 │ Super  │─────▶│Worker A│
└────────┘      └───┬────┘                 └────────┘      └───┬────┘
                    │ (bad output)                             │ (bad output)
                    ▼                                          ▼
                ┌────────┐                                 ┌────────┐
                │Worker B│ ← builds on bad data            │Validate│────▶ ✗ Reject
                └───┬────┘                                 └───┬────┘
                    │ (worse output)                           │ retry
                    ▼                                          ▼
                ┌────────┐                                 ┌────────┐
                │ Output │ ← garbage                       │Worker A│ (2nd attempt)
                └────────┘                                 └────────┘
Symptoms
- Downstream agents process garbage input from upstream without complaint
- No validation exists between agent handoff points
- Errors are only discovered at the final output, never at intermediate steps
- End-to-end quality is worse than a single agent doing everything
False Positives
- Low-stakes delegation where the cost of verification exceeds the cost of errors
- Well-tested deterministic sub-agents with known reliability characteristics
- Pipelines where each step has its own independent error handling
Warning Signs & Consequences
Warning Signs
- Final output quality worse than what a single agent produces
- Errors traceable to intermediate steps that were never validated
- No logging or monitoring at delegation boundaries
- Sub-agents receiving nonsensical inputs and dutifully processing them
Consequences
- Cascading errors that amplify through each delegation step
- Debugging nightmares: the root cause is 3 agents upstream from the symptom
- Lower overall quality than just using a single agent end-to-end
- Wasted compute processing and building on invalid intermediate results
Remediation Steps
1. Add output validation schemas at every delegation boundary
2. Implement retry logic when worker output fails validation
3. Use evaluator-optimizer pattern for high-stakes subtasks
4. Log all inputs and outputs at delegation boundaries for debugging
5. Set quality thresholds: reject and retry rather than pass through bad output
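Steps 1, 2, 4, and 5 combine into a single gate-and-retry loop at each boundary. A hedged sketch, where `run_worker` and `passes_quality` are placeholders for your own agent call and schema/threshold check:

```python
import logging

logger = logging.getLogger("delegation")

def delegate(run_worker, passes_quality, task, max_attempts=3):
    """Call a worker, validate its output, and retry on failure.

    run_worker(task) -> output      # your agent invocation (placeholder)
    passes_quality(output) -> bool  # your schema/threshold check (placeholder)
    """
    for attempt in range(1, max_attempts + 1):
        output = run_worker(task)
        # Step 4: log both sides of the boundary for later debugging.
        logger.info("attempt %d: task=%r output=%r", attempt, task, output)
        if passes_quality(output):  # Step 5: quality gate, never pass bad output
            return output
        logger.warning("attempt %d rejected by validation", attempt)  # Step 2: retry
    raise RuntimeError(f"worker failed validation after {max_attempts} attempts")
```

The key design choice is that the only exit paths are validated output or an explicit error; there is no code path where unvalidated output reaches the next agent.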
Real-World Example
Research Report Hallucination
A research agent delegates fact-gathering to three sub-agents, then synthesizes their findings into a report. One sub-agent hallucinates a statistic ("Market grew 340% in Q3"), which the supervisor passes through unchecked. The final report prominently features this fabricated number. A simple validation step (check that cited statistics have source URLs) would have caught it.
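The validation step described above can be very small. A sketch, assuming each finding is a dict with a hypothetical `{"claim": ..., "source_url": ...}` shape:

```python
def check_statistics(findings):
    """Return the claims whose statistic lacks a source URL.

    Each finding is assumed to look like
    {"claim": "...", "source_url": "..."} (hypothetical shape).
    """
    unsourced = []
    for finding in findings:
        url = finding.get("source_url", "")
        if not url.startswith(("http://", "https://")):
            unsourced.append(finding["claim"])
    return unsourced
```

Run against the example, the finding `{"claim": "Market grew 340% in Q3", "source_url": ""}` would be flagged before synthesis, and the supervisor could reject it or send it back for re-sourcing.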