God Prompt
The Anti-Pattern
Cramming an entire complex task into a single monolithic prompt, expecting the LLM to handle everything at once.
Why It Happens
Developers try to avoid latency and architectural complexity by writing one mega-prompt: "Analyze the codebase, find the bug, fix it, write tests, update docs, and create a PR summary." The model's attention spreads thin across all these subtasks, individual pieces get dropped or degraded, and the output is unreliable. The God Prompt feels efficient but produces worse results than doing each step properly.
How to Fix It
Decompose into focused steps using prompt chaining. Each step does one thing well and passes its output to the next. Total quality improves dramatically even though you make more API calls, because each call is simpler and more reliable. The test is simple: if your prompt has more than 3 distinct instructions, it's probably a God Prompt. Split it.
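A minimal sketch of such a chain, assuming a `call_llm` function that wraps your provider's completion API (the function name and step prompts here are illustrative, not a specific SDK):

```python
from typing import Callable

def run_chain(steps: list[str], initial_input: str,
              call_llm: Callable[[str], str]) -> str:
    """Run each focused prompt in order, feeding each output forward."""
    data = initial_input
    for step_prompt in steps:
        # Each call carries ONE instruction plus the previous step's output.
        data = call_llm(f"{step_prompt}\n\nInput:\n{data}")
    return data

# Three focused steps instead of one mega-prompt doing six things.
STEPS = [
    "Analyze this code and describe the bug you find.",
    "Fix the bug described below and return the corrected code.",
    "Write tests and update the docstring for the code below.",
]
```

Because each step is a plain prompt-in, text-out function, you can test and tune every link of the chain in isolation.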
Diagram
God Prompt (unreliable):            Prompt Chain (reliable):

┌─────────────────────┐             ┌─────────┐    ┌─────────┐    ┌─────────┐
│ Analyze code        │             │ Analyze │───▶│   Fix   │───▶│  Test   │
│ Find bugs           │             │  code   │    │   bug   │    │  & doc  │
│ Fix them            │  ───▶ ?     └─────────┘    └─────────┘    └─────────┘
│ Write tests         │               Step 1         Step 2         Step 3
│ Update docs         │
│ Create PR summary   │             Each step: focused, verifiable, reliable
└─────────────────────┘
Symptoms
- Prompts contain 4+ distinct instructions or subtasks
- LLM consistently drops or poorly executes some subtasks
- Output quality varies wildly between runs of the same prompt
- Users keep adding "Don't forget to..." reminders to the prompt
False Positives
- Simple tasks that genuinely can be done well in a single call
- Tasks where the overhead of decomposition exceeds the quality benefit
- Quick one-off queries where reliability isnβt critical
Warning Signs & Consequences
Warning Signs
- Prompt length exceeding 500+ words of instructions
- Frequent re-runs hoping for a better output this time
- "Don't forget to..." patches accumulating in the prompt
- Inconsistent results: sometimes great, sometimes missing entire sections
Consequences
- Unreliable execution where subtasks are dropped or degraded
- Wasted tokens on retries when the output misses something
- User frustration from unpredictable quality
- False sense of efficiency: one call that needs 3 retries costs more than 3 focused calls
Remediation Steps
1. Identify the natural task boundaries in your mega-prompt
2. Create a separate focused prompt for each subtask
3. Define clear input/output formats so steps can pass data
4. Add validation gates between steps to catch errors early
5. Test each step independently before testing the full chain
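The steps above, and in particular the validation gates of step 4, can be sketched as follows. The gate functions shown are hypothetical cheap structural checks, not full correctness proofs, and `call_llm` again stands in for your provider's API:

```python
from typing import Callable

def run_gated_chain(steps: list[tuple[str, Callable[[str], bool]]],
                    initial_input: str,
                    call_llm: Callable[[str], str]) -> str:
    """Chain focused prompts; a gate check runs after every step."""
    data = initial_input
    for prompt, validate in steps:  # each step pairs a prompt with a check
        data = call_llm(f"{prompt}\n\nInput:\n{data}")
        if not validate(data):
            # Fail fast before a bad intermediate output poisons later steps.
            raise ValueError(f"Validation failed after step: {prompt!r}")
    return data

# Illustrative gates: non-empty analysis, and a fix that looks like code.
steps = [
    ("Describe the bug in this code.", lambda out: len(out) > 0),
    ("Return only the corrected code.", lambda out: "def " in out),
]
```

Failing between steps is far cheaper than discovering at the end that step 1 produced garbage everything downstream consumed.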
Real-World Example
Code Review Attention Spread
A code review prompt asks the LLM to check for bugs, security issues, performance problems, style violations, and documentation gaps all at once. It consistently catches style issues (easy, pattern-matching) but misses a critical SQL injection vulnerability (hard, requires reasoning). Splitting into separate security, performance, and style review prompts catches all issues because each prompt gets the model's full attention.
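The fix from this example can be sketched as one focused call per review aspect. The aspect names and prompt texts below are illustrative, and `call_llm` is a placeholder for your provider's API:

```python
from typing import Callable

# One narrow instruction per aspect, instead of one catch-all review prompt.
REVIEW_ASPECTS = {
    "security": "Review ONLY for security vulnerabilities (e.g. SQL injection).",
    "performance": "Review ONLY for performance problems.",
    "style": "Review ONLY for style violations.",
}

def review_code(code: str, call_llm: Callable[[str], str]) -> dict[str, str]:
    """Run a separate focused review call for each aspect."""
    return {aspect: call_llm(f"{instruction}\n\nCode:\n{code}")
            for aspect, instruction in REVIEW_ASPECTS.items()}
```

As a bonus, the per-aspect calls are independent, so they can run concurrently; the extra latency of decomposition largely disappears.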