
Orchestrator-Workers

Advanced · Workflow Patterns · Anthropic

Intent

A central LLM dynamically breaks down a task, delegates subtasks to worker LLMs, and synthesizes their results.

Problem

Some tasks can't be decomposed into a fixed set of subtasks at design time. The number and nature of the subtasks depend on the specific input. A coding task might require editing 2 files or 20, and you can't know which until you analyze the request.

Solution

An orchestrator LLM receives the task, analyzes it, and dynamically creates a plan of subtasks. It delegates each subtask to a worker LLM (potentially with different specializations), monitors progress, and synthesizes the final result. Unlike Parallelization, where subtasks are predefined, the orchestrator determines what work needs to be done on the fly. This is the most flexible workflow pattern, but also the most complex.

Diagram

Input → [Orchestrator LLM]
              │ Analyze & plan
              ├→ [Worker 1: Edit file A] ──┐
              ├→ [Worker 2: Edit file B] ──┼→ [Orchestrator] → Synthesize → Output
              └→ [Worker 3: Write tests] ──┘
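For the diagram above, the orchestrator's plan can be represented as a list of subtask records that a simple dispatcher routes to worker callables. The schema (`worker`, `action`, `target`) is illustrative, not a fixed API:

```python
# Illustrative plan an orchestrator might emit for the diagram above.
# The dict schema here is an assumption for the sketch, not a standard.
plan = [
    {"worker": 1, "action": "edit", "target": "file A"},
    {"worker": 2, "action": "edit", "target": "file B"},
    {"worker": 3, "action": "write_tests", "target": "file A"},
]

def dispatch(plan: list[dict], workers: dict) -> list:
    # Route each subtask to the callable registered for its action type
    return [workers[step["action"]](step) for step in plan]
```

In a real system each worker callable would wrap an LLM call; here the mapping from plan entries to workers is the point.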

When to Use

  • Complex tasks where subtasks can't be predicted in advance
  • Coding agents that need to modify multiple files
  • Research tasks gathering information from multiple sources
  • Any task where the work breakdown depends on the specific input

When NOT to Use

  • Tasks with a predictable, fixed structure (use Prompt Chaining)
  • Simple tasks that don't need decomposition
  • When orchestrator overhead isn't justified by task complexity

Pros & Cons

Pros

  • Highly flexible — adapts to any input
  • Workers can be specialized and run in parallel
  • Orchestrator maintains big-picture awareness
  • Can handle arbitrary complexity

Cons

  • Orchestrator is a potential bottleneck and point of failure
  • Higher cost and latency than fixed workflows
  • Complex to debug — need to trace orchestrator decisions
  • Orchestrator quality directly limits overall quality
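Because the orchestrator's subtasks are independent of one another, workers can be fanned out concurrently. A minimal sketch using a thread pool (LLM API calls are I/O-bound, so threads suffice):

```python
from concurrent.futures import ThreadPoolExecutor

def run_workers_parallel(subtasks: list, worker, max_workers: int = 4) -> list:
    """Fan independent subtasks out to a pool of worker threads.

    `worker` is any callable taking one subtask; in a real system it
    would wrap an LLM API call. `map` preserves the orchestrator's
    subtask order in the returned results.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, subtasks))
```

Ordered results make the later synthesis step simpler, since each output can be matched back to its subtask by position.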

Implementation Steps

  1. Design the orchestrator prompt to analyze tasks and produce structured plans
  2. Create specialized worker prompts for different types of subtasks
  3. Implement the delegation mechanism — how orchestrator assigns work to workers
  4. Build the synthesis step — how results are combined into a final output
  5. Add error handling for worker failures and retries
  6. Implement monitoring so you can see the orchestrator's reasoning and decisions
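Step 5 (error handling and retries) can be isolated in a small wrapper around any worker call. A sketch with exponential backoff; the exceptions to catch would be API-specific in practice:

```python
import time

def run_with_retry(worker, subtask: dict, max_retries: int = 3, delay: float = 1.0):
    """Call a worker on a subtask, retrying on failure with backoff.

    `worker` is any callable taking a subtask dict; in a real system it
    would wrap an LLM API call and catch that API's error types rather
    than bare Exception.
    """
    last_error = None
    for attempt in range(max_retries):
        try:
            return worker(subtask)
        except Exception as exc:
            last_error = exc
            time.sleep(delay * (2 ** attempt))  # exponential backoff
    raise RuntimeError(
        f"Subtask {subtask} failed after {max_retries} attempts"
    ) from last_error
```

Surfacing the failed subtask in the final error also helps with step 6: the orchestrator's log shows exactly which piece of the plan broke.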

Real-World Example

Multi-file Coding Agent

User requests a feature: 'Add user authentication.' The orchestrator analyzes the codebase, then delegates: Worker 1 creates the auth middleware, Worker 2 updates the database schema, Worker 3 modifies the API routes, Worker 4 writes tests. The orchestrator reviews all changes for consistency before committing.

Python: Coding Agent with Dynamic Task Delegation
import anthropic
import json

client = anthropic.Anthropic()

MODEL = "claude-sonnet-4-20250514"

def orchestrate(task: str, files: list[str]) -> list[dict]:
    # Orchestrator analyzes the task and produces a structured work plan
    plan = client.messages.create(
        model=MODEL, max_tokens=1024,
        messages=[{"role": "user", "content":
            f"Task: {task}\nFiles: {files}\n\n"
            'Return ONLY a JSON array: [{"file": str, "action": str}]'}]
    )
    # Models sometimes wrap JSON in markdown fences; strip them before parsing
    raw = plan.content[0].text.strip()
    if raw.startswith("```"):
        raw = raw.strip("`").removeprefix("json")
    subtasks = json.loads(raw)

    # Workers execute each subtask independently
    results = []
    for subtask in subtasks:
        result = client.messages.create(
            model=MODEL, max_tokens=2048,
            messages=[{"role": "user", "content":
                f"Edit {subtask['file']}: {subtask['action']}"}]
        )
        results.append({"file": subtask["file"], "changes": result.content[0].text})

    return results
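The snippet above stops at collecting worker results. Per step 4, a final synthesis call lets the orchestrator review all changes for consistency. This sketch assumes the `client` and `MODEL` names from the snippet above; `format_results` is a hypothetical helper introduced here:

```python
def format_results(results: list[dict]) -> str:
    # Flatten worker outputs into one document the orchestrator can review
    return "\n\n".join(
        f"File: {r['file']}\nChanges:\n{r['changes']}" for r in results
    )

def synthesize(task: str, results: list[dict]) -> str:
    # `client` and `MODEL` come from the orchestrate() snippet above
    response = client.messages.create(
        model=MODEL, max_tokens=2048,
        messages=[{"role": "user", "content":
            f"Task: {task}\n\nWorker results:\n{format_results(results)}\n\n"
            "Review these changes for consistency across files, flag any "
            "conflicts, and produce a final summary of the change set."}]
    )
    return response.content[0].text
```

A typical call site would be `synthesize(task, orchestrate(task, files))`, closing the loop shown in the diagram.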
