Iterative Refinement
Intent
Execute a plan, observe results, and refine the plan based on what you learned — repeating until the goal is met.
Problem
Plans made with incomplete information are often wrong. The world is unpredictable — APIs fail, data is messier than expected, and assumptions break. A rigid plan-then-execute approach can't adapt to reality.
Solution
Combine planning and execution in a loop. The agent creates an initial plan, executes one or more steps, observes the results, and revises the plan accordingly. Each cycle incorporates new information, making the plan more grounded and realistic. This is the OODA loop (Observe-Orient-Decide-Act) applied to AI agents. It balances the structure of planning with the adaptability of reactive systems.
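The loop can be sketched in plain Python, with caller-supplied callables standing in for the model calls; all names here are illustrative, and only the control flow is the point:

```python
def refine_until_done(goal, make_plan, execute_step, revise_plan, is_done,
                      max_cycles=10):
    """Generic plan-execute-observe-revise loop (OODA applied to an agent).

    make_plan, execute_step, revise_plan, and is_done are supplied by the
    caller; this sketch only fixes the shape of the loop.
    """
    plan = make_plan(goal)                # Orient: initial plan from the goal
    for cycle in range(max_cycles):
        observation = execute_step(plan)  # Act: run the next step
        if is_done(goal, observation):    # Decide: has the goal been met?
            return {"done": True, "cycles": cycle + 1, "plan": plan}
        plan = revise_plan(plan, observation)  # Observe: fold results back in
    return {"done": False, "cycles": max_cycles, "plan": plan}
```

The max_cycles bound matters: without it, a goal the agent cannot reach turns the loop into an unbounded spend of time and tokens.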
Diagram
Goal → [Initial Plan]
↓
[Execute next step]
↓
[Observe results]
↓
[Revise plan based on results]
↓
Goal met? ── No → [Execute next step]
│
Yes → Done
When to Use
- Uncertain environments where initial plans are likely wrong
- Long-running tasks that span multiple phases
- Research tasks where early findings change the direction
- When you can't predict all requirements upfront
When NOT to Use
- Tasks with perfectly predictable structure
- When the cost of re-planning exceeds the benefit
- Short tasks where a single plan suffices
Pros & Cons
Pros
- Adapts to unexpected results and changing requirements
- Each iteration is more informed than the last
- Combines planning discipline with execution flexibility
- Catches and corrects errors early
Cons
- More complex control flow than one-shot planning
- Re-planning overhead on every iteration
- Risk of scope creep as the plan keeps evolving
- Hard to predict total time/cost upfront
Implementation Steps
1. Create an initial plan from the goal and available context
2. Execute the first step of the plan
3. Evaluate: did the step succeed? What did we learn?
4. Update the plan with new information
5. Repeat until the goal is met or a maximum iteration count is reached
6. Log each plan revision for transparency and debugging
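The last step can be as simple as appending one structured record per revision; a minimal sketch, with field names chosen here for illustration:

```python
import json
import time

revision_log = []

def log_revision(iteration, old_plan, result, new_plan):
    """Append one structured record per plan revision for later debugging."""
    entry = {
        "iteration": iteration,
        "timestamp": time.time(),
        "old_plan": old_plan,
        "result_summary": result[:200],  # truncate long step outputs
        "new_plan": new_plan,
    }
    revision_log.append(entry)
    return entry

entry = log_revision(1, "parse JSON", "API returned XML, not JSON",
                     "add XML parsing")
print(json.dumps(entry, indent=2))
```

Dumping the log as JSON keeps the full revision trail inspectable after a run, which is where debugging a misbehaving loop usually starts.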
Real-World Example
Data Pipeline Development
Goal: build a data pipeline from a new API. The initial plan assumes JSON responses. After step 1 (exploring the API), the agent discovers it returns XML, so the plan is revised to add XML parsing. After step 2 (parsing the data), it discovers inconsistent date formats, so the plan adds a normalization step. Each discovery refines the approach.
from openai import OpenAI

client = OpenAI()

def iterative_refinement(goal: str, max_iterations: int = 3) -> dict:
    plan = f"Initial approach for: {goal}"
    history = []
    for i in range(max_iterations):
        # Act: execute the current plan
        result = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": f"Execute this plan:\n{plan}"}],
        ).choices[0].message.content
        history.append({"plan": plan, "result": result})
        # Observe and revise: fold the result back into the plan
        revision = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": (
                f"Goal: {goal}\n\nLast result:\n{result}\n\n"
                "Revise the plan to improve. If goal is met, say DONE."
            )}],
        ).choices[0].message.content
        if "DONE" in revision:
            return {"success": True, "iterations": i + 1, "result": result}
        plan = revision
    return {"success": False, "iterations": max_iterations, "history": history}