Planning Long-Horizon Tasks with AI Agents - Todo-MCP Blog
Methodology | 10 min read | January 2025

Planning Long-Horizon Tasks with AI Agents

How to plan multi-day projects with Claude Code and Cursor. The question exhaustion method, planning-execution boundary, and conversational techniques that work.


A Note to the Reader

This guide distills patterns that emerged from hundreds of planning sessions working with Claude as an AI development partner. Your workflow may differ, and you should adapt these practices to your context.

Consider this a starting point - a collection of what worked well in practice, offered to help you develop your own effective human-AI collaboration process. The specific techniques matter less than the underlying philosophy: thorough planning enables smooth execution.

The Core Principle: Iterate Until No Questions Remain

This is the single most important technique in AI-assisted planning.

When you and your AI collaborate on a plan, the goal is to iterate back and forth until neither party has any remaining questions about any aspect of the plan.

How It Works

Human: "I want to add feature X"
   ↓
AI: Creates initial plan, asks clarifying questions
   ↓
Human: Answers questions, asks their own questions
   ↓
AI: Refines plan, addresses concerns, asks more questions
   ↓
Human: Reviews, satisfied OR asks more questions
   ↓
... repeat until ...
   ↓
BOTH: "I have no more questions. The plan is clear."
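The loop above can be sketched in code. This is a minimal illustration, not a prescribed implementation: `ask_ai` and `ask_human` are hypothetical stand-ins for the real back-and-forth, each returning whatever open questions that party still has.

```python
# Sketch of the question-exhaustion loop: iterate until neither party
# has open questions, or give up after a round limit.
def refine_until_clear(plan, ask_ai, ask_human, max_rounds=10):
    """Iterate on a plan until neither party has remaining questions."""
    for _ in range(max_rounds):
        ai_questions = ask_ai(plan)        # AI reviews the plan, returns open questions
        human_questions = ask_human(plan)  # human reviews the plan, returns open questions
        if not ai_questions and not human_questions:
            return plan, True              # "I have no more questions. The plan is clear."
        # Answering questions refines the plan for the next round
        plan = plan + [f"answered: {q}" for q in ai_questions + human_questions]
    return plan, False                     # round limit hit: planning incomplete
```

The round limit is a safeguard, not a target; in practice most plans converge in a handful of iterations.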

Why This Works

  • Surfaces hidden assumptions - Questions expose what each party assumed but did not state
  • Catches gaps early - Missing pieces become obvious when you cannot answer a question
  • Builds shared understanding - Both parties end up with the same mental model
  • Prevents mid-execution surprises - Thorough questioning means thorough thinking

The Test

Before starting execution, both parties should be able to say honestly:

  • "I understand every task in this plan"
  • "I know what success looks like"
  • "I have no questions about scope, approach, or verification"

If either party has remaining questions, keep iterating.

The Planning-Execution Boundary

Planning and Execution are distinct phases with different characteristics.

Understanding this boundary is crucial for effective AI collaboration:

  • Planning Mode: Collaborative iteration, questions expected and welcomed, research happens here, approvals given here, back-and-forth dialogue
  • Execution Mode: Delegated work, questions should not occur, research already complete, approvals already given, smooth uninterrupted flow

Why This Matters

During execution, stopping to research or await approval disrupts productive flow. The AI should be able to complete tasks in a steady rhythm without interruption.

The Goal of Planning: Produce execution-ready tasks that contain all needed context, require no further research, and need no additional approval.

The Handoff

Planning ends when:

  • Neither party has remaining questions
  • Approach is approved
  • Tasks are self-contained and execution-ready
  • Human explicitly says "Go"

After the handoff, execution should flow smoothly. If the AI must stop to ask questions during execution, that is a signal that planning was incomplete.
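One way to make "execution-ready" concrete is to encode it as a check on each task. The sketch below is illustrative, assuming a simple task record; the field names are not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str       # what to do, with enough context to act alone
    files: list            # concrete file paths the task touches
    verification: str      # how completion will be proven
    open_questions: list = field(default_factory=list)

    def execution_ready(self) -> bool:
        """A task is handoff-ready only if its context is complete
        and no questions remain."""
        return bool(self.description and self.files and self.verification
                    and not self.open_questions)
```

A plan is ready for "Go" only when every task passes this check; a single open question on any task sends the whole plan back to planning mode.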

Question Exhaustion: The Six Domains

Ask questions across six domains until neither party can think of any more:

Domain 1: Scope Definition

  • What exactly are we building?
  • What are the boundaries?
  • What is explicitly OUT of scope?

Domain 2: Approach Analysis

  • What approaches exist?
  • What are the trade-offs of each?
  • Why this approach over others?

Domain 3: Dependency Mapping

  • What must exist first?
  • What blocks what?
  • What can be parallelized?

Domain 4: Risk Assessment

  • What could go wrong?
  • What are the unknowns?
  • What is the recovery plan if something fails?

Domain 5: Effort Analysis

  • What effort is required per task?
  • What skills or knowledge are needed?
  • Are there blocking constraints?

Domain 6: Validation Planning

  • How do we know each task is done?
  • How do we verify correctness?
  • What tests need to exist?

The Key: Keep asking until you genuinely cannot think of another question in any domain.
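A lightweight way to audit coverage of the six domains is to track which ones have had questions asked at all. This sketch flags untouched domains; the domain names mirror the list above.

```python
# Question exhaustion as a coverage check: domains with no questions
# asked are the likeliest places for hidden gaps.
DOMAINS = ("scope", "approach", "dependencies", "risk", "effort", "validation")

def uncovered_domains(questions_asked: dict) -> list:
    """Return domains where no questions were raised during planning."""
    return [d for d in DOMAINS if not questions_asked.get(d)]
```

An empty result does not prove the plan is complete, but a non-empty one proves questioning was not exhaustive.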

Work-Type-Aware Planning

Different types of work require different planning approaches. Identify the work type before planning:

Bug Fixing: Investigation IS Planning

For bugs, the investigation to find root cause IS the planning phase:

PLANNING:
├── Reproduce the bug
├── Characterize: what works vs what doesn't
├── Investigate: trace to root cause
├── Present root cause finding
├── Propose minimal fix
├── Get approval on fix approach
└── Create 2-4 small fix tasks

EXECUTION:
└── Apply fix, add tests, verify

Key Insight: Do not create "investigation tasks." If you are still investigating, you are still planning.

Feature Implementation

Research existing patterns and design the approach during planning; then execute the build.

Refactoring: Audit IS Planning

PLANNING:
├── Find ALL occurrences (audit)
├── Define the transformation pattern
├── Present: "N files need this change"
├── Get approval on pattern
└── Create file/batch tasks

EXECUTION:
└── Apply pattern systematically (X of N)
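The audit step can itself be a small script. This is one possible sketch, assuming a regex describes the code being replaced; the pattern and file glob are examples to adapt.

```python
import re
from pathlib import Path

def audit(root: str, pattern: str, glob: str = "*.py") -> dict:
    """Find ALL occurrences before planning batch tasks: map each file
    to the line numbers matching the pattern to be transformed."""
    rx = re.compile(pattern)
    hits = {}
    for path in Path(root).rglob(glob):
        lines = [i + 1 for i, line in enumerate(path.read_text().splitlines())
                 if rx.search(line)]
        if lines:
            hits[str(path)] = lines
    return hits
```

The output gives you the "N files need this change" summary to present for approval, and a natural breakdown into per-file or per-batch execution tasks.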

Conversational Techniques

The words you use to communicate with your AI matter. These phrases have proven effective for controlling the planning-execution boundary.

Mode Declaration Phrases

Explicitly declare when you are in planning mode to prevent premature execution:

  • "We're planning" - Declares current state
  • "Consider this planning mode" - Formal mode entry
  • "Let's plan..." - Initiates planning
  • "We're going to stay in planning mode until..." - Sets boundary

Action Blockers

Prevent the AI from jumping to execution before you are ready:

  • "Don't do anything yet" - After requesting research
  • "Before you do anything, tell me..." - Gate before action
  • "Study this but don't make changes" - Research-only request
  • "Stop all work and do nothing, let's plan..." - Emergency brake

Understanding Verification

Verify the AI understands before proceeding:

  • "Tell me what you think I'm asking for" - Check understanding
  • "Feed this back to me" - Verify interpretation
  • "How does this sound?" - Invite critique
  • "Do you have any questions?" - Surface gaps

Deep Analysis Request

When you need thorough thinking, not a quick answer:

  • "Think through this" - Request reasoning
  • "Reason through" - Analyze trade-offs
  • "UltraThink" - Extended deep analysis
  • "Use deep thinking on this" - High-stakes decision

When to use extended thinking: Production code changes, complex bug investigation, architectural decisions, anything where being wrong is costly.

Execution Approval

Clear signals that planning is complete:

  • "Go ahead and..." - Proceed with specific action
  • "Let's proceed with this plan" - Full approval
  • "Make the change" - Execute now
  • "Yes, let's do that" - Confirmation

Essential Plan Attributes

Every effective plan should include:

Must Have

  1. Clear Goal - What are we achieving? (1-2 sentences)
  2. Explicit Scope - What is in AND what is out
  3. Concrete Tasks - Specific, actionable items with file paths
  4. Verification Criteria - How to prove completion

Should Have

  1. Priority/Phase Structure - What order, what is critical vs nice-to-have
  2. Dependencies - What blocks what
  3. Effort Estimates - Rough sizing
  4. Risk Identification - What could go wrong

For AI-Assisted Work

  1. Roles Defined - What AI decides vs asks
  2. Handoff Readiness - Confirmation that tasks are execution-ready
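The must-have attributes lend themselves to a checkable structure. The sketch below is one illustrative encoding, not a required format; the field names mirror the list above.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    goal: str             # what we are achieving, in 1-2 sentences
    in_scope: list        # what is explicitly in
    out_of_scope: list    # what is explicitly out, not just implied
    tasks: list           # concrete items with file paths
    verification: str     # how completion is proven

    def missing_must_haves(self) -> list:
        """Name any must-have attribute that is absent or empty."""
        required = {"goal": self.goal,
                    "scope": self.in_scope and self.out_of_scope,
                    "tasks": self.tasks,
                    "verification": self.verification}
        return [name for name, value in required.items() if not value]
```

Treating an empty out-of-scope list as a failure is deliberate: a plan that never states what it excludes has not defined its boundaries.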

Anti-Patterns to Avoid

1. Premature Execution

Do not start coding with questions remaining. If either party is unclear, keep planning.

2. Planning Without Research

Do not plan based on assumptions. Read the code first. Understand existing patterns before proposing changes.

3. Skipping Verification

"I'm pretty sure it works" is not verification. Always run the actual tests. Always verify the actual behavior.

4. Embedded Research in Tasks

Do not create tasks that require investigation. Research belongs in planning, not execution.

5. Stale Plans

A plan that does not match reality is worse than no plan. Update the plan or acknowledge it is abandoned.

Quick Reference Checklist

Before Planning

  • Understand the current state
  • Identify constraints
  • Gather requirements

During Planning

  • Use question exhaustion across all six domains
  • Document decisions and rationale
  • Identify dependencies
  • Create concrete, actionable tasks
  • Continue until neither party has questions

Before Execution (Handoff)

  • Both parties confirm understanding
  • Approach is approved
  • Tasks are self-contained
  • Human says "Go"

During Execution

  • One task at a time
  • Complete each task without stopping for approval
  • Verify before proceeding to next
  • If stuck, return to planning
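The execution checklist above reduces to a simple loop. This sketch assumes hypothetical `run` and `verify` callables standing in for doing the work and proving completion.

```python
# One task at a time; verify each before moving on; if verification
# fails, stop and return to planning rather than pushing forward.
def execute(tasks, run, verify):
    """run(task) performs the work; verify(task) proves completion."""
    for task in tasks:
        run(task)
        if not verify(task):
            return ("return_to_planning", task)  # stuck: planning was incomplete
    return ("done", None)
```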

The Mantra

Plan thoroughly, execute smoothly.
All questions answered before any work begins.
If execution requires stopping, planning was not complete.

This guide represents patterns that worked in practice. Adapt them to your context, develop your own variations, and always prioritize clear communication with your AI partner over rigid adherence to any methodology.
