Sub-Agents: Stop Getting Slop From Your AI
Overview
Sub-agents are one of the most vital features in OpenClaw, dramatically improving work quality and reducing "slop" output. By distributing tasks across parallel agents with focused contexts, you achieve better results, faster execution, and more reliable outputs.
Why Sub-Agents Matter
The Problem with Single-Agent Workflows
When one agent handles everything:
- Context overload - Agent tries to remember too much
- Rushed execution - Pressure to deliver results quickly leads to shortcuts
- Generic output - Lack of specialization produces boilerplate responses
- Higher error rates - No cross-checking or validation
The Sub-Agent Solution
- Massive improvement in quality - Specialized agents produce focused, accurate work
- Parallel execution - Multiple tasks run simultaneously, saving time
- Context optimization - Each sub-agent has minimal context, maximizing intelligence
- Built-in validation - Multiple agents can cross-check each other's work
- Reduced hallucination - If one agent hallucinates, others can correct it
How Sub-Agents Work
Architecture

┌─────────────────────────────────────┐
│ Main Agent (Orchestrator)           │
│ - Knows everything about you        │
│ - Plans and coordinates             │
│ - Synthesizes results               │
└──────────────┬──────────────────────┘
               │
       ┌───────┴───────┐
       │               │
┌──────▼──────┐ ┌──────▼──────┐
│ Sub-Agent 1 │ │ Sub-Agent 2 │
│  Research   │ │  Graphics   │
│  (Minimal   │ │  (Minimal   │
│   context)  │ │   context)  │
└─────────────┘ └─────────────┘
Key Principles
- Orchestrator knows you - Main agent has full context about your preferences
- Sub-agents are specialized - Each focuses on one specific task
- Minimal context per sub-agent - They don't need your life story
- Parallel execution - Work happens simultaneously
- Results aggregation - Main agent synthesizes outputs
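The principles above can be sketched as a fan-out/fan-in loop. This is a minimal sketch, not OpenClaw's actual API: `call_agent` is a hypothetical stand-in for however your setup invokes a sub-agent. The point is the shape — each sub-agent receives only its role and task (not the orchestrator's full history), work happens in parallel, and the orchestrator synthesizes the results.

```python
from concurrent.futures import ThreadPoolExecutor

def call_agent(role, task):
    # Hypothetical stand-in for a real sub-agent invocation.
    # Note the payload: only the role and the task, no life story.
    return f"[{role}] result for: {task}"

def orchestrate(tasks):
    # Fan out: run each specialized sub-agent in parallel.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {role: pool.submit(call_agent, role, task)
                   for role, task in tasks.items()}
        results = {role: f.result() for role, f in futures.items()}
    # Fan in: the orchestrator aggregates the focused outputs.
    return "\n".join(results[role] for role in tasks)

report = orchestrate({
    "research": "find latest data",
    "graphics": "draft SVG illustrations",
})
```

Swapping `ThreadPoolExecutor` for real async sub-agent calls doesn't change the structure: narrow payloads out, synthesis in.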
Enabling Sub-Agents
Default Behavior
Important: Sub-agents are NOT enabled by default. You must explicitly request them.
Basic Invocation
Use sub-agents to research this topic and create a presentation
Advanced Invocation
Can you make a presentation on [topic]?
- Send sub-agents to research why it's important
- Use another sub-agent to make the presentation
- Use other sub-agents to do the SVG graphics
Context Window Optimization
Why Sub-Agents Save Context
Main agent context:
- Your preferences: 50K tokens
- Project history: 30K tokens
- Current conversation: 20K tokens
- Total: 100K tokens
Sub-agent context:
- Task instructions: 5K tokens
- Relevant data only: 10K tokens
- Total: 15K tokens per sub-agent
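The arithmetic behind the savings is worth making explicit. Using the illustrative budgets above, even three parallel sub-agents together consume less than half the tokens of one monolithic context:

```python
# Illustrative token budgets from the section above (not measured values).
main_agent = {"preferences": 50_000, "history": 30_000, "conversation": 20_000}
sub_agent = {"instructions": 5_000, "relevant_data": 10_000}

main_total = sum(main_agent.values())   # 100K tokens in one context
sub_total = sum(sub_agent.values())     # 15K tokens per sub-agent

# Three parallel sub-agents still cost less than one bloated context.
three_subs = 3 * sub_total              # 45K tokens total
savings = 1 - three_subs / main_total   # 55% fewer tokens
```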
Performance Impact

Lower-end models (MiniMax, Qwen):
- Perform significantly better with smaller context
- Sub-agents keep them in the "smart zone" (under 40% context)
- Cost-effective for parallel workflows
Higher-end models (Opus, Sonnet):
- Still benefit from specialization
- Faster execution through parallelism
- Better quality through focused attention
Practical Examples
Example 1: Research and Presentation
Prompt:
Create a presentation on OpenClaw sub-agents.
Use parallel sub-agents for research, then synthesize
into a presentation with graphics.
What happens:
Main agent spawns:
- Research Agent 1: Reads OpenClaw documentation
- Research Agent 2: Searches web for examples
Research agents work in parallel:
- Agent 1 returns: Official documentation insights
- Agent 2 returns: Community use cases and patterns
Main agent spawns:
- Presentation Agent: Creates slides from research
- Graphics Agent: Generates SVG illustrations
Main agent synthesizes:
- Combines all outputs
- Delivers final presentation
Result: Comprehensive, well-researched presentation with custom graphics
Example 2: Code Review
Prompt:
Review this codebase using sub-agents:
- One for security analysis
- One for performance review
- One for code style consistency
Benefits:
- Each agent specializes in one aspect
- Parallel execution saves time
- Cross-validation catches more issues
Example 3: Content Creation
Prompt:
Write a blog post about AI trends.
Use sub-agents to:
- Research latest developments
- Analyze competitor content
- Generate outline
- Write sections in parallel
Result: Higher quality content with diverse perspectives
Model-Specific Considerations
MiniMax and Chinese Models
Critical for success:
- Sub-agents are essential for good results
- Main agent should explicitly instruct sub-agent usage
- May need to remind the agent daily to use sub-agents
Configuration tip: Add to your agent's memory or skills:
When handling complex tasks, always use parallel sub-agents
to optimize context and improve quality.
Claude Opus
Natural sub-agent usage:
- Understands when to spawn sub-agents automatically
- Better at orchestration
- Handles complex multi-agent workflows
Still benefits from:
- Explicit instructions for critical tasks
- Parallel execution for time savings
Sub-Agent Improvements (Latest Version)
Error Reporting
Old behavior:
- Sub-agent fails silently
- No notification
- Main agent waits indefinitely
New behavior:
- Failure notifications delivered immediately
- Error details provided
- Can retry failed sub-agents
Status Updates
You now receive real-time updates:
- "Sub-agent 1 started: Researching documentation"
- "Sub-agent 2 started: Searching web"
- "Sub-agent 1 completed: Found 15 relevant sources"
- "Sub-agent 2 failed: Retrying with adjusted parameters"
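With failure notifications available, a retry wrapper becomes straightforward. This is a sketch under assumptions: `FlakyAgent` is a test double standing in for a real sub-agent call, and the tightened-instructions string is illustrative. The loop mirrors the "retrying with adjusted parameters" status update above.

```python
class FlakyAgent:
    # Test double: fails twice, then succeeds, to exercise the retry path.
    def __init__(self):
        self.calls = 0

    def __call__(self, task):
        self.calls += 1
        if self.calls < 3:
            raise RuntimeError("empty output")
        return f"done: {task.splitlines()[0]}"

def run_with_retry(agent, task, attempts=3):
    last_error = None
    for _ in range(attempts):
        try:
            return agent(task)
        except Exception as err:
            last_error = err
            # Tighten instructions before retrying, mirroring
            # "retrying with adjusted parameters".
            task = f"{task}\nPrevious attempt failed: {err}; be more specific."
    raise RuntimeError(f"sub-agent failed after {attempts} attempts") from last_error

agent = FlakyAgent()
result = run_with_retry(agent, "Search web for examples")
```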
Cost Considerations
Token Usage
More work = More tokens:
- Each sub-agent consumes tokens
- Parallel execution means simultaneous API calls
- Total cost is higher than a single-agent approach
But:
- Each sub-agent uses fewer tokens (smaller context)
- Better results mean less rework
- Time savings offset cost increase
Cost Optimization
For MiniMax users:
- Subscription includes generous prompt allowance
- Sub-agents are cost-effective within plan limits
For Opus users:
- Monitor usage for expensive workflows
- Use sub-agents for high-value tasks
- Consider cheaper models for sub-agent tasks
Configuring Sub-Agent Models
Mixed Model Strategy
You can configure sub-agents to use different models:
Use Opus for main orchestration, but spawn
sub-agents using Sonnet for cost efficiency
Benefits:
- Main agent has full intelligence for coordination
- Sub-agents use cheaper models for focused tasks
- Significant cost savings on large workflows
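One way to express a mixed-model strategy is a role-to-model map. This is a hypothetical sketch, not an OpenClaw configuration format, and the model names are illustrative placeholders; check the model configuration guide for the real mechanism.

```python
# Hypothetical role-to-model routing for a mixed-model strategy.
MODEL_BY_ROLE = {
    "orchestrator": "opus-class-model",    # full intelligence for coordination
    "research": "sonnet-class-model",      # cheaper model for focused tasks
    "graphics": "sonnet-class-model",
}

def model_for(role):
    # Unknown roles fall back to the cheaper default, not the expensive one.
    return MODEL_BY_ROLE.get(role, "sonnet-class-model")
```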
Configuration: (Advanced topic - see model configuration guide)
Best Practices
1. Explicit Task Breakdown
Good:
Create a presentation using sub-agents:
- Research agent: Find latest data
- Analysis agent: Identify key trends
- Writing agent: Draft content
- Graphics agent: Create visuals
Bad:
Make a presentation (hoping agent uses sub-agents)
2. Parallel When Possible
Efficient:
Spawn all research agents in parallel, then synthesize
Inefficient:
Research topic 1, then topic 2, then topic 3 sequentially
3. Clear Success Criteria
Good:
Research agent should return:
- Key findings
- Notable opinions
- Links to sources
- Patterns across sources
- Gaps nobody is talking about
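Success criteria like these are easiest to enforce as a structured return type the main agent can validate. A minimal sketch using a dataclass — the field names mirror the checklist above, and the completeness rule (findings plus sources required) is one reasonable choice, not a fixed standard:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchResult:
    # Mirrors the success criteria: findings, opinions, sources, patterns, gaps.
    key_findings: list = field(default_factory=list)
    notable_opinions: list = field(default_factory=list)
    source_links: list = field(default_factory=list)
    patterns: list = field(default_factory=list)
    gaps: list = field(default_factory=list)

    def is_complete(self):
        # A result with no findings or no sources should be retried, not accepted.
        return bool(self.key_findings) and bool(self.source_links)

result = ResearchResult(
    key_findings=["Sub-agents keep per-task context small"],
    source_links=["https://example.com/docs"],
)
```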
Bad:
Research this topic
4. Validate Results
Main agent should:
- Cross-check sub-agent outputs
- Identify contradictions
- Synthesize coherent final result
- Flag low-confidence findings
5. Retry Failed Sub-Agents
With new error reporting:
Sub-agent 2 failed. Retry with more specific instructions.
Don't accept partial results - ensure all sub-agents complete successfully.
Common Pitfalls
Pitfall 1: Not Requesting Sub-Agents
Problem: Agent does everything itself, produces slop
Solution: Explicitly request sub-agent usage in prompt
Pitfall 2: Too Much Context Per Sub-Agent
Problem: Sub-agents receive full context, defeating the purpose
Solution: Main agent should send only task-relevant information
Pitfall 3: Sequential Execution
Problem: Sub-agents run one after another, wasting time
Solution: Request parallel execution explicitly
Pitfall 4: No Result Synthesis
Problem: Sub-agents return raw data, no coherent output
Solution: Main agent must synthesize and format results
Troubleshooting
"Sub-agents not being used"
Cause: Not explicitly requested, or agent doesn't understand
Solution:
- Add to prompt: "Use parallel sub-agents for this task"
- Add to agent memory: "Always use sub-agents for complex tasks"
- Upgrade to latest OpenClaw version
"Sub-agent failed with no output"
Cause: Old OpenClaw version, or task too vague
Solution:
- Update to latest version for error reporting
- Provide more specific sub-agent instructions
- Check sub-agent logs for details
"Results are inconsistent"
Cause: Sub-agents have different context or instructions
Solution:
- Ensure main agent provides consistent instructions
- Have main agent validate and reconcile differences
- Use structured output formats
"Too expensive"
Cause: Too many sub-agents or wrong model selection
Solution:
- Use sub-agents only for high-value tasks
- Configure cheaper models for sub-agent work
- Reduce number of parallel sub-agents
Advanced Patterns
Research Synthesis Pattern
1. Spawn 3-5 research agents with different sources
2. Each returns structured findings
3. Main agent identifies:
- Common themes
- Contradictions
- Unique insights
- Gaps in coverage
4. Synthesize into executive summary
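The synthesis step of this pattern reduces to set operations over the agents' findings. A sketch, assuming each research agent returns a plain set of themes:

```python
def synthesize(findings):
    # findings: dict mapping agent name -> set of themes it reported.
    sets = list(findings.values())
    common = set.intersection(*sets)   # themes every agent saw
    unique = {
        # Themes only this agent reported: its unique insights.
        agent: themes - set().union(*(s for a, s in findings.items() if a != agent))
        for agent, themes in findings.items()
    }
    return common, unique

common, unique = synthesize({
    "agent1": {"context limits", "parallelism", "cost"},
    "agent2": {"context limits", "parallelism", "validation"},
})
```

Contradictions and coverage gaps need semantic judgment from the main agent, but this skeleton handles the mechanical part of steps 3-4.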
Validation Pattern
1. Main agent completes task
2. Spawn validation sub-agent
3. Validator checks:
- Accuracy
- Completeness
- Consistency
4. Main agent incorporates feedback
Iterative Refinement Pattern
1. Sub-agent produces draft
2. Main agent reviews
3. Spawn refinement sub-agent with specific feedback
4. Repeat until quality threshold met
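The refinement pattern is a loop with a quality gate. In this sketch the drafter and scorer are stand-ins (the scorer just counts sections so the example terminates); in practice a validation sub-agent would score the draft and the feedback would be its specific critique.

```python
def refine(drafter, scorer, threshold, max_rounds=5):
    text, feedback, rounds = "", "", 0
    for rounds in range(1, max_rounds + 1):
        text = drafter(feedback)     # sub-agent produces a draft
        score = scorer(text)         # validator sub-agent reviews it
        if score >= threshold:       # quality threshold met: stop
            return text, rounds
        feedback = f"score {score} below {threshold}; add depth"
    return text, rounds              # best effort after max_rounds

# Stand-ins: each feedback round adds a section; score = section count.
drafts = iter(["intro", "intro, body", "intro, body, sources"])
text, rounds = refine(lambda fb: next(drafts),
                      lambda t: t.count(",") + 1,
                      threshold=3)
```

Capping `max_rounds` matters: without it, a scorer that never reaches the threshold would loop (and bill) forever.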
Measuring Success
Quality Indicators
- Reduced hallucinations - Cross-validation catches errors
- More detailed output - Specialized agents go deeper
- Consistent formatting - Structured outputs maintained
- Faster completion - Parallel execution saves time
Before/After Comparison
Without sub-agents:
- Generic presentation
- Missing sources
- Boilerplate content
- 30 minutes execution time
With sub-agents:
- Detailed, researched presentation
- Cited sources with links
- Unique insights and analysis
- 15 minutes execution time (parallel)
Related Resources
Duration: 9 minutes
Difficulty: Intermediate
Video Reference: OpenClaw Sub-Agents EXPLAINED